Oct 11 03:37:00 localhost kernel: Linux version 5.14.0-621.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-11), GNU ld version 2.35.2-67.el9) #1 SMP PREEMPT_DYNAMIC Tue Sep 30 07:37:35 UTC 2025
Oct 11 03:37:00 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Oct 11 03:37:00 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-621.el9.x86_64 root=UUID=9839e2e1-98a2-4594-b609-79d514deb0a3 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct 11 03:37:00 localhost kernel: BIOS-provided physical RAM map:
Oct 11 03:37:00 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Oct 11 03:37:00 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Oct 11 03:37:00 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Oct 11 03:37:00 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Oct 11 03:37:00 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Oct 11 03:37:00 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct 11 03:37:00 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Oct 11 03:37:00 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Oct 11 03:37:00 localhost kernel: NX (Execute Disable) protection: active
Oct 11 03:37:00 localhost kernel: APIC: Static calls initialized
Oct 11 03:37:00 localhost kernel: SMBIOS 2.8 present.
Oct 11 03:37:00 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Oct 11 03:37:00 localhost kernel: Hypervisor detected: KVM
Oct 11 03:37:00 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct 11 03:37:00 localhost kernel: kvm-clock: using sched offset of 4081408101 cycles
Oct 11 03:37:00 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct 11 03:37:00 localhost kernel: tsc: Detected 2800.000 MHz processor
Oct 11 03:37:00 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Oct 11 03:37:00 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Oct 11 03:37:00 localhost kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Oct 11 03:37:00 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Oct 11 03:37:00 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Oct 11 03:37:00 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Oct 11 03:37:00 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Oct 11 03:37:00 localhost kernel: Using GB pages for direct mapping
Oct 11 03:37:00 localhost kernel: RAMDISK: [mem 0x2d858000-0x32c23fff]
Oct 11 03:37:00 localhost kernel: ACPI: Early table checksum verification disabled
Oct 11 03:37:00 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Oct 11 03:37:00 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 11 03:37:00 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 11 03:37:00 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 11 03:37:00 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Oct 11 03:37:00 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 11 03:37:00 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 11 03:37:00 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Oct 11 03:37:00 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Oct 11 03:37:00 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Oct 11 03:37:00 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Oct 11 03:37:00 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Oct 11 03:37:00 localhost kernel: No NUMA configuration found
Oct 11 03:37:00 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Oct 11 03:37:00 localhost kernel: NODE_DATA(0) allocated [mem 0x23ffd3000-0x23fffdfff]
Oct 11 03:37:00 localhost kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Oct 11 03:37:00 localhost kernel: Zone ranges:
Oct 11 03:37:00 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Oct 11 03:37:00 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Oct 11 03:37:00 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000023fffffff]
Oct 11 03:37:00 localhost kernel:   Device   empty
Oct 11 03:37:00 localhost kernel: Movable zone start for each node
Oct 11 03:37:00 localhost kernel: Early memory node ranges
Oct 11 03:37:00 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Oct 11 03:37:00 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Oct 11 03:37:00 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000023fffffff]
Oct 11 03:37:00 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Oct 11 03:37:00 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 11 03:37:00 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Oct 11 03:37:00 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Oct 11 03:37:00 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Oct 11 03:37:00 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct 11 03:37:00 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct 11 03:37:00 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Oct 11 03:37:00 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct 11 03:37:00 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct 11 03:37:00 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct 11 03:37:00 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct 11 03:37:00 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct 11 03:37:00 localhost kernel: TSC deadline timer available
Oct 11 03:37:00 localhost kernel: CPU topo: Max. logical packages:   8
Oct 11 03:37:00 localhost kernel: CPU topo: Max. logical dies:       8
Oct 11 03:37:00 localhost kernel: CPU topo: Max. dies per package:   1
Oct 11 03:37:00 localhost kernel: CPU topo: Max. threads per core:   1
Oct 11 03:37:00 localhost kernel: CPU topo: Num. cores per package:     1
Oct 11 03:37:00 localhost kernel: CPU topo: Num. threads per package:   1
Oct 11 03:37:00 localhost kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Oct 11 03:37:00 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Oct 11 03:37:00 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Oct 11 03:37:00 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Oct 11 03:37:00 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Oct 11 03:37:00 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Oct 11 03:37:00 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Oct 11 03:37:00 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Oct 11 03:37:00 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Oct 11 03:37:00 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Oct 11 03:37:00 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Oct 11 03:37:00 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Oct 11 03:37:00 localhost kernel: Booting paravirtualized kernel on KVM
Oct 11 03:37:00 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct 11 03:37:00 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Oct 11 03:37:00 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Oct 11 03:37:00 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
Oct 11 03:37:00 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Oct 11 03:37:00 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Oct 11 03:37:00 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-621.el9.x86_64 root=UUID=9839e2e1-98a2-4594-b609-79d514deb0a3 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct 11 03:37:00 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-621.el9.x86_64", will be passed to user space.
Oct 11 03:37:00 localhost kernel: random: crng init done
Oct 11 03:37:00 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Oct 11 03:37:00 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 11 03:37:00 localhost kernel: Fallback order for Node 0: 0 
Oct 11 03:37:00 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Oct 11 03:37:00 localhost kernel: Policy zone: Normal
Oct 11 03:37:00 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 11 03:37:00 localhost kernel: software IO TLB: area num 8.
Oct 11 03:37:00 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Oct 11 03:37:00 localhost kernel: ftrace: allocating 49162 entries in 193 pages
Oct 11 03:37:00 localhost kernel: ftrace: allocated 193 pages with 3 groups
Oct 11 03:37:00 localhost kernel: Dynamic Preempt: voluntary
Oct 11 03:37:00 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 11 03:37:00 localhost kernel: rcu:         RCU event tracing is enabled.
Oct 11 03:37:00 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Oct 11 03:37:00 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Oct 11 03:37:00 localhost kernel:         Rude variant of Tasks RCU enabled.
Oct 11 03:37:00 localhost kernel:         Tracing variant of Tasks RCU enabled.
Oct 11 03:37:00 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 11 03:37:00 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Oct 11 03:37:00 localhost kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct 11 03:37:00 localhost kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct 11 03:37:00 localhost kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct 11 03:37:00 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Oct 11 03:37:00 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 11 03:37:00 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Oct 11 03:37:00 localhost kernel: Console: colour VGA+ 80x25
Oct 11 03:37:00 localhost kernel: printk: console [ttyS0] enabled
Oct 11 03:37:00 localhost kernel: ACPI: Core revision 20230331
Oct 11 03:37:00 localhost kernel: APIC: Switch to symmetric I/O mode setup
Oct 11 03:37:00 localhost kernel: x2apic enabled
Oct 11 03:37:00 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Oct 11 03:37:00 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Oct 11 03:37:00 localhost kernel: Calibrating delay loop (skipped) preset value.. 5600.00 BogoMIPS (lpj=2800000)
Oct 11 03:37:00 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Oct 11 03:37:00 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Oct 11 03:37:00 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Oct 11 03:37:00 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct 11 03:37:00 localhost kernel: Spectre V2 : Mitigation: Retpolines
Oct 11 03:37:00 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Oct 11 03:37:00 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Oct 11 03:37:00 localhost kernel: RETBleed: Mitigation: untrained return thunk
Oct 11 03:37:00 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct 11 03:37:00 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Oct 11 03:37:00 localhost kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Oct 11 03:37:00 localhost kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Oct 11 03:37:00 localhost kernel: x86/bugs: return thunk changed
Oct 11 03:37:00 localhost kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Oct 11 03:37:00 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct 11 03:37:00 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct 11 03:37:00 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct 11 03:37:00 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Oct 11 03:37:00 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Oct 11 03:37:00 localhost kernel: Freeing SMP alternatives memory: 40K
Oct 11 03:37:00 localhost kernel: pid_max: default: 32768 minimum: 301
Oct 11 03:37:00 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Oct 11 03:37:00 localhost kernel: landlock: Up and running.
Oct 11 03:37:00 localhost kernel: Yama: becoming mindful.
Oct 11 03:37:00 localhost kernel: SELinux:  Initializing.
Oct 11 03:37:00 localhost kernel: LSM support for eBPF active
Oct 11 03:37:00 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct 11 03:37:00 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct 11 03:37:00 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Oct 11 03:37:00 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Oct 11 03:37:00 localhost kernel: ... version:                0
Oct 11 03:37:00 localhost kernel: ... bit width:              48
Oct 11 03:37:00 localhost kernel: ... generic registers:      6
Oct 11 03:37:00 localhost kernel: ... value mask:             0000ffffffffffff
Oct 11 03:37:00 localhost kernel: ... max period:             00007fffffffffff
Oct 11 03:37:00 localhost kernel: ... fixed-purpose events:   0
Oct 11 03:37:00 localhost kernel: ... event mask:             000000000000003f
Oct 11 03:37:00 localhost kernel: signal: max sigframe size: 1776
Oct 11 03:37:00 localhost kernel: rcu: Hierarchical SRCU implementation.
Oct 11 03:37:00 localhost kernel: rcu:         Max phase no-delay instances is 400.
Oct 11 03:37:00 localhost kernel: smp: Bringing up secondary CPUs ...
Oct 11 03:37:00 localhost kernel: smpboot: x86: Booting SMP configuration:
Oct 11 03:37:00 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Oct 11 03:37:00 localhost kernel: smp: Brought up 1 node, 8 CPUs
Oct 11 03:37:00 localhost kernel: smpboot: Total of 8 processors activated (44800.00 BogoMIPS)
Oct 11 03:37:00 localhost kernel: node 0 deferred pages initialised in 11ms
Oct 11 03:37:00 localhost kernel: Memory: 7765716K/8388068K available (16384K kernel code, 5784K rwdata, 13864K rodata, 4188K init, 7196K bss, 616220K reserved, 0K cma-reserved)
Oct 11 03:37:00 localhost kernel: devtmpfs: initialized
Oct 11 03:37:00 localhost kernel: x86/mm: Memory block size: 128MB
Oct 11 03:37:00 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 11 03:37:00 localhost kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Oct 11 03:37:00 localhost kernel: pinctrl core: initialized pinctrl subsystem
Oct 11 03:37:00 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 11 03:37:00 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Oct 11 03:37:00 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Oct 11 03:37:00 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Oct 11 03:37:00 localhost kernel: audit: initializing netlink subsys (disabled)
Oct 11 03:37:00 localhost kernel: audit: type=2000 audit(1760153819.382:1): state=initialized audit_enabled=0 res=1
Oct 11 03:37:00 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Oct 11 03:37:00 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 11 03:37:00 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Oct 11 03:37:00 localhost kernel: cpuidle: using governor menu
Oct 11 03:37:00 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 11 03:37:00 localhost kernel: PCI: Using configuration type 1 for base access
Oct 11 03:37:00 localhost kernel: PCI: Using configuration type 1 for extended access
Oct 11 03:37:00 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct 11 03:37:00 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 11 03:37:00 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Oct 11 03:37:00 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 11 03:37:00 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct 11 03:37:00 localhost kernel: Demotion targets for Node 0: null
Oct 11 03:37:00 localhost kernel: cryptd: max_cpu_qlen set to 1000
Oct 11 03:37:00 localhost kernel: ACPI: Added _OSI(Module Device)
Oct 11 03:37:00 localhost kernel: ACPI: Added _OSI(Processor Device)
Oct 11 03:37:00 localhost kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct 11 03:37:00 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 11 03:37:00 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 11 03:37:00 localhost kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Oct 11 03:37:00 localhost kernel: ACPI: Interpreter enabled
Oct 11 03:37:00 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Oct 11 03:37:00 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Oct 11 03:37:00 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct 11 03:37:00 localhost kernel: PCI: Using E820 reservations for host bridge windows
Oct 11 03:37:00 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Oct 11 03:37:00 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 11 03:37:00 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Oct 11 03:37:00 localhost kernel: acpiphp: Slot [3] registered
Oct 11 03:37:00 localhost kernel: acpiphp: Slot [4] registered
Oct 11 03:37:00 localhost kernel: acpiphp: Slot [5] registered
Oct 11 03:37:00 localhost kernel: acpiphp: Slot [6] registered
Oct 11 03:37:00 localhost kernel: acpiphp: Slot [7] registered
Oct 11 03:37:00 localhost kernel: acpiphp: Slot [8] registered
Oct 11 03:37:00 localhost kernel: acpiphp: Slot [9] registered
Oct 11 03:37:00 localhost kernel: acpiphp: Slot [10] registered
Oct 11 03:37:00 localhost kernel: acpiphp: Slot [11] registered
Oct 11 03:37:00 localhost kernel: acpiphp: Slot [12] registered
Oct 11 03:37:00 localhost kernel: acpiphp: Slot [13] registered
Oct 11 03:37:00 localhost kernel: acpiphp: Slot [14] registered
Oct 11 03:37:00 localhost kernel: acpiphp: Slot [15] registered
Oct 11 03:37:00 localhost kernel: acpiphp: Slot [16] registered
Oct 11 03:37:00 localhost kernel: acpiphp: Slot [17] registered
Oct 11 03:37:00 localhost kernel: acpiphp: Slot [18] registered
Oct 11 03:37:00 localhost kernel: acpiphp: Slot [19] registered
Oct 11 03:37:00 localhost kernel: acpiphp: Slot [20] registered
Oct 11 03:37:00 localhost kernel: acpiphp: Slot [21] registered
Oct 11 03:37:00 localhost kernel: acpiphp: Slot [22] registered
Oct 11 03:37:00 localhost kernel: acpiphp: Slot [23] registered
Oct 11 03:37:00 localhost kernel: acpiphp: Slot [24] registered
Oct 11 03:37:00 localhost kernel: acpiphp: Slot [25] registered
Oct 11 03:37:00 localhost kernel: acpiphp: Slot [26] registered
Oct 11 03:37:00 localhost kernel: acpiphp: Slot [27] registered
Oct 11 03:37:00 localhost kernel: acpiphp: Slot [28] registered
Oct 11 03:37:00 localhost kernel: acpiphp: Slot [29] registered
Oct 11 03:37:00 localhost kernel: acpiphp: Slot [30] registered
Oct 11 03:37:00 localhost kernel: acpiphp: Slot [31] registered
Oct 11 03:37:00 localhost kernel: PCI host bridge to bus 0000:00
Oct 11 03:37:00 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Oct 11 03:37:00 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Oct 11 03:37:00 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct 11 03:37:00 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Oct 11 03:37:00 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Oct 11 03:37:00 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 11 03:37:00 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Oct 11 03:37:00 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Oct 11 03:37:00 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Oct 11 03:37:00 localhost kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Oct 11 03:37:00 localhost kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Oct 11 03:37:00 localhost kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Oct 11 03:37:00 localhost kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Oct 11 03:37:00 localhost kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Oct 11 03:37:00 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Oct 11 03:37:00 localhost kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Oct 11 03:37:00 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Oct 11 03:37:00 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Oct 11 03:37:00 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Oct 11 03:37:00 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Oct 11 03:37:00 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Oct 11 03:37:00 localhost kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Oct 11 03:37:00 localhost kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Oct 11 03:37:00 localhost kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Oct 11 03:37:00 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct 11 03:37:00 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Oct 11 03:37:00 localhost kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Oct 11 03:37:00 localhost kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Oct 11 03:37:00 localhost kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Oct 11 03:37:00 localhost kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Oct 11 03:37:00 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Oct 11 03:37:00 localhost kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Oct 11 03:37:00 localhost kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Oct 11 03:37:00 localhost kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Oct 11 03:37:00 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Oct 11 03:37:00 localhost kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Oct 11 03:37:00 localhost kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Oct 11 03:37:00 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Oct 11 03:37:00 localhost kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Oct 11 03:37:00 localhost kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Oct 11 03:37:00 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct 11 03:37:00 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct 11 03:37:00 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct 11 03:37:00 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct 11 03:37:00 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Oct 11 03:37:00 localhost kernel: iommu: Default domain type: Translated
Oct 11 03:37:00 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct 11 03:37:00 localhost kernel: SCSI subsystem initialized
Oct 11 03:37:00 localhost kernel: ACPI: bus type USB registered
Oct 11 03:37:00 localhost kernel: usbcore: registered new interface driver usbfs
Oct 11 03:37:00 localhost kernel: usbcore: registered new interface driver hub
Oct 11 03:37:00 localhost kernel: usbcore: registered new device driver usb
Oct 11 03:37:00 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Oct 11 03:37:00 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Oct 11 03:37:00 localhost kernel: PTP clock support registered
Oct 11 03:37:00 localhost kernel: EDAC MC: Ver: 3.0.0
Oct 11 03:37:00 localhost kernel: NetLabel: Initializing
Oct 11 03:37:00 localhost kernel: NetLabel:  domain hash size = 128
Oct 11 03:37:00 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Oct 11 03:37:00 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Oct 11 03:37:00 localhost kernel: PCI: Using ACPI for IRQ routing
Oct 11 03:37:00 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Oct 11 03:37:00 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Oct 11 03:37:00 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Oct 11 03:37:00 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Oct 11 03:37:00 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Oct 11 03:37:00 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct 11 03:37:00 localhost kernel: vgaarb: loaded
Oct 11 03:37:00 localhost kernel: clocksource: Switched to clocksource kvm-clock
Oct 11 03:37:00 localhost kernel: VFS: Disk quotas dquot_6.6.0
Oct 11 03:37:00 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 11 03:37:00 localhost kernel: pnp: PnP ACPI init
Oct 11 03:37:00 localhost kernel: pnp 00:03: [dma 2]
Oct 11 03:37:00 localhost kernel: pnp: PnP ACPI: found 5 devices
Oct 11 03:37:00 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct 11 03:37:00 localhost kernel: NET: Registered PF_INET protocol family
Oct 11 03:37:00 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Oct 11 03:37:00 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Oct 11 03:37:00 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 11 03:37:00 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 11 03:37:00 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Oct 11 03:37:00 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Oct 11 03:37:00 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Oct 11 03:37:00 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct 11 03:37:00 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct 11 03:37:00 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 11 03:37:00 localhost kernel: NET: Registered PF_XDP protocol family
Oct 11 03:37:00 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Oct 11 03:37:00 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Oct 11 03:37:00 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct 11 03:37:00 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Oct 11 03:37:00 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Oct 11 03:37:00 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Oct 11 03:37:00 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Oct 11 03:37:00 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Oct 11 03:37:00 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 70924 usecs
Oct 11 03:37:00 localhost kernel: PCI: CLS 0 bytes, default 64
Oct 11 03:37:00 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Oct 11 03:37:00 localhost kernel: software IO TLB: mapped [mem 0x00000000a7600000-0x00000000ab600000] (64MB)
Oct 11 03:37:00 localhost kernel: ACPI: bus type thunderbolt registered
Oct 11 03:37:00 localhost kernel: Trying to unpack rootfs image as initramfs...
Oct 11 03:37:00 localhost kernel: Initialise system trusted keyrings
Oct 11 03:37:00 localhost kernel: Key type blacklist registered
Oct 11 03:37:00 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Oct 11 03:37:00 localhost kernel: zbud: loaded
Oct 11 03:37:00 localhost kernel: integrity: Platform Keyring initialized
Oct 11 03:37:00 localhost kernel: integrity: Machine keyring initialized
Oct 11 03:37:00 localhost kernel: Freeing initrd memory: 85808K
Oct 11 03:37:00 localhost kernel: NET: Registered PF_ALG protocol family
Oct 11 03:37:00 localhost kernel: xor: automatically using best checksumming function   avx       
Oct 11 03:37:00 localhost kernel: Key type asymmetric registered
Oct 11 03:37:00 localhost kernel: Asymmetric key parser 'x509' registered
Oct 11 03:37:00 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Oct 11 03:37:00 localhost kernel: io scheduler mq-deadline registered
Oct 11 03:37:00 localhost kernel: io scheduler kyber registered
Oct 11 03:37:00 localhost kernel: io scheduler bfq registered
Oct 11 03:37:00 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Oct 11 03:37:00 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Oct 11 03:37:00 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Oct 11 03:37:00 localhost kernel: ACPI: button: Power Button [PWRF]
Oct 11 03:37:00 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Oct 11 03:37:00 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Oct 11 03:37:00 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Oct 11 03:37:00 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 11 03:37:00 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct 11 03:37:00 localhost kernel: Non-volatile memory driver v1.3
Oct 11 03:37:00 localhost kernel: rdac: device handler registered
Oct 11 03:37:00 localhost kernel: hp_sw: device handler registered
Oct 11 03:37:00 localhost kernel: emc: device handler registered
Oct 11 03:37:00 localhost kernel: alua: device handler registered
Oct 11 03:37:00 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Oct 11 03:37:00 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Oct 11 03:37:00 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Oct 11 03:37:00 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Oct 11 03:37:00 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Oct 11 03:37:00 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Oct 11 03:37:00 localhost kernel: usb usb1: Product: UHCI Host Controller
Oct 11 03:37:00 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-621.el9.x86_64 uhci_hcd
Oct 11 03:37:00 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Oct 11 03:37:00 localhost kernel: hub 1-0:1.0: USB hub found
Oct 11 03:37:00 localhost kernel: hub 1-0:1.0: 2 ports detected
Oct 11 03:37:00 localhost kernel: usbcore: registered new interface driver usbserial_generic
Oct 11 03:37:00 localhost kernel: usbserial: USB Serial support registered for generic
Oct 11 03:37:00 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct 11 03:37:00 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct 11 03:37:00 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct 11 03:37:00 localhost kernel: mousedev: PS/2 mouse device common for all mice
Oct 11 03:37:00 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Oct 11 03:37:00 localhost kernel: rtc_cmos 00:04: registered as rtc0
Oct 11 03:37:00 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Oct 11 03:37:00 localhost kernel: rtc_cmos 00:04: setting system clock to 2025-10-11T03:36:59 UTC (1760153819)
Oct 11 03:37:00 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Oct 11 03:37:00 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Oct 11 03:37:00 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Oct 11 03:37:00 localhost kernel: usbcore: registered new interface driver usbhid
Oct 11 03:37:00 localhost kernel: usbhid: USB HID core driver
Oct 11 03:37:00 localhost kernel: drop_monitor: Initializing network drop monitor service
Oct 11 03:37:00 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Oct 11 03:37:00 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Oct 11 03:37:00 localhost kernel: Initializing XFRM netlink socket
Oct 11 03:37:00 localhost kernel: NET: Registered PF_INET6 protocol family
Oct 11 03:37:00 localhost kernel: Segment Routing with IPv6
Oct 11 03:37:00 localhost kernel: NET: Registered PF_PACKET protocol family
Oct 11 03:37:00 localhost kernel: mpls_gso: MPLS GSO support
Oct 11 03:37:00 localhost kernel: IPI shorthand broadcast: enabled
Oct 11 03:37:00 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Oct 11 03:37:00 localhost kernel: AES CTR mode by8 optimization enabled
Oct 11 03:37:00 localhost kernel: sched_clock: Marking stable (1281018950, 144729080)->(1550586910, -124838880)
Oct 11 03:37:00 localhost kernel: registered taskstats version 1
Oct 11 03:37:00 localhost kernel: Loading compiled-in X.509 certificates
Oct 11 03:37:00 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 72f99a463516b0dfb027e50caab189f607ef1bc9'
Oct 11 03:37:00 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Oct 11 03:37:00 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Oct 11 03:37:00 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Oct 11 03:37:00 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Oct 11 03:37:00 localhost kernel: Demotion targets for Node 0: null
Oct 11 03:37:00 localhost kernel: page_owner is disabled
Oct 11 03:37:00 localhost kernel: Key type .fscrypt registered
Oct 11 03:37:00 localhost kernel: Key type fscrypt-provisioning registered
Oct 11 03:37:00 localhost kernel: Key type big_key registered
Oct 11 03:37:00 localhost kernel: Key type encrypted registered
Oct 11 03:37:00 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 11 03:37:00 localhost kernel: Loading compiled-in module X.509 certificates
Oct 11 03:37:00 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 72f99a463516b0dfb027e50caab189f607ef1bc9'
Oct 11 03:37:00 localhost kernel: ima: Allocated hash algorithm: sha256
Oct 11 03:37:00 localhost kernel: ima: No architecture policies found
Oct 11 03:37:00 localhost kernel: evm: Initialising EVM extended attributes:
Oct 11 03:37:00 localhost kernel: evm: security.selinux
Oct 11 03:37:00 localhost kernel: evm: security.SMACK64 (disabled)
Oct 11 03:37:00 localhost kernel: evm: security.SMACK64EXEC (disabled)
Oct 11 03:37:00 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Oct 11 03:37:00 localhost kernel: evm: security.SMACK64MMAP (disabled)
Oct 11 03:37:00 localhost kernel: evm: security.apparmor (disabled)
Oct 11 03:37:00 localhost kernel: evm: security.ima
Oct 11 03:37:00 localhost kernel: evm: security.capability
Oct 11 03:37:00 localhost kernel: evm: HMAC attrs: 0x1
Oct 11 03:37:00 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Oct 11 03:37:00 localhost kernel: Running certificate verification RSA selftest
Oct 11 03:37:00 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Oct 11 03:37:00 localhost kernel: Running certificate verification ECDSA selftest
Oct 11 03:37:00 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Oct 11 03:37:00 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Oct 11 03:37:00 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Oct 11 03:37:00 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Oct 11 03:37:00 localhost kernel: usb 1-1: Manufacturer: QEMU
Oct 11 03:37:00 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Oct 11 03:37:00 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Oct 11 03:37:00 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Oct 11 03:37:00 localhost kernel: clk: Disabling unused clocks
Oct 11 03:37:00 localhost kernel: Freeing unused decrypted memory: 2028K
Oct 11 03:37:00 localhost kernel: Freeing unused kernel image (initmem) memory: 4188K
Oct 11 03:37:00 localhost kernel: Write protecting the kernel read-only data: 30720k
Oct 11 03:37:00 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 472K
Oct 11 03:37:00 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Oct 11 03:37:00 localhost kernel: Run /init as init process
Oct 11 03:37:00 localhost kernel:   with arguments:
Oct 11 03:37:00 localhost kernel:     /init
Oct 11 03:37:00 localhost kernel:   with environment:
Oct 11 03:37:00 localhost kernel:     HOME=/
Oct 11 03:37:00 localhost kernel:     TERM=linux
Oct 11 03:37:00 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-621.el9.x86_64
Oct 11 03:37:00 localhost systemd[1]: systemd 252-57.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct 11 03:37:00 localhost systemd[1]: Detected virtualization kvm.
Oct 11 03:37:00 localhost systemd[1]: Detected architecture x86-64.
Oct 11 03:37:00 localhost systemd[1]: Running in initrd.
Oct 11 03:37:00 localhost systemd[1]: No hostname configured, using default hostname.
Oct 11 03:37:00 localhost systemd[1]: Hostname set to <localhost>.
Oct 11 03:37:00 localhost systemd[1]: Initializing machine ID from VM UUID.
Oct 11 03:37:00 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Oct 11 03:37:00 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Oct 11 03:37:00 localhost systemd[1]: Reached target Local Encrypted Volumes.
Oct 11 03:37:00 localhost systemd[1]: Reached target Initrd /usr File System.
Oct 11 03:37:00 localhost systemd[1]: Reached target Local File Systems.
Oct 11 03:37:00 localhost systemd[1]: Reached target Path Units.
Oct 11 03:37:00 localhost systemd[1]: Reached target Slice Units.
Oct 11 03:37:00 localhost systemd[1]: Reached target Swaps.
Oct 11 03:37:00 localhost systemd[1]: Reached target Timer Units.
Oct 11 03:37:00 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Oct 11 03:37:00 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Oct 11 03:37:00 localhost systemd[1]: Listening on Journal Socket.
Oct 11 03:37:00 localhost systemd[1]: Listening on udev Control Socket.
Oct 11 03:37:00 localhost systemd[1]: Listening on udev Kernel Socket.
Oct 11 03:37:00 localhost systemd[1]: Reached target Socket Units.
Oct 11 03:37:00 localhost systemd[1]: Starting Create List of Static Device Nodes...
Oct 11 03:37:00 localhost systemd[1]: Starting Journal Service...
Oct 11 03:37:00 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Oct 11 03:37:00 localhost systemd[1]: Starting Apply Kernel Variables...
Oct 11 03:37:00 localhost systemd[1]: Starting Create System Users...
Oct 11 03:37:00 localhost systemd[1]: Starting Setup Virtual Console...
Oct 11 03:37:00 localhost systemd[1]: Finished Create List of Static Device Nodes.
Oct 11 03:37:00 localhost systemd[1]: Finished Apply Kernel Variables.
Oct 11 03:37:00 localhost systemd[1]: Finished Create System Users.
Oct 11 03:37:00 localhost systemd-journald[307]: Journal started
Oct 11 03:37:00 localhost systemd-journald[307]: Runtime Journal (/run/log/journal/53cb9e9d266844739499ec86a0f02be2) is 8.0M, max 153.6M, 145.6M free.
Oct 11 03:37:00 localhost systemd-sysusers[311]: Creating group 'users' with GID 100.
Oct 11 03:37:00 localhost systemd-sysusers[311]: Creating group 'dbus' with GID 81.
Oct 11 03:37:00 localhost systemd-sysusers[311]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Oct 11 03:37:00 localhost systemd[1]: Started Journal Service.
Oct 11 03:37:00 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Oct 11 03:37:00 localhost systemd[1]: Starting Create Volatile Files and Directories...
Oct 11 03:37:00 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Oct 11 03:37:00 localhost systemd[1]: Finished Create Volatile Files and Directories.
Oct 11 03:37:00 localhost systemd[1]: Finished Setup Virtual Console.
Oct 11 03:37:00 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Oct 11 03:37:00 localhost systemd[1]: Starting dracut cmdline hook...
Oct 11 03:37:00 localhost dracut-cmdline[328]: dracut-9 dracut-057-102.git20250818.el9
Oct 11 03:37:00 localhost dracut-cmdline[328]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-621.el9.x86_64 root=UUID=9839e2e1-98a2-4594-b609-79d514deb0a3 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct 11 03:37:00 localhost systemd[1]: Finished dracut cmdline hook.
Oct 11 03:37:00 localhost systemd[1]: Starting dracut pre-udev hook...
Oct 11 03:37:00 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 11 03:37:00 localhost kernel: device-mapper: uevent: version 1.0.3
Oct 11 03:37:00 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Oct 11 03:37:00 localhost kernel: RPC: Registered named UNIX socket transport module.
Oct 11 03:37:00 localhost kernel: RPC: Registered udp transport module.
Oct 11 03:37:00 localhost kernel: RPC: Registered tcp transport module.
Oct 11 03:37:00 localhost kernel: RPC: Registered tcp-with-tls transport module.
Oct 11 03:37:00 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Oct 11 03:37:01 localhost rpc.statd[444]: Version 2.5.4 starting
Oct 11 03:37:01 localhost rpc.statd[444]: Initializing NSM state
Oct 11 03:37:01 localhost rpc.idmapd[449]: Setting log level to 0
Oct 11 03:37:01 localhost systemd[1]: Finished dracut pre-udev hook.
Oct 11 03:37:01 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Oct 11 03:37:01 localhost systemd-udevd[462]: Using default interface naming scheme 'rhel-9.0'.
Oct 11 03:37:01 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Oct 11 03:37:01 localhost systemd[1]: Starting dracut pre-trigger hook...
Oct 11 03:37:01 localhost systemd[1]: Finished dracut pre-trigger hook.
Oct 11 03:37:01 localhost systemd[1]: Starting Coldplug All udev Devices...
Oct 11 03:37:01 localhost systemd[1]: Created slice Slice /system/modprobe.
Oct 11 03:37:01 localhost systemd[1]: Starting Load Kernel Module configfs...
Oct 11 03:37:01 localhost systemd[1]: Finished Coldplug All udev Devices.
Oct 11 03:37:01 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 11 03:37:01 localhost systemd[1]: Finished Load Kernel Module configfs.
Oct 11 03:37:01 localhost systemd[1]: Mounting Kernel Configuration File System...
Oct 11 03:37:01 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Oct 11 03:37:01 localhost systemd[1]: Reached target Network.
Oct 11 03:37:01 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Oct 11 03:37:01 localhost systemd[1]: Starting dracut initqueue hook...
Oct 11 03:37:01 localhost systemd[1]: Mounted Kernel Configuration File System.
Oct 11 03:37:01 localhost systemd[1]: Reached target System Initialization.
Oct 11 03:37:01 localhost systemd[1]: Reached target Basic System.
Oct 11 03:37:01 localhost kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Oct 11 03:37:01 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Oct 11 03:37:01 localhost kernel:  vda: vda1
Oct 11 03:37:01 localhost kernel: libata version 3.00 loaded.
Oct 11 03:37:01 localhost systemd-udevd[489]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 03:37:01 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Oct 11 03:37:01 localhost kernel: scsi host0: ata_piix
Oct 11 03:37:01 localhost kernel: scsi host1: ata_piix
Oct 11 03:37:01 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Oct 11 03:37:01 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Oct 11 03:37:01 localhost systemd[1]: Found device /dev/disk/by-uuid/9839e2e1-98a2-4594-b609-79d514deb0a3.
Oct 11 03:37:01 localhost systemd[1]: Reached target Initrd Root Device.
Oct 11 03:37:01 localhost kernel: ata1: found unknown device (class 0)
Oct 11 03:37:01 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Oct 11 03:37:01 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Oct 11 03:37:01 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Oct 11 03:37:01 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Oct 11 03:37:01 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Oct 11 03:37:01 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Oct 11 03:37:01 localhost systemd[1]: Finished dracut initqueue hook.
Oct 11 03:37:01 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Oct 11 03:37:01 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Oct 11 03:37:01 localhost systemd[1]: Reached target Remote File Systems.
Oct 11 03:37:01 localhost systemd[1]: Starting dracut pre-mount hook...
Oct 11 03:37:01 localhost systemd[1]: Finished dracut pre-mount hook.
Oct 11 03:37:01 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/9839e2e1-98a2-4594-b609-79d514deb0a3...
Oct 11 03:37:01 localhost systemd-fsck[557]: /usr/sbin/fsck.xfs: XFS file system.
Oct 11 03:37:01 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/9839e2e1-98a2-4594-b609-79d514deb0a3.
Oct 11 03:37:01 localhost systemd[1]: Mounting /sysroot...
Oct 11 03:37:02 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Oct 11 03:37:02 localhost kernel: XFS (vda1): Mounting V5 Filesystem 9839e2e1-98a2-4594-b609-79d514deb0a3
Oct 11 03:37:02 localhost kernel: XFS (vda1): Ending clean mount
Oct 11 03:37:02 localhost systemd[1]: Mounted /sysroot.
Oct 11 03:37:02 localhost systemd[1]: Reached target Initrd Root File System.
Oct 11 03:37:02 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Oct 11 03:37:02 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 11 03:37:02 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Oct 11 03:37:02 localhost systemd[1]: Reached target Initrd File Systems.
Oct 11 03:37:02 localhost systemd[1]: Reached target Initrd Default Target.
Oct 11 03:37:02 localhost systemd[1]: Starting dracut mount hook...
Oct 11 03:37:02 localhost systemd[1]: Finished dracut mount hook.
Oct 11 03:37:02 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Oct 11 03:37:02 localhost rpc.idmapd[449]: exiting on signal 15
Oct 11 03:37:02 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Oct 11 03:37:02 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Oct 11 03:37:02 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Oct 11 03:37:02 localhost systemd[1]: Stopped target Network.
Oct 11 03:37:02 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Oct 11 03:37:02 localhost systemd[1]: Stopped target Timer Units.
Oct 11 03:37:02 localhost systemd[1]: dbus.socket: Deactivated successfully.
Oct 11 03:37:02 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Oct 11 03:37:02 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 11 03:37:02 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Oct 11 03:37:02 localhost systemd[1]: Stopped target Initrd Default Target.
Oct 11 03:37:02 localhost systemd[1]: Stopped target Basic System.
Oct 11 03:37:02 localhost systemd[1]: Stopped target Initrd Root Device.
Oct 11 03:37:02 localhost systemd[1]: Stopped target Initrd /usr File System.
Oct 11 03:37:02 localhost systemd[1]: Stopped target Path Units.
Oct 11 03:37:02 localhost systemd[1]: Stopped target Remote File Systems.
Oct 11 03:37:02 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Oct 11 03:37:02 localhost systemd[1]: Stopped target Slice Units.
Oct 11 03:37:02 localhost systemd[1]: Stopped target Socket Units.
Oct 11 03:37:02 localhost systemd[1]: Stopped target System Initialization.
Oct 11 03:37:02 localhost systemd[1]: Stopped target Local File Systems.
Oct 11 03:37:02 localhost systemd[1]: Stopped target Swaps.
Oct 11 03:37:02 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Oct 11 03:37:02 localhost systemd[1]: Stopped dracut mount hook.
Oct 11 03:37:02 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 11 03:37:02 localhost systemd[1]: Stopped dracut pre-mount hook.
Oct 11 03:37:02 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Oct 11 03:37:02 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 11 03:37:02 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Oct 11 03:37:02 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 11 03:37:02 localhost systemd[1]: Stopped dracut initqueue hook.
Oct 11 03:37:02 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 11 03:37:02 localhost systemd[1]: Stopped Apply Kernel Variables.
Oct 11 03:37:02 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 11 03:37:02 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Oct 11 03:37:02 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 11 03:37:02 localhost systemd[1]: Stopped Coldplug All udev Devices.
Oct 11 03:37:02 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 11 03:37:02 localhost systemd[1]: Stopped dracut pre-trigger hook.
Oct 11 03:37:02 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Oct 11 03:37:02 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 11 03:37:02 localhost systemd[1]: Stopped Setup Virtual Console.
Oct 11 03:37:02 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Oct 11 03:37:02 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Oct 11 03:37:02 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 11 03:37:02 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Oct 11 03:37:02 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 11 03:37:02 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Oct 11 03:37:02 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 11 03:37:02 localhost systemd[1]: Closed udev Control Socket.
Oct 11 03:37:02 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 11 03:37:02 localhost systemd[1]: Closed udev Kernel Socket.
Oct 11 03:37:02 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 11 03:37:02 localhost systemd[1]: Stopped dracut pre-udev hook.
Oct 11 03:37:02 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 11 03:37:02 localhost systemd[1]: Stopped dracut cmdline hook.
Oct 11 03:37:02 localhost systemd[1]: Starting Cleanup udev Database...
Oct 11 03:37:02 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 11 03:37:02 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Oct 11 03:37:02 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 11 03:37:02 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Oct 11 03:37:02 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Oct 11 03:37:02 localhost systemd[1]: Stopped Create System Users.
Oct 11 03:37:02 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Oct 11 03:37:02 localhost systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Oct 11 03:37:02 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 11 03:37:02 localhost systemd[1]: Finished Cleanup udev Database.
Oct 11 03:37:02 localhost systemd[1]: Reached target Switch Root.
Oct 11 03:37:02 localhost systemd[1]: Starting Switch Root...
Oct 11 03:37:02 localhost systemd[1]: Switching root.
Oct 11 03:37:02 localhost systemd-journald[307]: Journal stopped
Oct 11 03:37:03 localhost systemd-journald[307]: Received SIGTERM from PID 1 (systemd).
Oct 11 03:37:03 localhost kernel: audit: type=1404 audit(1760153822.788:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Oct 11 03:37:03 localhost kernel: SELinux:  policy capability network_peer_controls=1
Oct 11 03:37:03 localhost kernel: SELinux:  policy capability open_perms=1
Oct 11 03:37:03 localhost kernel: SELinux:  policy capability extended_socket_class=1
Oct 11 03:37:03 localhost kernel: SELinux:  policy capability always_check_network=0
Oct 11 03:37:03 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 11 03:37:03 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 11 03:37:03 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 11 03:37:03 localhost kernel: audit: type=1403 audit(1760153822.920:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 11 03:37:03 localhost systemd[1]: Successfully loaded SELinux policy in 136.157ms.
Oct 11 03:37:03 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 28.617ms.
Oct 11 03:37:03 localhost systemd[1]: systemd 252-57.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct 11 03:37:03 localhost systemd[1]: Detected virtualization kvm.
Oct 11 03:37:03 localhost systemd[1]: Detected architecture x86-64.
Oct 11 03:37:03 localhost systemd-rc-local-generator[640]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:37:03 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Oct 11 03:37:03 localhost systemd[1]: Stopped Switch Root.
Oct 11 03:37:03 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 11 03:37:03 localhost systemd[1]: Created slice Slice /system/getty.
Oct 11 03:37:03 localhost systemd[1]: Created slice Slice /system/serial-getty.
Oct 11 03:37:03 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Oct 11 03:37:03 localhost systemd[1]: Created slice User and Session Slice.
Oct 11 03:37:03 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Oct 11 03:37:03 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Oct 11 03:37:03 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Oct 11 03:37:03 localhost systemd[1]: Reached target Local Encrypted Volumes.
Oct 11 03:37:03 localhost systemd[1]: Stopped target Switch Root.
Oct 11 03:37:03 localhost systemd[1]: Stopped target Initrd File Systems.
Oct 11 03:37:03 localhost systemd[1]: Stopped target Initrd Root File System.
Oct 11 03:37:03 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Oct 11 03:37:03 localhost systemd[1]: Reached target Path Units.
Oct 11 03:37:03 localhost systemd[1]: Reached target rpc_pipefs.target.
Oct 11 03:37:03 localhost systemd[1]: Reached target Slice Units.
Oct 11 03:37:03 localhost systemd[1]: Reached target Swaps.
Oct 11 03:37:03 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Oct 11 03:37:03 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Oct 11 03:37:03 localhost systemd[1]: Reached target RPC Port Mapper.
Oct 11 03:37:03 localhost systemd[1]: Listening on Process Core Dump Socket.
Oct 11 03:37:03 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Oct 11 03:37:03 localhost systemd[1]: Listening on udev Control Socket.
Oct 11 03:37:03 localhost systemd[1]: Listening on udev Kernel Socket.
Oct 11 03:37:03 localhost systemd[1]: Mounting Huge Pages File System...
Oct 11 03:37:03 localhost systemd[1]: Mounting POSIX Message Queue File System...
Oct 11 03:37:03 localhost systemd[1]: Mounting Kernel Debug File System...
Oct 11 03:37:03 localhost systemd[1]: Mounting Kernel Trace File System...
Oct 11 03:37:03 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Oct 11 03:37:03 localhost systemd[1]: Starting Create List of Static Device Nodes...
Oct 11 03:37:03 localhost systemd[1]: Starting Load Kernel Module configfs...
Oct 11 03:37:03 localhost systemd[1]: Starting Load Kernel Module drm...
Oct 11 03:37:03 localhost systemd[1]: Starting Load Kernel Module efi_pstore...
Oct 11 03:37:03 localhost systemd[1]: Starting Load Kernel Module fuse...
Oct 11 03:37:03 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Oct 11 03:37:03 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Oct 11 03:37:03 localhost systemd[1]: Stopped File System Check on Root Device.
Oct 11 03:37:03 localhost systemd[1]: Stopped Journal Service.
Oct 11 03:37:03 localhost systemd[1]: Starting Journal Service...
Oct 11 03:37:03 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Oct 11 03:37:03 localhost systemd[1]: Starting Generate network units from Kernel command line...
Oct 11 03:37:03 localhost systemd[1]: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 11 03:37:03 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Oct 11 03:37:03 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 11 03:37:03 localhost systemd[1]: Starting Apply Kernel Variables...
Oct 11 03:37:03 localhost kernel: fuse: init (API version 7.37)
Oct 11 03:37:03 localhost systemd[1]: Starting Coldplug All udev Devices...
Oct 11 03:37:03 localhost systemd[1]: Mounted Huge Pages File System.
Oct 11 03:37:03 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:03 localhost systemd[1]: Mounted POSIX Message Queue File System.
Oct 11 03:37:03 localhost systemd[1]: Mounted Kernel Debug File System.
Oct 11 03:37:03 localhost systemd[1]: Mounted Kernel Trace File System.
Oct 11 03:37:03 localhost systemd[1]: Finished Create List of Static Device Nodes.
Oct 11 03:37:03 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 11 03:37:03 localhost systemd[1]: Finished Load Kernel Module configfs.
Oct 11 03:37:03 localhost systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 11 03:37:03 localhost systemd[1]: Finished Load Kernel Module efi_pstore.
Oct 11 03:37:03 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 11 03:37:03 localhost systemd[1]: Finished Load Kernel Module fuse.
Oct 11 03:37:03 localhost systemd-journald[681]: Journal started
Oct 11 03:37:03 localhost systemd-journald[681]: Runtime Journal (/run/log/journal/a1727ec20198bc6caf436a6e13c4ff5e) is 8.0M, max 153.6M, 145.6M free.
Oct 11 03:37:03 localhost systemd[1]: Queued start job for default target Multi-User System.
Oct 11 03:37:03 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 11 03:37:03 localhost kernel: ACPI: bus type drm_connector registered
Oct 11 03:37:03 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Oct 11 03:37:03 localhost systemd[1]: Started Journal Service.
Oct 11 03:37:03 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 11 03:37:03 localhost systemd[1]: Finished Load Kernel Module drm.
Oct 11 03:37:03 localhost systemd[1]: Finished Generate network units from Kernel command line.
Oct 11 03:37:03 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Oct 11 03:37:03 localhost systemd[1]: Finished Apply Kernel Variables.
Oct 11 03:37:03 localhost systemd[1]: Mounting FUSE Control File System...
Oct 11 03:37:03 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Oct 11 03:37:03 localhost systemd[1]: Starting Rebuild Hardware Database...
Oct 11 03:37:03 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Oct 11 03:37:03 localhost systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 11 03:37:03 localhost systemd[1]: Starting Load/Save OS Random Seed...
Oct 11 03:37:03 localhost systemd[1]: Starting Create System Users...
Oct 11 03:37:03 localhost systemd[1]: Mounted FUSE Control File System.
Oct 11 03:37:03 localhost systemd-journald[681]: Runtime Journal (/run/log/journal/a1727ec20198bc6caf436a6e13c4ff5e) is 8.0M, max 153.6M, 145.6M free.
Oct 11 03:37:03 localhost systemd-journald[681]: Received client request to flush runtime journal.
Oct 11 03:37:03 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Oct 11 03:37:03 localhost systemd[1]: Finished Load/Save OS Random Seed.
Oct 11 03:37:03 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Oct 11 03:37:03 localhost systemd[1]: Finished Coldplug All udev Devices.
Oct 11 03:37:03 localhost systemd[1]: Finished Create System Users.
Oct 11 03:37:03 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Oct 11 03:37:03 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Oct 11 03:37:03 localhost systemd[1]: Reached target Preparation for Local File Systems.
Oct 11 03:37:03 localhost systemd[1]: Reached target Local File Systems.
Oct 11 03:37:03 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Oct 11 03:37:03 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Oct 11 03:37:03 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 11 03:37:03 localhost systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Oct 11 03:37:03 localhost systemd[1]: Starting Automatic Boot Loader Update...
Oct 11 03:37:03 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Oct 11 03:37:03 localhost systemd[1]: Starting Create Volatile Files and Directories...
Oct 11 03:37:03 localhost bootctl[698]: Couldn't find EFI system partition, skipping.
Oct 11 03:37:03 localhost systemd[1]: Finished Automatic Boot Loader Update.
Oct 11 03:37:03 localhost systemd[1]: Finished Create Volatile Files and Directories.
Oct 11 03:37:03 localhost systemd[1]: Starting Security Auditing Service...
Oct 11 03:37:03 localhost systemd[1]: Starting RPC Bind...
Oct 11 03:37:03 localhost systemd[1]: Starting Rebuild Journal Catalog...
Oct 11 03:37:03 localhost auditd[704]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Oct 11 03:37:03 localhost auditd[704]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Oct 11 03:37:03 localhost systemd[1]: Finished Rebuild Journal Catalog.
Oct 11 03:37:03 localhost systemd[1]: Started RPC Bind.
Oct 11 03:37:03 localhost augenrules[709]: /sbin/augenrules: No change
Oct 11 03:37:03 localhost augenrules[724]: No rules
Oct 11 03:37:03 localhost augenrules[724]: enabled 1
Oct 11 03:37:03 localhost augenrules[724]: failure 1
Oct 11 03:37:03 localhost augenrules[724]: pid 704
Oct 11 03:37:03 localhost augenrules[724]: rate_limit 0
Oct 11 03:37:03 localhost augenrules[724]: backlog_limit 8192
Oct 11 03:37:03 localhost augenrules[724]: lost 0
Oct 11 03:37:03 localhost augenrules[724]: backlog 0
Oct 11 03:37:03 localhost augenrules[724]: backlog_wait_time 60000
Oct 11 03:37:03 localhost augenrules[724]: backlog_wait_time_actual 0
Oct 11 03:37:03 localhost augenrules[724]: enabled 1
Oct 11 03:37:03 localhost augenrules[724]: failure 1
Oct 11 03:37:03 localhost augenrules[724]: pid 704
Oct 11 03:37:03 localhost augenrules[724]: rate_limit 0
Oct 11 03:37:03 localhost augenrules[724]: backlog_limit 8192
Oct 11 03:37:03 localhost augenrules[724]: lost 0
Oct 11 03:37:03 localhost augenrules[724]: backlog 0
Oct 11 03:37:03 localhost augenrules[724]: backlog_wait_time 60000
Oct 11 03:37:03 localhost augenrules[724]: backlog_wait_time_actual 0
Oct 11 03:37:03 localhost augenrules[724]: enabled 1
Oct 11 03:37:03 localhost augenrules[724]: failure 1
Oct 11 03:37:03 localhost augenrules[724]: pid 704
Oct 11 03:37:03 localhost augenrules[724]: rate_limit 0
Oct 11 03:37:03 localhost augenrules[724]: backlog_limit 8192
Oct 11 03:37:03 localhost augenrules[724]: lost 0
Oct 11 03:37:03 localhost augenrules[724]: backlog 0
Oct 11 03:37:03 localhost augenrules[724]: backlog_wait_time 60000
Oct 11 03:37:03 localhost augenrules[724]: backlog_wait_time_actual 0
Oct 11 03:37:03 localhost systemd[1]: Started Security Auditing Service.
Oct 11 03:37:03 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Oct 11 03:37:03 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Oct 11 03:37:04 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Oct 11 03:37:04 localhost systemd[1]: Finished Rebuild Hardware Database.
Oct 11 03:37:04 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Oct 11 03:37:04 localhost systemd[1]: Starting Update is Completed...
Oct 11 03:37:04 localhost systemd[1]: Finished Update is Completed.
Oct 11 03:37:04 localhost systemd-udevd[732]: Using default interface naming scheme 'rhel-9.0'.
Oct 11 03:37:04 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Oct 11 03:37:04 localhost systemd[1]: Starting Load Kernel Module configfs...
Oct 11 03:37:04 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Oct 11 03:37:04 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 11 03:37:04 localhost systemd[1]: Finished Load Kernel Module configfs.
Oct 11 03:37:04 localhost systemd[1]: Reached target System Initialization.
Oct 11 03:37:04 localhost systemd[1]: Started dnf makecache --timer.
Oct 11 03:37:04 localhost systemd[1]: Started Daily rotation of log files.
Oct 11 03:37:04 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Oct 11 03:37:04 localhost systemd[1]: Reached target Timer Units.
Oct 11 03:37:04 localhost systemd-udevd[744]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 03:37:04 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Oct 11 03:37:04 localhost kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Oct 11 03:37:04 localhost kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Oct 11 03:37:04 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Oct 11 03:37:04 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Oct 11 03:37:04 localhost systemd[1]: Reached target Socket Units.
Oct 11 03:37:04 localhost systemd[1]: Starting D-Bus System Message Bus...
Oct 11 03:37:04 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 11 03:37:04 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Oct 11 03:37:04 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Oct 11 03:37:04 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Oct 11 03:37:04 localhost kernel: Console: switching to colour dummy device 80x25
Oct 11 03:37:04 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Oct 11 03:37:04 localhost kernel: [drm] features: -context_init
Oct 11 03:37:04 localhost kernel: [drm] number of scanouts: 1
Oct 11 03:37:04 localhost kernel: [drm] number of cap sets: 0
Oct 11 03:37:04 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Oct 11 03:37:04 localhost kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Oct 11 03:37:04 localhost kernel: Console: switching to colour frame buffer device 128x48
Oct 11 03:37:04 localhost kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Oct 11 03:37:04 localhost systemd[1]: Started D-Bus System Message Bus.
Oct 11 03:37:04 localhost systemd[1]: Reached target Basic System.
Oct 11 03:37:04 localhost dbus-broker-lau[777]: Ready
Oct 11 03:37:04 localhost systemd[1]: Starting NTP client/server...
Oct 11 03:37:04 localhost systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Oct 11 03:37:04 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Oct 11 03:37:04 localhost systemd[1]: Starting IPv4 firewall with iptables...
Oct 11 03:37:04 localhost systemd[1]: Started irqbalance daemon.
Oct 11 03:37:04 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Oct 11 03:37:04 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 11 03:37:04 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 11 03:37:04 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 11 03:37:04 localhost systemd[1]: Reached target sshd-keygen.target.
Oct 11 03:37:04 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Oct 11 03:37:04 localhost systemd[1]: Reached target User and Group Name Lookups.
Oct 11 03:37:04 localhost systemd[1]: Starting User Login Management...
Oct 11 03:37:04 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Oct 11 03:37:04 localhost kernel: kvm_amd: TSC scaling supported
Oct 11 03:37:04 localhost kernel: kvm_amd: Nested Virtualization enabled
Oct 11 03:37:04 localhost kernel: kvm_amd: Nested Paging enabled
Oct 11 03:37:04 localhost kernel: kvm_amd: LBR virtualization supported
Oct 11 03:37:04 localhost chronyd[814]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Oct 11 03:37:04 localhost chronyd[814]: Loaded 0 symmetric keys
Oct 11 03:37:04 localhost chronyd[814]: Using right/UTC timezone to obtain leap second data
Oct 11 03:37:04 localhost chronyd[814]: Loaded seccomp filter (level 2)
Oct 11 03:37:04 localhost systemd[1]: Started NTP client/server.
Oct 11 03:37:04 localhost systemd-logind[801]: New seat seat0.
Oct 11 03:37:04 localhost systemd-logind[801]: Watching system buttons on /dev/input/event0 (Power Button)
Oct 11 03:37:04 localhost systemd-logind[801]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Oct 11 03:37:04 localhost systemd[1]: Started User Login Management.
Oct 11 03:37:04 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Oct 11 03:37:04 localhost kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Oct 11 03:37:04 localhost iptables.init[796]: iptables: Applying firewall rules: [  OK  ]
Oct 11 03:37:04 localhost systemd[1]: Finished IPv4 firewall with iptables.
Oct 11 03:37:05 localhost cloud-init[840]: Cloud-init v. 24.4-7.el9 running 'init-local' at Sat, 11 Oct 2025 03:37:05 +0000. Up 7.05 seconds.
Oct 11 03:37:05 localhost kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Oct 11 03:37:05 localhost kernel: ISO 9660 Extensions: RRIP_1991A
Oct 11 03:37:05 localhost systemd[1]: run-cloud\x2dinit-tmp-tmpodwtac33.mount: Deactivated successfully.
Oct 11 03:37:05 localhost systemd[1]: Starting Hostname Service...
Oct 11 03:37:05 localhost systemd[1]: Started Hostname Service.
Oct 11 03:37:05 np0005480869.novalocal systemd-hostnamed[854]: Hostname set to <np0005480869.novalocal> (static)
Oct 11 03:37:05 np0005480869.novalocal systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Oct 11 03:37:05 np0005480869.novalocal systemd[1]: Reached target Preparation for Network.
Oct 11 03:37:05 np0005480869.novalocal systemd[1]: Starting Network Manager...
Oct 11 03:37:05 np0005480869.novalocal NetworkManager[858]: <info>  [1760153825.8542] NetworkManager (version 1.54.1-1.el9) is starting... (boot:c8da26f4-0310-49ac-b50e-f03e67e8ef1f)
Oct 11 03:37:05 np0005480869.novalocal NetworkManager[858]: <info>  [1760153825.8546] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Oct 11 03:37:05 np0005480869.novalocal NetworkManager[858]: <info>  [1760153825.8686] manager[0x5583767af080]: monitoring kernel firmware directory '/lib/firmware'.
Oct 11 03:37:05 np0005480869.novalocal NetworkManager[858]: <info>  [1760153825.8725] hostname: hostname: using hostnamed
Oct 11 03:37:05 np0005480869.novalocal NetworkManager[858]: <info>  [1760153825.8725] hostname: static hostname changed from (none) to "np0005480869.novalocal"
Oct 11 03:37:05 np0005480869.novalocal NetworkManager[858]: <info>  [1760153825.8729] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Oct 11 03:37:05 np0005480869.novalocal NetworkManager[858]: <info>  [1760153825.8855] manager[0x5583767af080]: rfkill: Wi-Fi hardware radio set enabled
Oct 11 03:37:05 np0005480869.novalocal NetworkManager[858]: <info>  [1760153825.8856] manager[0x5583767af080]: rfkill: WWAN hardware radio set enabled
Oct 11 03:37:05 np0005480869.novalocal systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Oct 11 03:37:05 np0005480869.novalocal NetworkManager[858]: <info>  [1760153825.8936] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct 11 03:37:05 np0005480869.novalocal NetworkManager[858]: <info>  [1760153825.8937] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct 11 03:37:05 np0005480869.novalocal NetworkManager[858]: <info>  [1760153825.8938] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct 11 03:37:05 np0005480869.novalocal NetworkManager[858]: <info>  [1760153825.8938] manager: Networking is enabled by state file
Oct 11 03:37:05 np0005480869.novalocal NetworkManager[858]: <info>  [1760153825.8940] settings: Loaded settings plugin: keyfile (internal)
Oct 11 03:37:05 np0005480869.novalocal NetworkManager[858]: <info>  [1760153825.8972] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct 11 03:37:05 np0005480869.novalocal NetworkManager[858]: <info>  [1760153825.8999] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Oct 11 03:37:05 np0005480869.novalocal NetworkManager[858]: <info>  [1760153825.9027] dhcp: init: Using DHCP client 'internal'
Oct 11 03:37:05 np0005480869.novalocal NetworkManager[858]: <info>  [1760153825.9030] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct 11 03:37:05 np0005480869.novalocal NetworkManager[858]: <info>  [1760153825.9046] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 03:37:05 np0005480869.novalocal NetworkManager[858]: <info>  [1760153825.9059] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct 11 03:37:05 np0005480869.novalocal NetworkManager[858]: <info>  [1760153825.9069] device (lo): Activation: starting connection 'lo' (070fa9fc-0387-45d5-a597-36836db223c7)
Oct 11 03:37:05 np0005480869.novalocal NetworkManager[858]: <info>  [1760153825.9081] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct 11 03:37:05 np0005480869.novalocal NetworkManager[858]: <info>  [1760153825.9085] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 11 03:37:05 np0005480869.novalocal NetworkManager[858]: <info>  [1760153825.9115] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct 11 03:37:05 np0005480869.novalocal NetworkManager[858]: <info>  [1760153825.9121] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct 11 03:37:05 np0005480869.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 11 03:37:05 np0005480869.novalocal NetworkManager[858]: <info>  [1760153825.9123] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct 11 03:37:05 np0005480869.novalocal NetworkManager[858]: <info>  [1760153825.9125] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct 11 03:37:05 np0005480869.novalocal NetworkManager[858]: <info>  [1760153825.9127] device (eth0): carrier: link connected
Oct 11 03:37:05 np0005480869.novalocal NetworkManager[858]: <info>  [1760153825.9130] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct 11 03:37:05 np0005480869.novalocal NetworkManager[858]: <info>  [1760153825.9138] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct 11 03:37:05 np0005480869.novalocal NetworkManager[858]: <info>  [1760153825.9147] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct 11 03:37:05 np0005480869.novalocal NetworkManager[858]: <info>  [1760153825.9151] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct 11 03:37:05 np0005480869.novalocal NetworkManager[858]: <info>  [1760153825.9151] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 11 03:37:05 np0005480869.novalocal NetworkManager[858]: <info>  [1760153825.9154] manager: NetworkManager state is now CONNECTING
Oct 11 03:37:05 np0005480869.novalocal NetworkManager[858]: <info>  [1760153825.9155] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 11 03:37:05 np0005480869.novalocal systemd[1]: Started Network Manager.
Oct 11 03:37:05 np0005480869.novalocal NetworkManager[858]: <info>  [1760153825.9163] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 11 03:37:05 np0005480869.novalocal NetworkManager[858]: <info>  [1760153825.9166] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 11 03:37:05 np0005480869.novalocal systemd[1]: Reached target Network.
Oct 11 03:37:05 np0005480869.novalocal NetworkManager[858]: <info>  [1760153825.9221] dhcp4 (eth0): state changed new lease, address=38.102.83.148
Oct 11 03:37:05 np0005480869.novalocal NetworkManager[858]: <info>  [1760153825.9228] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct 11 03:37:05 np0005480869.novalocal systemd[1]: Starting Network Manager Wait Online...
Oct 11 03:37:05 np0005480869.novalocal NetworkManager[858]: <info>  [1760153825.9248] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 11 03:37:05 np0005480869.novalocal systemd[1]: Starting GSSAPI Proxy Daemon...
Oct 11 03:37:05 np0005480869.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 11 03:37:05 np0005480869.novalocal NetworkManager[858]: <info>  [1760153825.9381] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct 11 03:37:05 np0005480869.novalocal NetworkManager[858]: <info>  [1760153825.9383] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct 11 03:37:05 np0005480869.novalocal NetworkManager[858]: <info>  [1760153825.9390] device (lo): Activation: successful, device activated.
Oct 11 03:37:05 np0005480869.novalocal NetworkManager[858]: <info>  [1760153825.9398] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 11 03:37:05 np0005480869.novalocal NetworkManager[858]: <info>  [1760153825.9400] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 11 03:37:05 np0005480869.novalocal NetworkManager[858]: <info>  [1760153825.9403] manager: NetworkManager state is now CONNECTED_SITE
Oct 11 03:37:05 np0005480869.novalocal NetworkManager[858]: <info>  [1760153825.9406] device (eth0): Activation: successful, device activated.
Oct 11 03:37:05 np0005480869.novalocal NetworkManager[858]: <info>  [1760153825.9412] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct 11 03:37:05 np0005480869.novalocal NetworkManager[858]: <info>  [1760153825.9415] manager: startup complete
Oct 11 03:37:05 np0005480869.novalocal systemd[1]: Started GSSAPI Proxy Daemon.
Oct 11 03:37:05 np0005480869.novalocal systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Oct 11 03:37:05 np0005480869.novalocal systemd[1]: Reached target NFS client services.
Oct 11 03:37:05 np0005480869.novalocal systemd[1]: Reached target Preparation for Remote File Systems.
Oct 11 03:37:05 np0005480869.novalocal systemd[1]: Reached target Remote File Systems.
Oct 11 03:37:05 np0005480869.novalocal systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 11 03:37:05 np0005480869.novalocal systemd[1]: Finished Network Manager Wait Online.
Oct 11 03:37:05 np0005480869.novalocal systemd[1]: Starting Cloud-init: Network Stage...
Oct 11 03:37:06 np0005480869.novalocal cloud-init[922]: Cloud-init v. 24.4-7.el9 running 'init' at Sat, 11 Oct 2025 03:37:06 +0000. Up 8.02 seconds.
Oct 11 03:37:06 np0005480869.novalocal cloud-init[922]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Oct 11 03:37:06 np0005480869.novalocal cloud-init[922]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Oct 11 03:37:06 np0005480869.novalocal cloud-init[922]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Oct 11 03:37:06 np0005480869.novalocal cloud-init[922]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Oct 11 03:37:06 np0005480869.novalocal cloud-init[922]: ci-info: |  eth0  | True |        38.102.83.148         | 255.255.255.0 | global | fa:16:3e:5b:d3:f5 |
Oct 11 03:37:06 np0005480869.novalocal cloud-init[922]: ci-info: |  eth0  | True | fe80::f816:3eff:fe5b:d3f5/64 |       .       |  link  | fa:16:3e:5b:d3:f5 |
Oct 11 03:37:06 np0005480869.novalocal cloud-init[922]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Oct 11 03:37:06 np0005480869.novalocal cloud-init[922]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Oct 11 03:37:06 np0005480869.novalocal cloud-init[922]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Oct 11 03:37:06 np0005480869.novalocal cloud-init[922]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Oct 11 03:37:06 np0005480869.novalocal cloud-init[922]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Oct 11 03:37:06 np0005480869.novalocal cloud-init[922]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Oct 11 03:37:06 np0005480869.novalocal cloud-init[922]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Oct 11 03:37:06 np0005480869.novalocal cloud-init[922]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Oct 11 03:37:06 np0005480869.novalocal cloud-init[922]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Oct 11 03:37:06 np0005480869.novalocal cloud-init[922]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Oct 11 03:37:06 np0005480869.novalocal cloud-init[922]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Oct 11 03:37:06 np0005480869.novalocal cloud-init[922]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Oct 11 03:37:06 np0005480869.novalocal cloud-init[922]: ci-info: +-------+-------------+---------+-----------+-------+
Oct 11 03:37:06 np0005480869.novalocal cloud-init[922]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Oct 11 03:37:06 np0005480869.novalocal cloud-init[922]: ci-info: +-------+-------------+---------+-----------+-------+
Oct 11 03:37:06 np0005480869.novalocal cloud-init[922]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Oct 11 03:37:06 np0005480869.novalocal cloud-init[922]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Oct 11 03:37:06 np0005480869.novalocal cloud-init[922]: ci-info: +-------+-------------+---------+-----------+-------+
Oct 11 03:37:07 np0005480869.novalocal useradd[988]: new group: name=cloud-user, GID=1001
Oct 11 03:37:07 np0005480869.novalocal useradd[988]: new user: name=cloud-user, UID=1001, GID=1001, home=/home/cloud-user, shell=/bin/bash, from=none
Oct 11 03:37:07 np0005480869.novalocal useradd[988]: add 'cloud-user' to group 'adm'
Oct 11 03:37:07 np0005480869.novalocal useradd[988]: add 'cloud-user' to group 'systemd-journal'
Oct 11 03:37:07 np0005480869.novalocal useradd[988]: add 'cloud-user' to shadow group 'adm'
Oct 11 03:37:07 np0005480869.novalocal useradd[988]: add 'cloud-user' to shadow group 'systemd-journal'
Oct 11 03:37:07 np0005480869.novalocal cloud-init[922]: Generating public/private rsa key pair.
Oct 11 03:37:07 np0005480869.novalocal cloud-init[922]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Oct 11 03:37:07 np0005480869.novalocal cloud-init[922]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Oct 11 03:37:07 np0005480869.novalocal cloud-init[922]: The key fingerprint is:
Oct 11 03:37:07 np0005480869.novalocal cloud-init[922]: SHA256:/JFO0FiWdox9vowkucvjzUyfi+jB8/GarJaPi9z7LWE root@np0005480869.novalocal
Oct 11 03:37:07 np0005480869.novalocal cloud-init[922]: The key's randomart image is:
Oct 11 03:37:07 np0005480869.novalocal cloud-init[922]: +---[RSA 3072]----+
Oct 11 03:37:07 np0005480869.novalocal cloud-init[922]: |          o=     |
Oct 11 03:37:07 np0005480869.novalocal cloud-init[922]: |         =+ + .  |
Oct 11 03:37:07 np0005480869.novalocal cloud-init[922]: |        o..o o   |
Oct 11 03:37:07 np0005480869.novalocal cloud-init[922]: |       . .o.. .  |
Oct 11 03:37:07 np0005480869.novalocal cloud-init[922]: |        S ++ o . |
Oct 11 03:37:07 np0005480869.novalocal cloud-init[922]: |         =..E o  |
Oct 11 03:37:07 np0005480869.novalocal cloud-init[922]: |         .*+o.   |
Oct 11 03:37:07 np0005480869.novalocal cloud-init[922]: |       . o*%.B . |
Oct 11 03:37:07 np0005480869.novalocal cloud-init[922]: |        o+XB%o*. |
Oct 11 03:37:07 np0005480869.novalocal cloud-init[922]: +----[SHA256]-----+
Oct 11 03:37:07 np0005480869.novalocal cloud-init[922]: Generating public/private ecdsa key pair.
Oct 11 03:37:07 np0005480869.novalocal cloud-init[922]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Oct 11 03:37:07 np0005480869.novalocal cloud-init[922]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Oct 11 03:37:07 np0005480869.novalocal cloud-init[922]: The key fingerprint is:
Oct 11 03:37:07 np0005480869.novalocal cloud-init[922]: SHA256:q3MnjbnzC3Nh4pAc3Ekbgsb50L1X1r93SgYJMJsGPQE root@np0005480869.novalocal
Oct 11 03:37:07 np0005480869.novalocal cloud-init[922]: The key's randomart image is:
Oct 11 03:37:07 np0005480869.novalocal cloud-init[922]: +---[ECDSA 256]---+
Oct 11 03:37:07 np0005480869.novalocal cloud-init[922]: |   . +E+*o   .   |
Oct 11 03:37:07 np0005480869.novalocal cloud-init[922]: |    *..=+B. o .  |
Oct 11 03:37:07 np0005480869.novalocal cloud-init[922]: |   . oo Bo + . . |
Oct 11 03:37:07 np0005480869.novalocal cloud-init[922]: |     ..+. . o   .|
Oct 11 03:37:07 np0005480869.novalocal cloud-init[922]: |      + S.o  .  .|
Oct 11 03:37:07 np0005480869.novalocal cloud-init[922]: |       o + .  o.o|
Oct 11 03:37:07 np0005480869.novalocal cloud-init[922]: |        =+.  o .o|
Oct 11 03:37:07 np0005480869.novalocal cloud-init[922]: |      ..*+o   .  |
Oct 11 03:37:07 np0005480869.novalocal cloud-init[922]: |      .o.*o.     |
Oct 11 03:37:07 np0005480869.novalocal cloud-init[922]: +----[SHA256]-----+
Oct 11 03:37:07 np0005480869.novalocal cloud-init[922]: Generating public/private ed25519 key pair.
Oct 11 03:37:07 np0005480869.novalocal cloud-init[922]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Oct 11 03:37:07 np0005480869.novalocal cloud-init[922]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Oct 11 03:37:07 np0005480869.novalocal cloud-init[922]: The key fingerprint is:
Oct 11 03:37:07 np0005480869.novalocal cloud-init[922]: SHA256:ISpmdj4K7EE9mxtmX3S9sbkdx2lT9l4ZWEIfVMrlz40 root@np0005480869.novalocal
Oct 11 03:37:07 np0005480869.novalocal cloud-init[922]: The key's randomart image is:
Oct 11 03:37:07 np0005480869.novalocal cloud-init[922]: +--[ED25519 256]--+
Oct 11 03:37:07 np0005480869.novalocal cloud-init[922]: |             ..o+|
Oct 11 03:37:07 np0005480869.novalocal cloud-init[922]: |            ...+.|
Oct 11 03:37:07 np0005480869.novalocal cloud-init[922]: |      . .    .oo.|
Oct 11 03:37:07 np0005480869.novalocal cloud-init[922]: |  .  . . ..   +oo|
Oct 11 03:37:07 np0005480869.novalocal cloud-init[922]: | .=oo  .S. o .E.*|
Oct 11 03:37:07 np0005480869.novalocal cloud-init[922]: |o+ ++ . .   = ..*|
Oct 11 03:37:07 np0005480869.novalocal cloud-init[922]: |.o *o  .   + . Bo|
Oct 11 03:37:07 np0005480869.novalocal cloud-init[922]: |..+.+..     o +.o|
Oct 11 03:37:07 np0005480869.novalocal cloud-init[922]: | ... .     . .  .|
Oct 11 03:37:07 np0005480869.novalocal cloud-init[922]: +----[SHA256]-----+
Oct 11 03:37:07 np0005480869.novalocal sm-notify[1003]: Version 2.5.4 starting
Oct 11 03:37:07 np0005480869.novalocal systemd[1]: Finished Cloud-init: Network Stage.
Oct 11 03:37:07 np0005480869.novalocal sshd[1006]: Server listening on 0.0.0.0 port 22.
Oct 11 03:37:07 np0005480869.novalocal systemd[1]: Reached target Cloud-config availability.
Oct 11 03:37:07 np0005480869.novalocal sshd[1006]: Server listening on :: port 22.
Oct 11 03:37:07 np0005480869.novalocal systemd[1]: Reached target Network is Online.
Oct 11 03:37:07 np0005480869.novalocal crond[1008]: (CRON) STARTUP (1.5.7)
Oct 11 03:37:07 np0005480869.novalocal systemd[1]: Starting Cloud-init: Config Stage...
Oct 11 03:37:07 np0005480869.novalocal crond[1008]: (CRON) INFO (Syslog will be used instead of sendmail.)
Oct 11 03:37:07 np0005480869.novalocal systemd[1]: Starting Notify NFS peers of a restart...
Oct 11 03:37:07 np0005480869.novalocal crond[1008]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 63% if used.)
Oct 11 03:37:07 np0005480869.novalocal systemd[1]: Starting System Logging Service...
Oct 11 03:37:07 np0005480869.novalocal crond[1008]: (CRON) INFO (running with inotify support)
Oct 11 03:37:07 np0005480869.novalocal systemd[1]: Starting OpenSSH server daemon...
Oct 11 03:37:07 np0005480869.novalocal systemd[1]: Starting Permit User Sessions...
Oct 11 03:37:07 np0005480869.novalocal systemd[1]: Started Notify NFS peers of a restart.
Oct 11 03:37:07 np0005480869.novalocal systemd[1]: Finished Permit User Sessions.
Oct 11 03:37:07 np0005480869.novalocal systemd[1]: Started Command Scheduler.
Oct 11 03:37:07 np0005480869.novalocal systemd[1]: Started Getty on tty1.
Oct 11 03:37:07 np0005480869.novalocal systemd[1]: Started Serial Getty on ttyS0.
Oct 11 03:37:07 np0005480869.novalocal systemd[1]: Reached target Login Prompts.
Oct 11 03:37:07 np0005480869.novalocal systemd[1]: Started OpenSSH server daemon.
Oct 11 03:37:07 np0005480869.novalocal rsyslogd[1004]: [origin software="rsyslogd" swVersion="8.2506.0-2.el9" x-pid="1004" x-info="https://www.rsyslog.com"] start
Oct 11 03:37:07 np0005480869.novalocal rsyslogd[1004]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Oct 11 03:37:07 np0005480869.novalocal systemd[1]: Started System Logging Service.
Oct 11 03:37:07 np0005480869.novalocal systemd[1]: Reached target Multi-User System.
Oct 11 03:37:07 np0005480869.novalocal systemd[1]: Starting Record Runlevel Change in UTMP...
Oct 11 03:37:07 np0005480869.novalocal systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Oct 11 03:37:07 np0005480869.novalocal systemd[1]: Finished Record Runlevel Change in UTMP.
Oct 11 03:37:07 np0005480869.novalocal rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 11 03:37:08 np0005480869.novalocal cloud-init[1017]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Sat, 11 Oct 2025 03:37:07 +0000. Up 9.71 seconds.
Oct 11 03:37:08 np0005480869.novalocal systemd[1]: Finished Cloud-init: Config Stage.
Oct 11 03:37:08 np0005480869.novalocal systemd[1]: Starting Cloud-init: Final Stage...
Oct 11 03:37:08 np0005480869.novalocal cloud-init[1023]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Sat, 11 Oct 2025 03:37:08 +0000. Up 10.10 seconds.
Oct 11 03:37:08 np0005480869.novalocal sshd-session[1024]: Unable to negotiate with 38.102.83.114 port 53036: no matching host key type found. Their offer: ssh-ed25519,ssh-ed25519-cert-v01@openssh.com [preauth]
Oct 11 03:37:08 np0005480869.novalocal sshd-session[1026]: Connection closed by 38.102.83.114 port 53038 [preauth]
Oct 11 03:37:08 np0005480869.novalocal sshd-session[1028]: Unable to negotiate with 38.102.83.114 port 53054: no matching host key type found. Their offer: ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-v01@openssh.com [preauth]
Oct 11 03:37:08 np0005480869.novalocal cloud-init[1032]: #############################################################
Oct 11 03:37:08 np0005480869.novalocal cloud-init[1033]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Oct 11 03:37:08 np0005480869.novalocal sshd-session[1031]: Unable to negotiate with 38.102.83.114 port 53068: no matching host key type found. Their offer: ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@openssh.com [preauth]
Oct 11 03:37:08 np0005480869.novalocal cloud-init[1036]: 256 SHA256:q3MnjbnzC3Nh4pAc3Ekbgsb50L1X1r93SgYJMJsGPQE root@np0005480869.novalocal (ECDSA)
Oct 11 03:37:08 np0005480869.novalocal cloud-init[1039]: 256 SHA256:ISpmdj4K7EE9mxtmX3S9sbkdx2lT9l4ZWEIfVMrlz40 root@np0005480869.novalocal (ED25519)
Oct 11 03:37:08 np0005480869.novalocal cloud-init[1042]: 3072 SHA256:/JFO0FiWdox9vowkucvjzUyfi+jB8/GarJaPi9z7LWE root@np0005480869.novalocal (RSA)
Oct 11 03:37:08 np0005480869.novalocal cloud-init[1043]: -----END SSH HOST KEY FINGERPRINTS-----
Oct 11 03:37:08 np0005480869.novalocal cloud-init[1044]: #############################################################
Oct 11 03:37:08 np0005480869.novalocal sshd-session[1021]: Connection closed by 38.102.83.114 port 53026 [preauth]
Oct 11 03:37:08 np0005480869.novalocal sshd-session[1048]: Connection closed by 38.102.83.114 port 53086 [preauth]
Oct 11 03:37:08 np0005480869.novalocal sshd-session[1050]: Unable to negotiate with 38.102.83.114 port 53090: no matching host key type found. Their offer: ssh-rsa,ssh-rsa-cert-v01@openssh.com [preauth]
Oct 11 03:37:08 np0005480869.novalocal cloud-init[1023]: Cloud-init v. 24.4-7.el9 finished at Sat, 11 Oct 2025 03:37:08 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 10.27 seconds
Oct 11 03:37:08 np0005480869.novalocal sshd-session[1052]: Unable to negotiate with 38.102.83.114 port 53096: no matching host key type found. Their offer: ssh-dss,ssh-dss-cert-v01@openssh.com [preauth]
Oct 11 03:37:08 np0005480869.novalocal sshd-session[1037]: Connection closed by 38.102.83.114 port 53072 [preauth]
Oct 11 03:37:08 np0005480869.novalocal systemd[1]: Finished Cloud-init: Final Stage.
Oct 11 03:37:08 np0005480869.novalocal systemd[1]: Reached target Cloud-init target.
Oct 11 03:37:08 np0005480869.novalocal systemd[1]: Startup finished in 1.764s (kernel) + 2.747s (initrd) + 5.839s (userspace) = 10.351s.
Oct 11 03:37:10 np0005480869.novalocal chronyd[814]: Selected source 167.160.187.12 (2.centos.pool.ntp.org)
Oct 11 03:37:10 np0005480869.novalocal chronyd[814]: System clock wrong by 1.486903 seconds
Oct 11 03:37:12 np0005480869.novalocal chronyd[814]: System clock was stepped by 1.486903 seconds
Oct 11 03:37:12 np0005480869.novalocal chronyd[814]: System clock TAI offset set to 37 seconds
Oct 11 03:37:16 np0005480869.novalocal irqbalance[797]: Cannot change IRQ 25 affinity: Operation not permitted
Oct 11 03:37:16 np0005480869.novalocal irqbalance[797]: IRQ 25 affinity is now unmanaged
Oct 11 03:37:16 np0005480869.novalocal irqbalance[797]: Cannot change IRQ 31 affinity: Operation not permitted
Oct 11 03:37:16 np0005480869.novalocal irqbalance[797]: IRQ 31 affinity is now unmanaged
Oct 11 03:37:16 np0005480869.novalocal irqbalance[797]: Cannot change IRQ 28 affinity: Operation not permitted
Oct 11 03:37:16 np0005480869.novalocal irqbalance[797]: IRQ 28 affinity is now unmanaged
Oct 11 03:37:16 np0005480869.novalocal irqbalance[797]: Cannot change IRQ 32 affinity: Operation not permitted
Oct 11 03:37:16 np0005480869.novalocal irqbalance[797]: IRQ 32 affinity is now unmanaged
Oct 11 03:37:16 np0005480869.novalocal irqbalance[797]: Cannot change IRQ 30 affinity: Operation not permitted
Oct 11 03:37:16 np0005480869.novalocal irqbalance[797]: IRQ 30 affinity is now unmanaged
Oct 11 03:37:16 np0005480869.novalocal irqbalance[797]: Cannot change IRQ 29 affinity: Operation not permitted
Oct 11 03:37:16 np0005480869.novalocal irqbalance[797]: IRQ 29 affinity is now unmanaged
Oct 11 03:37:17 np0005480869.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 11 03:37:37 np0005480869.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 11 03:38:18 np0005480869.novalocal chronyd[814]: Selected source 162.159.200.1 (2.centos.pool.ntp.org)
Oct 11 03:42:17 np0005480869.novalocal sshd-session[1057]: Connection closed by 167.94.138.171 port 59240 [preauth]
Oct 11 03:44:47 np0005480869.novalocal chronyd[814]: Selected source 167.160.187.12 (2.centos.pool.ntp.org)
Oct 11 03:51:18 np0005480869.novalocal sshd-session[1063]: Accepted publickey for zuul from 38.102.83.114 port 41358 ssh2: RSA SHA256:zhs3MiW0JhxzckYcMHQES8SMYHj1iGcomnyzmbiwor8
Oct 11 03:51:18 np0005480869.novalocal systemd[1]: Created slice User Slice of UID 1000.
Oct 11 03:51:18 np0005480869.novalocal systemd[1]: Starting User Runtime Directory /run/user/1000...
Oct 11 03:51:18 np0005480869.novalocal systemd-logind[801]: New session 1 of user zuul.
Oct 11 03:51:18 np0005480869.novalocal systemd[1]: Finished User Runtime Directory /run/user/1000.
Oct 11 03:51:18 np0005480869.novalocal systemd[1]: Starting User Manager for UID 1000...
Oct 11 03:51:18 np0005480869.novalocal systemd[1067]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 11 03:51:18 np0005480869.novalocal systemd[1067]: Queued start job for default target Main User Target.
Oct 11 03:51:18 np0005480869.novalocal systemd[1067]: Created slice User Application Slice.
Oct 11 03:51:18 np0005480869.novalocal systemd[1067]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 11 03:51:18 np0005480869.novalocal systemd[1067]: Started Daily Cleanup of User's Temporary Directories.
Oct 11 03:51:18 np0005480869.novalocal systemd[1067]: Reached target Paths.
Oct 11 03:51:18 np0005480869.novalocal systemd[1067]: Reached target Timers.
Oct 11 03:51:18 np0005480869.novalocal systemd[1067]: Starting D-Bus User Message Bus Socket...
Oct 11 03:51:18 np0005480869.novalocal systemd[1067]: Starting Create User's Volatile Files and Directories...
Oct 11 03:51:18 np0005480869.novalocal systemd[1067]: Listening on D-Bus User Message Bus Socket.
Oct 11 03:51:18 np0005480869.novalocal systemd[1067]: Reached target Sockets.
Oct 11 03:51:18 np0005480869.novalocal systemd[1067]: Finished Create User's Volatile Files and Directories.
Oct 11 03:51:18 np0005480869.novalocal systemd[1067]: Reached target Basic System.
Oct 11 03:51:18 np0005480869.novalocal systemd[1067]: Reached target Main User Target.
Oct 11 03:51:18 np0005480869.novalocal systemd[1067]: Startup finished in 170ms.
Oct 11 03:51:18 np0005480869.novalocal systemd[1]: Started User Manager for UID 1000.
Oct 11 03:51:18 np0005480869.novalocal systemd[1]: Started Session 1 of User zuul.
Oct 11 03:51:18 np0005480869.novalocal sshd-session[1063]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 11 03:51:19 np0005480869.novalocal python3[1151]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 03:51:21 np0005480869.novalocal python3[1179]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 03:51:27 np0005480869.novalocal python3[1237]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 03:51:28 np0005480869.novalocal python3[1277]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Oct 11 03:51:30 np0005480869.novalocal python3[1303]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDKjPN/TELd85F16WgDGq+TpF1FLURTUuf5vV/zqD1qnnFSp9+0TTqVFiZr8DOr/ytXrM+VPxIoVzmwQfYHoumrqcuTxO3uKN2t+rr8sb8OH8aggtM9rheUvuvomXGGgn+XEa7NXWYZ169oUnghi2YWDqCfQA0wL3dmjDHq/n5TX3jxuVIlFjv8IzPKXzJFrtTxTMw2jPMAGiF1kHyHU3hhjOYjw5Bemnk8pPz/6uGAEaH8+nhCUDt1TUE5yRKFnt8ywIczAbGtarVrApGWnM9ognIkhs+8Swo2yj2vbBonOvc+lIhyJhas3tiHnH65fvZ9K213qSIra8Q99o11+cZg/PwqzGfcVT3nCxKFL3VjVf6Sv3rA117jzaavTzSwF2v6Qc2x3kmu9h6MIlAvqLwdnrWWyRrqVrKWGwSZYSbCLGEowANM729nwMTAflZ4eA52Ey647Hz/hwKeLBK0BKQzYTRd3Hmvo5aPSENXXVNrt6jTEjjrMuXI4Qm6jbZSQ7U= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 11 03:51:31 np0005480869.novalocal python3[1327]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:51:31 np0005480869.novalocal python3[1426]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 11 03:51:32 np0005480869.novalocal python3[1497]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760154691.4545732-207-49021530114227/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=67fd7ecff9e54a749bf6c898d987ce3b_id_rsa follow=False checksum=24354a15fab0893c4e3009206cccddba4d0b07ad backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:51:32 np0005480869.novalocal python3[1620]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 11 03:51:33 np0005480869.novalocal python3[1691]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760154692.440924-240-225282559568131/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=67fd7ecff9e54a749bf6c898d987ce3b_id_rsa.pub follow=False checksum=d849e6a5fd5e5f0a18491614bbb8f47989e905cb backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:51:34 np0005480869.novalocal python3[1739]: ansible-ping Invoked with data=pong
Oct 11 03:51:35 np0005480869.novalocal python3[1763]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 03:51:37 np0005480869.novalocal python3[1821]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Oct 11 03:51:38 np0005480869.novalocal python3[1853]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:51:38 np0005480869.novalocal python3[1877]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:51:38 np0005480869.novalocal python3[1901]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:51:38 np0005480869.novalocal python3[1925]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:51:39 np0005480869.novalocal python3[1949]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:51:39 np0005480869.novalocal python3[1973]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:51:40 np0005480869.novalocal sudo[1997]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfanirizqfyscfvfxunnwldtxwzjfhey ; /usr/bin/python3'
Oct 11 03:51:40 np0005480869.novalocal sudo[1997]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:51:40 np0005480869.novalocal python3[1999]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:51:40 np0005480869.novalocal sudo[1997]: pam_unix(sudo:session): session closed for user root
Oct 11 03:51:41 np0005480869.novalocal sudo[2075]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahcwaykfboeewxmrduhzjxlpfabybiyh ; /usr/bin/python3'
Oct 11 03:51:41 np0005480869.novalocal sudo[2075]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:51:41 np0005480869.novalocal python3[2077]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 11 03:51:41 np0005480869.novalocal sudo[2075]: pam_unix(sudo:session): session closed for user root
Oct 11 03:51:41 np0005480869.novalocal sudo[2148]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vylgsebzykvxatvnulofiqfbnwiowfkn ; /usr/bin/python3'
Oct 11 03:51:41 np0005480869.novalocal sudo[2148]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:51:41 np0005480869.novalocal python3[2150]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1760154701.0968063-21-241126281873803/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:51:41 np0005480869.novalocal sudo[2148]: pam_unix(sudo:session): session closed for user root
Oct 11 03:51:42 np0005480869.novalocal python3[2198]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 11 03:51:42 np0005480869.novalocal python3[2222]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 11 03:51:43 np0005480869.novalocal python3[2246]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 11 03:51:43 np0005480869.novalocal python3[2270]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 11 03:51:43 np0005480869.novalocal python3[2294]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 11 03:51:43 np0005480869.novalocal python3[2318]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 11 03:51:44 np0005480869.novalocal python3[2342]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 11 03:51:44 np0005480869.novalocal python3[2366]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 11 03:51:44 np0005480869.novalocal python3[2390]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 11 03:51:45 np0005480869.novalocal python3[2414]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 11 03:51:45 np0005480869.novalocal python3[2438]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 11 03:51:45 np0005480869.novalocal python3[2462]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 11 03:51:45 np0005480869.novalocal python3[2486]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 11 03:51:46 np0005480869.novalocal python3[2510]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 11 03:51:46 np0005480869.novalocal python3[2534]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 11 03:51:46 np0005480869.novalocal python3[2558]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 11 03:51:46 np0005480869.novalocal python3[2582]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 11 03:51:47 np0005480869.novalocal python3[2606]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 11 03:51:47 np0005480869.novalocal python3[2630]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 11 03:51:47 np0005480869.novalocal python3[2654]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 11 03:51:48 np0005480869.novalocal python3[2678]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 11 03:51:48 np0005480869.novalocal python3[2702]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 11 03:51:48 np0005480869.novalocal python3[2726]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 11 03:51:48 np0005480869.novalocal python3[2750]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 11 03:51:49 np0005480869.novalocal python3[2774]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 11 03:51:49 np0005480869.novalocal python3[2798]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
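[editor's note] The run of ansible-authorized_key invocations above installs one public key per call for the zuul user. A hedged reconstruction of the task shape; the module parameters match the logged values, while the variable name admin_ssh_keys is an assumption:

    - name: Authorize team SSH keys for the zuul user (illustrative reconstruction)
      ansible.posix.authorized_key:
        user: zuul
        state: present
        key: "{{ item }}"              # one public-key string per iteration, as seen in the log
      loop: "{{ admin_ssh_keys }}"     # assumed variable holding the list of keys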
Oct 11 03:51:52 np0005480869.novalocal sudo[2822]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvlfjeayimgysgiepjkhqylafchuxjfs ; /usr/bin/python3'
Oct 11 03:51:52 np0005480869.novalocal sudo[2822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:51:52 np0005480869.novalocal python3[2824]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Oct 11 03:51:52 np0005480869.novalocal systemd[1]: Starting Time & Date Service...
Oct 11 03:51:52 np0005480869.novalocal systemd[1]: Started Time & Date Service.
Oct 11 03:51:52 np0005480869.novalocal systemd-timedated[2826]: Changed time zone to 'UTC' (UTC).
Oct 11 03:51:52 np0005480869.novalocal sudo[2822]: pam_unix(sudo:session): session closed for user root
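[editor's note] The community.general.timezone call at 03:51:52 sets the system time zone to UTC, which systemd-timedated then confirms. A minimal equivalent task matching the logged parameters:

    - name: Set the system time zone to UTC
      community.general.timezone:
        name: UTC                # hwclock left unset, as in the logged invocation
      become: true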
Oct 11 03:51:52 np0005480869.novalocal sudo[2853]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlohonaszyftugednmbatopdcdhahtoq ; /usr/bin/python3'
Oct 11 03:51:52 np0005480869.novalocal sudo[2853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:51:52 np0005480869.novalocal python3[2855]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:51:52 np0005480869.novalocal sudo[2853]: pam_unix(sudo:session): session closed for user root
Oct 11 03:51:53 np0005480869.novalocal python3[2931]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 11 03:51:53 np0005480869.novalocal python3[3002]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1760154713.1901321-153-122301252394087/source _original_basename=tmpdu7wfwhg follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:51:54 np0005480869.novalocal python3[3102]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 11 03:51:54 np0005480869.novalocal python3[3173]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1760154714.052002-183-255103033003187/source _original_basename=tmpwfu0t0ni follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:51:55 np0005480869.novalocal sudo[3273]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqigmyoinhmznovwzufmhzcesrfdidow ; /usr/bin/python3'
Oct 11 03:51:55 np0005480869.novalocal sudo[3273]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:51:55 np0005480869.novalocal python3[3275]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 11 03:51:55 np0005480869.novalocal sudo[3273]: pam_unix(sudo:session): session closed for user root
Oct 11 03:51:55 np0005480869.novalocal sudo[3346]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-keokhueaegkpdjrgvfanxdsvelsyksxl ; /usr/bin/python3'
Oct 11 03:51:55 np0005480869.novalocal sudo[3346]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:51:55 np0005480869.novalocal python3[3348]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1760154715.108078-231-93869994373312/source _original_basename=tmp5oj7ee4a follow=False checksum=a6c024a6649a87ca7709e2430139c248a6eabb0e backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:51:55 np0005480869.novalocal sudo[3346]: pam_unix(sudo:session): session closed for user root
Oct 11 03:51:56 np0005480869.novalocal python3[3396]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:51:56 np0005480869.novalocal python3[3422]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:51:56 np0005480869.novalocal irqbalance[797]: Cannot change IRQ 27 affinity: Operation not permitted
Oct 11 03:51:56 np0005480869.novalocal irqbalance[797]: IRQ 27 affinity is now unmanaged
Oct 11 03:51:57 np0005480869.novalocal sudo[3500]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcushgldweeuodlakszmrgivlvbnkzic ; /usr/bin/python3'
Oct 11 03:51:57 np0005480869.novalocal sudo[3500]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:51:57 np0005480869.novalocal python3[3502]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 11 03:51:57 np0005480869.novalocal sudo[3500]: pam_unix(sudo:session): session closed for user root
Oct 11 03:51:57 np0005480869.novalocal sudo[3573]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtyciopzzjczppqexthzxbkuxsfxpfry ; /usr/bin/python3'
Oct 11 03:51:57 np0005480869.novalocal sudo[3573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:51:57 np0005480869.novalocal python3[3575]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1760154716.8970995-273-271099211262656/source _original_basename=tmpipdhngbq follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:51:57 np0005480869.novalocal sudo[3573]: pam_unix(sudo:session): session closed for user root
Oct 11 03:51:58 np0005480869.novalocal sudo[3624]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpmqndxjqetjwxdoflowmaqenmpwkmsg ; /usr/bin/python3'
Oct 11 03:51:58 np0005480869.novalocal sudo[3624]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:51:58 np0005480869.novalocal python3[3626]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163ec2-ffbe-ef0f-fb3f-00000000001d-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:51:58 np0005480869.novalocal sudo[3624]: pam_unix(sudo:session): session closed for user root
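[editor's note] Copying /etc/sudoers.d/zuul-sudo-grep with mode=288 (octal 0440) and then running /usr/sbin/visudo -c is the usual deploy-then-validate pattern for sudoers drop-ins. The log shows validation as a separate command; a condensed sketch using copy's validate option instead (the source file name is an assumption):

    - name: Install the zuul sudoers drop-in and validate it before activation
      ansible.builtin.copy:
        src: zuul-sudo-grep                  # assumed source name
        dest: /etc/sudoers.d/zuul-sudo-grep
        mode: "0440"                         # logged as mode=288, decimal for octal 0440
        validate: /usr/sbin/visudo -cf %s
      become: true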
Oct 11 03:51:58 np0005480869.novalocal python3[3654]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env
                                                       _uses_shell=True zuul_log_id=fa163ec2-ffbe-ef0f-fb3f-00000000001e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Oct 11 03:52:00 np0005480869.novalocal python3[3683]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:52:00 np0005480869.novalocal systemd[1]: Starting Cleanup of Temporary Directories...
Oct 11 03:52:00 np0005480869.novalocal systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Oct 11 03:52:00 np0005480869.novalocal systemd[1]: Finished Cleanup of Temporary Directories.
Oct 11 03:52:00 np0005480869.novalocal systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Oct 11 03:52:18 np0005480869.novalocal sudo[3709]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-siijacnczcwylmlwynetlzlsuziggcuy ; /usr/bin/python3'
Oct 11 03:52:18 np0005480869.novalocal sudo[3709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:52:18 np0005480869.novalocal python3[3711]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:52:18 np0005480869.novalocal sudo[3709]: pam_unix(sudo:session): session closed for user root
Oct 11 03:52:22 np0005480869.novalocal systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct 11 03:52:50 np0005480869.novalocal kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Oct 11 03:52:50 np0005480869.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Oct 11 03:52:50 np0005480869.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Oct 11 03:52:50 np0005480869.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Oct 11 03:52:50 np0005480869.novalocal kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Oct 11 03:52:50 np0005480869.novalocal kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Oct 11 03:52:50 np0005480869.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Oct 11 03:52:50 np0005480869.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Oct 11 03:52:50 np0005480869.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Oct 11 03:52:50 np0005480869.novalocal kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Oct 11 03:52:50 np0005480869.novalocal NetworkManager[858]: <info>  [1760154770.8958] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Oct 11 03:52:50 np0005480869.novalocal systemd-udevd[3714]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 03:52:50 np0005480869.novalocal NetworkManager[858]: <info>  [1760154770.9115] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 11 03:52:50 np0005480869.novalocal NetworkManager[858]: <info>  [1760154770.9140] settings: (eth1): created default wired connection 'Wired connection 1'
Oct 11 03:52:50 np0005480869.novalocal NetworkManager[858]: <info>  [1760154770.9143] device (eth1): carrier: link connected
Oct 11 03:52:50 np0005480869.novalocal NetworkManager[858]: <info>  [1760154770.9145] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct 11 03:52:50 np0005480869.novalocal NetworkManager[858]: <info>  [1760154770.9150] policy: auto-activating connection 'Wired connection 1' (a3cf2323-8650-312c-87b8-0254e3de1e75)
Oct 11 03:52:50 np0005480869.novalocal NetworkManager[858]: <info>  [1760154770.9154] device (eth1): Activation: starting connection 'Wired connection 1' (a3cf2323-8650-312c-87b8-0254e3de1e75)
Oct 11 03:52:50 np0005480869.novalocal NetworkManager[858]: <info>  [1760154770.9155] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 11 03:52:50 np0005480869.novalocal NetworkManager[858]: <info>  [1760154770.9157] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 11 03:52:50 np0005480869.novalocal NetworkManager[858]: <info>  [1760154770.9160] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 11 03:52:50 np0005480869.novalocal NetworkManager[858]: <info>  [1760154770.9166] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Oct 11 03:52:51 np0005480869.novalocal python3[3741]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163ec2-ffbe-48ec-6bc8-0000000000fc-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:53:01 np0005480869.novalocal sudo[3819]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujmehjzvwwlanrmigvopnsosdlprtzre ; OS_CLOUD=vexxhost /usr/bin/python3'
Oct 11 03:53:01 np0005480869.novalocal sudo[3819]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:53:02 np0005480869.novalocal python3[3821]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 11 03:53:02 np0005480869.novalocal sudo[3819]: pam_unix(sudo:session): session closed for user root
Oct 11 03:53:02 np0005480869.novalocal sudo[3892]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yczrczslynlfrsivyaddbifklsuefkui ; OS_CLOUD=vexxhost /usr/bin/python3'
Oct 11 03:53:02 np0005480869.novalocal sudo[3892]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:53:02 np0005480869.novalocal python3[3894]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760154781.629132-102-120091892992135/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=5a8e949e70b670737f626664d7bdea627bcc92a8 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:53:02 np0005480869.novalocal sudo[3892]: pam_unix(sudo:session): session closed for user root
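[editor's note] The file copied to /etc/NetworkManager/system-connections/ci-private-network.nmconnection (root-owned, mode 0600) is rendered from bootstrap-ci-network-nm-connection.nmconnection.j2; its contents are not logged. A hedged sketch of how such a deployment is typically expressed, followed by the NetworkManager restart seen at 03:53:03 (only the template file name and destination come from the log):

    - name: Deploy the CI private-network NetworkManager profile
      ansible.builtin.template:
        src: bootstrap-ci-network-nm-connection.nmconnection.j2
        dest: /etc/NetworkManager/system-connections/ci-private-network.nmconnection
        owner: root
        group: root
        mode: "0600"             # NetworkManager ignores world-readable keyfiles
      become: true

    - name: Restart NetworkManager so the new profile can auto-activate
      ansible.builtin.systemd:
        name: NetworkManager
        state: restarted
      become: true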
Oct 11 03:53:02 np0005480869.novalocal sudo[3942]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-affzxtgdncwipqxpnhksclkznsijqgff ; OS_CLOUD=vexxhost /usr/bin/python3'
Oct 11 03:53:02 np0005480869.novalocal sudo[3942]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:53:03 np0005480869.novalocal python3[3944]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 11 03:53:03 np0005480869.novalocal systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Oct 11 03:53:03 np0005480869.novalocal systemd[1]: Stopped Network Manager Wait Online.
Oct 11 03:53:03 np0005480869.novalocal systemd[1]: Stopping Network Manager Wait Online...
Oct 11 03:53:03 np0005480869.novalocal systemd[1]: Stopping Network Manager...
Oct 11 03:53:03 np0005480869.novalocal NetworkManager[858]: <info>  [1760154783.1981] caught SIGTERM, shutting down normally.
Oct 11 03:53:03 np0005480869.novalocal NetworkManager[858]: <info>  [1760154783.1995] dhcp4 (eth0): canceled DHCP transaction
Oct 11 03:53:03 np0005480869.novalocal NetworkManager[858]: <info>  [1760154783.1995] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 11 03:53:03 np0005480869.novalocal NetworkManager[858]: <info>  [1760154783.1995] dhcp4 (eth0): state changed no lease
Oct 11 03:53:03 np0005480869.novalocal NetworkManager[858]: <info>  [1760154783.1998] manager: NetworkManager state is now CONNECTING
Oct 11 03:53:03 np0005480869.novalocal NetworkManager[858]: <info>  [1760154783.2144] dhcp4 (eth1): canceled DHCP transaction
Oct 11 03:53:03 np0005480869.novalocal NetworkManager[858]: <info>  [1760154783.2145] dhcp4 (eth1): state changed no lease
Oct 11 03:53:03 np0005480869.novalocal NetworkManager[858]: <info>  [1760154783.2196] exiting (success)
Oct 11 03:53:03 np0005480869.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 11 03:53:03 np0005480869.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 11 03:53:03 np0005480869.novalocal systemd[1]: NetworkManager.service: Deactivated successfully.
Oct 11 03:53:03 np0005480869.novalocal systemd[1]: Stopped Network Manager.
Oct 11 03:53:03 np0005480869.novalocal systemd[1]: NetworkManager.service: Consumed 5.783s CPU time, 10.0M memory peak.
Oct 11 03:53:03 np0005480869.novalocal systemd[1]: Starting Network Manager...
Oct 11 03:53:03 np0005480869.novalocal NetworkManager[3958]: <info>  [1760154783.3064] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:c8da26f4-0310-49ac-b50e-f03e67e8ef1f)
Oct 11 03:53:03 np0005480869.novalocal NetworkManager[3958]: <info>  [1760154783.3067] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Oct 11 03:53:03 np0005480869.novalocal NetworkManager[3958]: <info>  [1760154783.3129] manager[0x5616164c7070]: monitoring kernel firmware directory '/lib/firmware'.
Oct 11 03:53:03 np0005480869.novalocal systemd[1]: Starting Hostname Service...
Oct 11 03:53:03 np0005480869.novalocal systemd[1]: Started Hostname Service.
Oct 11 03:53:03 np0005480869.novalocal NetworkManager[3958]: <info>  [1760154783.3801] hostname: hostname: using hostnamed
Oct 11 03:53:03 np0005480869.novalocal NetworkManager[3958]: <info>  [1760154783.3802] hostname: static hostname changed from (none) to "np0005480869.novalocal"
Oct 11 03:53:03 np0005480869.novalocal NetworkManager[3958]: <info>  [1760154783.3809] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Oct 11 03:53:03 np0005480869.novalocal NetworkManager[3958]: <info>  [1760154783.3815] manager[0x5616164c7070]: rfkill: Wi-Fi hardware radio set enabled
Oct 11 03:53:03 np0005480869.novalocal NetworkManager[3958]: <info>  [1760154783.3816] manager[0x5616164c7070]: rfkill: WWAN hardware radio set enabled
Oct 11 03:53:03 np0005480869.novalocal NetworkManager[3958]: <info>  [1760154783.3862] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct 11 03:53:03 np0005480869.novalocal NetworkManager[3958]: <info>  [1760154783.3863] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct 11 03:53:03 np0005480869.novalocal NetworkManager[3958]: <info>  [1760154783.3864] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct 11 03:53:03 np0005480869.novalocal NetworkManager[3958]: <info>  [1760154783.3865] manager: Networking is enabled by state file
Oct 11 03:53:03 np0005480869.novalocal NetworkManager[3958]: <info>  [1760154783.3869] settings: Loaded settings plugin: keyfile (internal)
Oct 11 03:53:03 np0005480869.novalocal NetworkManager[3958]: <info>  [1760154783.3876] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct 11 03:53:03 np0005480869.novalocal NetworkManager[3958]: <info>  [1760154783.3916] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Oct 11 03:53:03 np0005480869.novalocal NetworkManager[3958]: <info>  [1760154783.3929] dhcp: init: Using DHCP client 'internal'
Oct 11 03:53:03 np0005480869.novalocal NetworkManager[3958]: <info>  [1760154783.3934] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct 11 03:53:03 np0005480869.novalocal NetworkManager[3958]: <info>  [1760154783.3942] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 03:53:03 np0005480869.novalocal NetworkManager[3958]: <info>  [1760154783.3951] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct 11 03:53:03 np0005480869.novalocal NetworkManager[3958]: <info>  [1760154783.3964] device (lo): Activation: starting connection 'lo' (070fa9fc-0387-45d5-a597-36836db223c7)
Oct 11 03:53:03 np0005480869.novalocal NetworkManager[3958]: <info>  [1760154783.3975] device (eth0): carrier: link connected
Oct 11 03:53:03 np0005480869.novalocal NetworkManager[3958]: <info>  [1760154783.3982] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct 11 03:53:03 np0005480869.novalocal NetworkManager[3958]: <info>  [1760154783.3990] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Oct 11 03:53:03 np0005480869.novalocal NetworkManager[3958]: <info>  [1760154783.3991] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct 11 03:53:03 np0005480869.novalocal NetworkManager[3958]: <info>  [1760154783.4000] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct 11 03:53:03 np0005480869.novalocal NetworkManager[3958]: <info>  [1760154783.4011] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct 11 03:53:03 np0005480869.novalocal NetworkManager[3958]: <info>  [1760154783.4021] device (eth1): carrier: link connected
Oct 11 03:53:03 np0005480869.novalocal NetworkManager[3958]: <info>  [1760154783.4029] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Oct 11 03:53:03 np0005480869.novalocal NetworkManager[3958]: <info>  [1760154783.4037] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (a3cf2323-8650-312c-87b8-0254e3de1e75) (indicated)
Oct 11 03:53:03 np0005480869.novalocal NetworkManager[3958]: <info>  [1760154783.4037] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct 11 03:53:03 np0005480869.novalocal NetworkManager[3958]: <info>  [1760154783.4046] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct 11 03:53:03 np0005480869.novalocal NetworkManager[3958]: <info>  [1760154783.4057] device (eth1): Activation: starting connection 'Wired connection 1' (a3cf2323-8650-312c-87b8-0254e3de1e75)
Oct 11 03:53:03 np0005480869.novalocal systemd[1]: Started Network Manager.
Oct 11 03:53:03 np0005480869.novalocal NetworkManager[3958]: <info>  [1760154783.4066] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct 11 03:53:03 np0005480869.novalocal NetworkManager[3958]: <info>  [1760154783.4074] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct 11 03:53:03 np0005480869.novalocal NetworkManager[3958]: <info>  [1760154783.4079] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct 11 03:53:03 np0005480869.novalocal NetworkManager[3958]: <info>  [1760154783.4082] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct 11 03:53:03 np0005480869.novalocal NetworkManager[3958]: <info>  [1760154783.4086] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct 11 03:53:03 np0005480869.novalocal NetworkManager[3958]: <info>  [1760154783.4106] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct 11 03:53:03 np0005480869.novalocal NetworkManager[3958]: <info>  [1760154783.4109] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct 11 03:53:03 np0005480869.novalocal NetworkManager[3958]: <info>  [1760154783.4112] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct 11 03:53:03 np0005480869.novalocal NetworkManager[3958]: <info>  [1760154783.4115] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct 11 03:53:03 np0005480869.novalocal NetworkManager[3958]: <info>  [1760154783.4121] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct 11 03:53:03 np0005480869.novalocal NetworkManager[3958]: <info>  [1760154783.4123] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 11 03:53:03 np0005480869.novalocal NetworkManager[3958]: <info>  [1760154783.4131] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct 11 03:53:03 np0005480869.novalocal NetworkManager[3958]: <info>  [1760154783.4133] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Oct 11 03:53:03 np0005480869.novalocal NetworkManager[3958]: <info>  [1760154783.4150] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct 11 03:53:03 np0005480869.novalocal NetworkManager[3958]: <info>  [1760154783.4151] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct 11 03:53:03 np0005480869.novalocal NetworkManager[3958]: <info>  [1760154783.4155] device (lo): Activation: successful, device activated.
Oct 11 03:53:03 np0005480869.novalocal NetworkManager[3958]: <info>  [1760154783.4180] dhcp4 (eth0): state changed new lease, address=38.102.83.148
Oct 11 03:53:03 np0005480869.novalocal NetworkManager[3958]: <info>  [1760154783.4185] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct 11 03:53:03 np0005480869.novalocal NetworkManager[3958]: <info>  [1760154783.4252] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct 11 03:53:03 np0005480869.novalocal systemd[1]: Starting Network Manager Wait Online...
Oct 11 03:53:03 np0005480869.novalocal NetworkManager[3958]: <info>  [1760154783.4293] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct 11 03:53:03 np0005480869.novalocal NetworkManager[3958]: <info>  [1760154783.4295] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct 11 03:53:03 np0005480869.novalocal NetworkManager[3958]: <info>  [1760154783.4297] manager: NetworkManager state is now CONNECTED_SITE
Oct 11 03:53:03 np0005480869.novalocal NetworkManager[3958]: <info>  [1760154783.4302] device (eth0): Activation: successful, device activated.
Oct 11 03:53:03 np0005480869.novalocal NetworkManager[3958]: <info>  [1760154783.4306] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct 11 03:53:03 np0005480869.novalocal sudo[3942]: pam_unix(sudo:session): session closed for user root
Oct 11 03:53:03 np0005480869.novalocal python3[4029]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163ec2-ffbe-48ec-6bc8-0000000000a7-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:53:13 np0005480869.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 11 03:53:33 np0005480869.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 11 03:53:46 np0005480869.novalocal systemd[1067]: Starting Mark boot as successful...
Oct 11 03:53:46 np0005480869.novalocal systemd[1067]: Finished Mark boot as successful.
Oct 11 03:53:48 np0005480869.novalocal NetworkManager[3958]: <info>  [1760154828.7516] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct 11 03:53:48 np0005480869.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 11 03:53:48 np0005480869.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 11 03:53:48 np0005480869.novalocal NetworkManager[3958]: <info>  [1760154828.7884] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct 11 03:53:48 np0005480869.novalocal NetworkManager[3958]: <info>  [1760154828.7887] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct 11 03:53:48 np0005480869.novalocal NetworkManager[3958]: <info>  [1760154828.7897] device (eth1): Activation: successful, device activated.
Oct 11 03:53:48 np0005480869.novalocal NetworkManager[3958]: <info>  [1760154828.7905] manager: startup complete
Oct 11 03:53:48 np0005480869.novalocal NetworkManager[3958]: <info>  [1760154828.7907] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Oct 11 03:53:48 np0005480869.novalocal NetworkManager[3958]: <warn>  [1760154828.7912] device (eth1): Activation: failed for connection 'Wired connection 1'
Oct 11 03:53:48 np0005480869.novalocal NetworkManager[3958]: <info>  [1760154828.7920] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Oct 11 03:53:48 np0005480869.novalocal systemd[1]: Finished Network Manager Wait Online.
Oct 11 03:53:48 np0005480869.novalocal NetworkManager[3958]: <info>  [1760154828.8003] dhcp4 (eth1): canceled DHCP transaction
Oct 11 03:53:48 np0005480869.novalocal NetworkManager[3958]: <info>  [1760154828.8004] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Oct 11 03:53:48 np0005480869.novalocal NetworkManager[3958]: <info>  [1760154828.8004] dhcp4 (eth1): state changed no lease
Oct 11 03:53:48 np0005480869.novalocal NetworkManager[3958]: <info>  [1760154828.8017] policy: auto-activating connection 'ci-private-network' (f8b03ef6-0026-501a-9e18-7db335cb7a5f)
Oct 11 03:53:48 np0005480869.novalocal NetworkManager[3958]: <info>  [1760154828.8022] device (eth1): Activation: starting connection 'ci-private-network' (f8b03ef6-0026-501a-9e18-7db335cb7a5f)
Oct 11 03:53:48 np0005480869.novalocal NetworkManager[3958]: <info>  [1760154828.8023] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 11 03:53:48 np0005480869.novalocal NetworkManager[3958]: <info>  [1760154828.8026] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 11 03:53:48 np0005480869.novalocal NetworkManager[3958]: <info>  [1760154828.8032] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 11 03:53:48 np0005480869.novalocal NetworkManager[3958]: <info>  [1760154828.8042] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 11 03:53:48 np0005480869.novalocal NetworkManager[3958]: <info>  [1760154828.8088] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 11 03:53:48 np0005480869.novalocal NetworkManager[3958]: <info>  [1760154828.8089] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 11 03:53:48 np0005480869.novalocal NetworkManager[3958]: <info>  [1760154828.8095] device (eth1): Activation: successful, device activated.
Oct 11 03:53:58 np0005480869.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 11 03:54:03 np0005480869.novalocal sshd-session[1078]: Received disconnect from 38.102.83.114 port 41358:11: disconnected by user
Oct 11 03:54:03 np0005480869.novalocal sshd-session[1078]: Disconnected from user zuul 38.102.83.114 port 41358
Oct 11 03:54:03 np0005480869.novalocal sshd-session[1063]: pam_unix(sshd:session): session closed for user zuul
Oct 11 03:54:03 np0005480869.novalocal systemd-logind[801]: Session 1 logged out. Waiting for processes to exit.
Oct 11 03:54:04 np0005480869.novalocal sshd-session[4061]: Accepted publickey for zuul from 38.102.83.114 port 55772 ssh2: RSA SHA256:dWlyUpO/mrqMp/eYKItgj5U2Mvcj3EJpSdFVtC0zru4
Oct 11 03:54:04 np0005480869.novalocal systemd-logind[801]: New session 3 of user zuul.
Oct 11 03:54:04 np0005480869.novalocal systemd[1]: Started Session 3 of User zuul.
Oct 11 03:54:04 np0005480869.novalocal sshd-session[4061]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 11 03:54:04 np0005480869.novalocal sudo[4140]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bixbmdhksfvucgsqcqjxvslpdwequbhg ; OS_CLOUD=vexxhost /usr/bin/python3'
Oct 11 03:54:04 np0005480869.novalocal sudo[4140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:54:05 np0005480869.novalocal python3[4142]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 11 03:54:05 np0005480869.novalocal sudo[4140]: pam_unix(sudo:session): session closed for user root
Oct 11 03:54:05 np0005480869.novalocal sudo[4213]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxcjxcunpicvqczyexxlyfgvqocbwykb ; OS_CLOUD=vexxhost /usr/bin/python3'
Oct 11 03:54:05 np0005480869.novalocal sudo[4213]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:54:05 np0005480869.novalocal python3[4215]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760154844.7615676-267-256956223325896/source _original_basename=tmphyslxvt6 follow=False checksum=e302c2ee8ae687a3e4cdd3444b1853192519f469 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:54:05 np0005480869.novalocal sudo[4213]: pam_unix(sudo:session): session closed for user root
Oct 11 03:54:07 np0005480869.novalocal sshd-session[4064]: Connection closed by 38.102.83.114 port 55772
Oct 11 03:54:07 np0005480869.novalocal sshd-session[4061]: pam_unix(sshd:session): session closed for user zuul
Oct 11 03:54:07 np0005480869.novalocal systemd[1]: session-3.scope: Deactivated successfully.
Oct 11 03:54:07 np0005480869.novalocal systemd-logind[801]: Session 3 logged out. Waiting for processes to exit.
Oct 11 03:54:07 np0005480869.novalocal systemd-logind[801]: Removed session 3.
Oct 11 03:56:35 np0005480869.novalocal sshd-session[4241]: error: kex_exchange_identification: read: Connection reset by peer
Oct 11 03:56:35 np0005480869.novalocal sshd-session[4241]: Connection reset by 45.140.17.97 port 6949
Oct 11 03:56:46 np0005480869.novalocal systemd[1067]: Created slice User Background Tasks Slice.
Oct 11 03:56:46 np0005480869.novalocal systemd[1067]: Starting Cleanup of User's Temporary Files and Directories...
Oct 11 03:56:46 np0005480869.novalocal systemd[1067]: Finished Cleanup of User's Temporary Files and Directories.
Oct 11 03:59:14 np0005480869.novalocal sshd-session[4246]: Accepted publickey for zuul from 38.102.83.114 port 51142 ssh2: RSA SHA256:dWlyUpO/mrqMp/eYKItgj5U2Mvcj3EJpSdFVtC0zru4
Oct 11 03:59:14 np0005480869.novalocal systemd-logind[801]: New session 4 of user zuul.
Oct 11 03:59:14 np0005480869.novalocal systemd[1]: Started Session 4 of User zuul.
Oct 11 03:59:14 np0005480869.novalocal sshd-session[4246]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 11 03:59:14 np0005480869.novalocal sudo[4273]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpuimsjjjfydqtnqpmtlrhlmlnzcizit ; /usr/bin/python3'
Oct 11 03:59:14 np0005480869.novalocal sudo[4273]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:59:14 np0005480869.novalocal python3[4275]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda
                                                       _uses_shell=True zuul_log_id=fa163ec2-ffbe-6c57-6c2d-000000001ce2-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:59:14 np0005480869.novalocal sudo[4273]: pam_unix(sudo:session): session closed for user root
Oct 11 03:59:15 np0005480869.novalocal sudo[4302]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnlcmkpjhaykakzmygoccuhmkvilnwgx ; /usr/bin/python3'
Oct 11 03:59:15 np0005480869.novalocal sudo[4302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:59:15 np0005480869.novalocal python3[4304]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:59:15 np0005480869.novalocal sudo[4302]: pam_unix(sudo:session): session closed for user root
Oct 11 03:59:15 np0005480869.novalocal sudo[4328]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhcswvxgpnfitlatcllnxpuxneyiyzgd ; /usr/bin/python3'
Oct 11 03:59:15 np0005480869.novalocal sudo[4328]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:59:15 np0005480869.novalocal python3[4330]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:59:15 np0005480869.novalocal sudo[4328]: pam_unix(sudo:session): session closed for user root
Oct 11 03:59:15 np0005480869.novalocal sudo[4354]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzuhhihrlwameyxilxhzbbblnfcnyvdj ; /usr/bin/python3'
Oct 11 03:59:15 np0005480869.novalocal sudo[4354]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:59:15 np0005480869.novalocal python3[4356]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:59:15 np0005480869.novalocal sudo[4354]: pam_unix(sudo:session): session closed for user root
Oct 11 03:59:15 np0005480869.novalocal sudo[4380]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdfgcvxazsiggannqgqsxlmbaokmokud ; /usr/bin/python3'
Oct 11 03:59:15 np0005480869.novalocal sudo[4380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:59:16 np0005480869.novalocal python3[4382]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:59:16 np0005480869.novalocal sudo[4380]: pam_unix(sudo:session): session closed for user root
Oct 11 03:59:16 np0005480869.novalocal sudo[4406]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlkhmxlihuhudevfurlulbggbcqakcqo ; /usr/bin/python3'
Oct 11 03:59:16 np0005480869.novalocal sudo[4406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:59:16 np0005480869.novalocal python3[4408]: ansible-ansible.builtin.lineinfile Invoked with path=/etc/systemd/system.conf regexp=^#DefaultIOAccounting=no line=DefaultIOAccounting=yes state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:59:16 np0005480869.novalocal python3[4408]: ansible-ansible.builtin.lineinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Oct 11 03:59:16 np0005480869.novalocal sudo[4406]: pam_unix(sudo:session): session closed for user root
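[editor's note] The lineinfile task above flips DefaultIOAccounting from its commented default to yes in /etc/systemd/system.conf, and the systemd_service call at 03:59:17 performs the daemon-reload the setting needs to take effect. A reconstruction using the logged regexp and line:

    - name: Enable systemd IO accounting so per-cgroup io.max limits can be applied
      ansible.builtin.lineinfile:
        path: /etc/systemd/system.conf
        regexp: '^#DefaultIOAccounting=no'
        line: DefaultIOAccounting=yes
      become: true

    - name: Reload systemd to pick up the accounting change
      ansible.builtin.systemd_service:
        daemon_reload: true
      become: true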
Oct 11 03:59:16 np0005480869.novalocal irqbalance[797]: Cannot change IRQ 26 affinity: Operation not permitted
Oct 11 03:59:16 np0005480869.novalocal irqbalance[797]: IRQ 26 affinity is now unmanaged
Oct 11 03:59:17 np0005480869.novalocal sudo[4432]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oyeqafqlgdwntjugukfxjuwtginiqzuq ; /usr/bin/python3'
Oct 11 03:59:17 np0005480869.novalocal sudo[4432]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:59:17 np0005480869.novalocal python3[4434]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 11 03:59:17 np0005480869.novalocal systemd[1]: Reloading.
Oct 11 03:59:17 np0005480869.novalocal systemd-rc-local-generator[4452]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:59:17 np0005480869.novalocal systemd[1]: Starting dnf makecache...
Oct 11 03:59:17 np0005480869.novalocal sudo[4432]: pam_unix(sudo:session): session closed for user root
Oct 11 03:59:18 np0005480869.novalocal dnf[4465]: Failed determining last makecache time.
Oct 11 03:59:18 np0005480869.novalocal dnf[4465]: CentOS Stream 9 - BaseOS                         40 kB/s | 6.7 kB     00:00
Oct 11 03:59:18 np0005480869.novalocal dnf[4465]: CentOS Stream 9 - AppStream                      60 kB/s | 6.8 kB     00:00
Oct 11 03:59:19 np0005480869.novalocal dnf[4465]: CentOS Stream 9 - CRB                            69 kB/s | 6.6 kB     00:00
Oct 11 03:59:19 np0005480869.novalocal sudo[4495]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlrywpugbtmmjtxhyhqbdpkwdcwpfxbv ; /usr/bin/python3'
Oct 11 03:59:19 np0005480869.novalocal sudo[4495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:59:19 np0005480869.novalocal dnf[4465]: CentOS Stream 9 - Extras packages                83 kB/s | 8.0 kB     00:00
Oct 11 03:59:19 np0005480869.novalocal python3[4498]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Oct 11 03:59:19 np0005480869.novalocal sudo[4495]: pam_unix(sudo:session): session closed for user root
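The wait_for task above polls for /sys/fs/cgroup/system.slice/io.max to exist, with timeout=30 and sleep=1. A rough shell equivalent of that wait loop:

    # Poll up to 30 times, one second apart, for the io.max file to appear.
    for _ in $(seq 1 30); do
        [ -e /sys/fs/cgroup/system.slice/io.max ] && break
        sleep 1
    done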
Oct 11 03:59:19 np0005480869.novalocal dnf[4465]: Metadata cache created.
Oct 11 03:59:19 np0005480869.novalocal systemd[1]: dnf-makecache.service: Deactivated successfully.
Oct 11 03:59:19 np0005480869.novalocal systemd[1]: Finished dnf makecache.
Oct 11 03:59:19 np0005480869.novalocal sudo[4523]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjiyctzqkytcwemifoigpwlkmkttsxqm ; /usr/bin/python3'
Oct 11 03:59:19 np0005480869.novalocal sudo[4523]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:59:19 np0005480869.novalocal python3[4525]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:59:19 np0005480869.novalocal sudo[4523]: pam_unix(sudo:session): session closed for user root
Oct 11 03:59:19 np0005480869.novalocal sudo[4551]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yyigyagsurujzhpcqwqespieesplbwbd ; /usr/bin/python3'
Oct 11 03:59:19 np0005480869.novalocal sudo[4551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:59:20 np0005480869.novalocal python3[4553]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:59:20 np0005480869.novalocal sudo[4551]: pam_unix(sudo:session): session closed for user root
Oct 11 03:59:20 np0005480869.novalocal sudo[4579]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdpuvictxwhsijpqmlgvtucncaclvlib ; /usr/bin/python3'
Oct 11 03:59:20 np0005480869.novalocal sudo[4579]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:59:20 np0005480869.novalocal python3[4581]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:59:20 np0005480869.novalocal sudo[4579]: pam_unix(sudo:session): session closed for user root
Oct 11 03:59:20 np0005480869.novalocal sudo[4607]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvdezfxotxzlspjfmyqvjgyboecfdtca ; /usr/bin/python3'
Oct 11 03:59:20 np0005480869.novalocal sudo[4607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:59:20 np0005480869.novalocal python3[4609]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:59:20 np0005480869.novalocal sudo[4607]: pam_unix(sudo:session): session closed for user root
Oct 11 03:59:21 np0005480869.novalocal python3[4636]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;
                                                       _uses_shell=True zuul_log_id=fa163ec2-ffbe-6c57-6c2d-000000001ce8-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
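The four command tasks above write the same cgroup v2 io.max rule into init.scope, machine.slice, system.slice and user.slice, and the last task reads the files back to verify. The rule throttles device 252:0 to 18000 read/write IOPS and 262144000 bytes/s (250 MiB/s) in each direction; riops/wiops are IOPS limits, rbps/wbps are byte-per-second limits. A shell sketch of the same loop, assuming the io controller is enabled for those cgroups and that 252:0 is still the correct major:minor for the target disk:

    limits='252:0 riops=18000 wiops=18000 rbps=262144000 wbps=262144000'
    for slice in init.scope machine.slice system.slice user.slice; do
        echo "$limits" > "/sys/fs/cgroup/$slice/io.max"     # apply the throttle
    done
    for slice in init.scope machine.slice system.slice user.slice; do
        echo "$slice"; cat "/sys/fs/cgroup/$slice/io.max"   # verify, as the last task does
    done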
Oct 11 03:59:21 np0005480869.novalocal python3[4666]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 03:59:23 np0005480869.novalocal sshd-session[4249]: Connection closed by 38.102.83.114 port 51142
Oct 11 03:59:23 np0005480869.novalocal sshd-session[4246]: pam_unix(sshd:session): session closed for user zuul
Oct 11 03:59:23 np0005480869.novalocal systemd-logind[801]: Session 4 logged out. Waiting for processes to exit.
Oct 11 03:59:23 np0005480869.novalocal systemd[1]: session-4.scope: Deactivated successfully.
Oct 11 03:59:23 np0005480869.novalocal systemd[1]: session-4.scope: Consumed 3.411s CPU time.
Oct 11 03:59:23 np0005480869.novalocal systemd-logind[801]: Removed session 4.
Oct 11 03:59:25 np0005480869.novalocal sshd-session[4672]: Accepted publickey for zuul from 38.102.83.114 port 52728 ssh2: RSA SHA256:dWlyUpO/mrqMp/eYKItgj5U2Mvcj3EJpSdFVtC0zru4
Oct 11 03:59:25 np0005480869.novalocal systemd-logind[801]: New session 5 of user zuul.
Oct 11 03:59:25 np0005480869.novalocal systemd[1]: Started Session 5 of User zuul.
Oct 11 03:59:25 np0005480869.novalocal sshd-session[4672]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 11 03:59:25 np0005480869.novalocal sudo[4699]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjzalhjhyhxmehojrnrlrhjytpjubxue ; /usr/bin/python3'
Oct 11 03:59:25 np0005480869.novalocal sudo[4699]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:59:25 np0005480869.novalocal python3[4701]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
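The dnf task above installs podman and buildah if they are not already present (state=present); a shell one-liner equivalent:

    dnf -y install podman buildah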
Oct 11 03:59:39 np0005480869.novalocal kernel: SELinux:  Converting 364 SID table entries...
Oct 11 03:59:39 np0005480869.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Oct 11 03:59:39 np0005480869.novalocal kernel: SELinux:  policy capability open_perms=1
Oct 11 03:59:39 np0005480869.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Oct 11 03:59:39 np0005480869.novalocal kernel: SELinux:  policy capability always_check_network=0
Oct 11 03:59:39 np0005480869.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 11 03:59:39 np0005480869.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 11 03:59:39 np0005480869.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 11 03:59:48 np0005480869.novalocal kernel: SELinux:  Converting 364 SID table entries...
Oct 11 03:59:48 np0005480869.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Oct 11 03:59:48 np0005480869.novalocal kernel: SELinux:  policy capability open_perms=1
Oct 11 03:59:48 np0005480869.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Oct 11 03:59:48 np0005480869.novalocal kernel: SELinux:  policy capability always_check_network=0
Oct 11 03:59:48 np0005480869.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 11 03:59:48 np0005480869.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 11 03:59:48 np0005480869.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 11 03:59:56 np0005480869.novalocal kernel: SELinux:  Converting 364 SID table entries...
Oct 11 03:59:56 np0005480869.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Oct 11 03:59:56 np0005480869.novalocal kernel: SELinux:  policy capability open_perms=1
Oct 11 03:59:56 np0005480869.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Oct 11 03:59:56 np0005480869.novalocal kernel: SELinux:  policy capability always_check_network=0
Oct 11 03:59:56 np0005480869.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 11 03:59:56 np0005480869.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 11 03:59:56 np0005480869.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 11 03:59:58 np0005480869.novalocal setsebool[4764]: The virt_use_nfs policy boolean was changed to 1 by root
Oct 11 03:59:58 np0005480869.novalocal setsebool[4764]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
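The two setsebool lines above record the virt_use_nfs and virt_sandbox_use_all_caps booleans being switched on, most likely by package scriptlets running inside the dnf transaction. The same change from a shell would look like the sketch below; -P is added here only to make it persistent, which the log does not confirm:

    setsebool -P virt_use_nfs on
    setsebool -P virt_sandbox_use_all_caps on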
Oct 11 04:00:08 np0005480869.novalocal kernel: SELinux:  Converting 367 SID table entries...
Oct 11 04:00:08 np0005480869.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Oct 11 04:00:08 np0005480869.novalocal kernel: SELinux:  policy capability open_perms=1
Oct 11 04:00:08 np0005480869.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Oct 11 04:00:08 np0005480869.novalocal kernel: SELinux:  policy capability always_check_network=0
Oct 11 04:00:08 np0005480869.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 11 04:00:08 np0005480869.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 11 04:00:08 np0005480869.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 11 04:00:26 np0005480869.novalocal dbus-broker-launch[785]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Oct 11 04:00:26 np0005480869.novalocal systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 11 04:00:26 np0005480869.novalocal systemd[1]: Starting man-db-cache-update.service...
Oct 11 04:00:26 np0005480869.novalocal systemd[1]: Reloading.
Oct 11 04:00:26 np0005480869.novalocal systemd-rc-local-generator[5520]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:00:26 np0005480869.novalocal systemd[1]: Queuing reload/restart jobs for marked units…
Oct 11 04:00:27 np0005480869.novalocal systemd[1]: Starting PackageKit Daemon...
Oct 11 04:00:27 np0005480869.novalocal PackageKit[6087]: daemon start
Oct 11 04:00:27 np0005480869.novalocal systemd[1]: Starting Authorization Manager...
Oct 11 04:00:27 np0005480869.novalocal polkitd[6176]: Started polkitd version 0.117
Oct 11 04:00:27 np0005480869.novalocal polkitd[6176]: Loading rules from directory /etc/polkit-1/rules.d
Oct 11 04:00:27 np0005480869.novalocal polkitd[6176]: Loading rules from directory /usr/share/polkit-1/rules.d
Oct 11 04:00:27 np0005480869.novalocal polkitd[6176]: Finished loading, compiling and executing 3 rules
Oct 11 04:00:27 np0005480869.novalocal systemd[1]: Started Authorization Manager.
Oct 11 04:00:27 np0005480869.novalocal polkitd[6176]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Oct 11 04:00:27 np0005480869.novalocal systemd[1]: Started PackageKit Daemon.
Oct 11 04:00:28 np0005480869.novalocal sudo[4699]: pam_unix(sudo:session): session closed for user root
Oct 11 04:00:34 np0005480869.novalocal python3[10550]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"
                                                        _uses_shell=True zuul_log_id=fa163ec2-ffbe-0885-8d39-00000000000a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:00:35 np0005480869.novalocal kernel: evm: overlay not supported
Oct 11 04:00:35 np0005480869.novalocal systemd[1067]: Starting D-Bus User Message Bus...
Oct 11 04:00:35 np0005480869.novalocal dbus-broker-launch[10950]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Oct 11 04:00:35 np0005480869.novalocal dbus-broker-launch[10950]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Oct 11 04:00:35 np0005480869.novalocal systemd[1067]: Started D-Bus User Message Bus.
Oct 11 04:00:35 np0005480869.novalocal dbus-broker-lau[10950]: Ready
Oct 11 04:00:35 np0005480869.novalocal systemd[1067]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Oct 11 04:00:35 np0005480869.novalocal systemd[1067]: Created slice Slice /user.
Oct 11 04:00:35 np0005480869.novalocal systemd[1067]: podman-10865.scope: unit configures an IP firewall, but not running as root.
Oct 11 04:00:35 np0005480869.novalocal systemd[1067]: (This warning is only shown for the first unit using IP firewalling.)
Oct 11 04:00:35 np0005480869.novalocal systemd[1067]: Started podman-10865.scope.
Oct 11 04:00:35 np0005480869.novalocal systemd[1067]: Started podman-pause-c069ad24.scope.
Oct 11 04:00:36 np0005480869.novalocal sudo[11285]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akhszmixrnrobembhqfqnhtfpuwvkinb ; /usr/bin/python3'
Oct 11 04:00:36 np0005480869.novalocal sudo[11285]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:00:36 np0005480869.novalocal python3[11302]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]
                                                       location = "38.102.83.66:5001"
                                                       insecure = true path=/etc/containers/registries.conf block=[[registry]]
                                                       location = "38.102.83.66:5001"
                                                       insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:00:36 np0005480869.novalocal sudo[11285]: pam_unix(sudo:session): session closed for user root
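The blockinfile task above appends a marked block to /etc/containers/registries.conf so podman will pull from the CI registry at 38.102.83.66:5001 without TLS. A shell sketch of the block it writes, using the BEGIN/END markers shown in the invocation:

    # Append the same managed block the blockinfile task adds.
    printf '%s\n' \
        '# BEGIN ANSIBLE MANAGED BLOCK' \
        '[[registry]]' \
        'location = "38.102.83.66:5001"' \
        'insecure = true' \
        '# END ANSIBLE MANAGED BLOCK' \
        >> /etc/containers/registries.conf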
Oct 11 04:00:36 np0005480869.novalocal sshd-session[4675]: Connection closed by 38.102.83.114 port 52728
Oct 11 04:00:36 np0005480869.novalocal sshd-session[4672]: pam_unix(sshd:session): session closed for user zuul
Oct 11 04:00:36 np0005480869.novalocal systemd[1]: session-5.scope: Deactivated successfully.
Oct 11 04:00:36 np0005480869.novalocal systemd[1]: session-5.scope: Consumed 58.131s CPU time.
Oct 11 04:00:36 np0005480869.novalocal systemd-logind[801]: Session 5 logged out. Waiting for processes to exit.
Oct 11 04:00:36 np0005480869.novalocal systemd-logind[801]: Removed session 5.
Oct 11 04:00:55 np0005480869.novalocal sshd-session[18503]: Unable to negotiate with 38.102.83.192 port 52514: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Oct 11 04:00:55 np0005480869.novalocal sshd-session[18506]: Connection closed by 38.102.83.192 port 52504 [preauth]
Oct 11 04:00:55 np0005480869.novalocal sshd-session[18505]: Unable to negotiate with 38.102.83.192 port 52528: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Oct 11 04:00:55 np0005480869.novalocal sshd-session[18501]: Unable to negotiate with 38.102.83.192 port 52540: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Oct 11 04:00:55 np0005480869.novalocal sshd-session[18507]: Connection closed by 38.102.83.192 port 52512 [preauth]
Oct 11 04:01:00 np0005480869.novalocal sshd-session[20032]: Accepted publickey for zuul from 38.102.83.114 port 47722 ssh2: RSA SHA256:dWlyUpO/mrqMp/eYKItgj5U2Mvcj3EJpSdFVtC0zru4
Oct 11 04:01:00 np0005480869.novalocal systemd-logind[801]: New session 6 of user zuul.
Oct 11 04:01:00 np0005480869.novalocal systemd[1]: Started Session 6 of User zuul.
Oct 11 04:01:00 np0005480869.novalocal sshd-session[20032]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 11 04:01:00 np0005480869.novalocal python3[20134]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNmK3pK5SAYcnv2b30XExb6M0TjnUd0PEYX87oxRlCTTymx7VE8+CdiK754qG9nRrAtL/eTfyZbOCflhWdqacTQ= zuul@np0005480868.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 11 04:01:00 np0005480869.novalocal sudo[20300]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppppeooveueafwsssvsbbclcezobjiih ; /usr/bin/python3'
Oct 11 04:01:00 np0005480869.novalocal sudo[20300]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:01:00 np0005480869.novalocal python3[20311]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNmK3pK5SAYcnv2b30XExb6M0TjnUd0PEYX87oxRlCTTymx7VE8+CdiK754qG9nRrAtL/eTfyZbOCflhWdqacTQ= zuul@np0005480868.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 11 04:01:00 np0005480869.novalocal sudo[20300]: pam_unix(sudo:session): session closed for user root
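The two authorized_key tasks above install the same ECDSA public key for zuul and for root, presumably so np0005480868 (the key's comment) can SSH in. A minimal shell sketch for the zuul account, with the key copied verbatim from the log; the root task is analogous with /root, and -g zuul assumes the usual same-named primary group:

    key='ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNmK3pK5SAYcnv2b30XExb6M0TjnUd0PEYX87oxRlCTTymx7VE8+CdiK754qG9nRrAtL/eTfyZbOCflhWdqacTQ= zuul@np0005480868.novalocal'
    install -d -m 0700 -o zuul -g zuul /home/zuul/.ssh
    grep -qxF "$key" /home/zuul/.ssh/authorized_keys 2>/dev/null \
        || echo "$key" >> /home/zuul/.ssh/authorized_keys
    chown zuul:zuul /home/zuul/.ssh/authorized_keys
    chmod 0600 /home/zuul/.ssh/authorized_keys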
Oct 11 04:01:01 np0005480869.novalocal CROND[20465]: (root) CMD (run-parts /etc/cron.hourly)
Oct 11 04:01:01 np0005480869.novalocal run-parts[20470]: (/etc/cron.hourly) starting 0anacron
Oct 11 04:01:01 np0005480869.novalocal anacron[20485]: Anacron started on 2025-10-11
Oct 11 04:01:01 np0005480869.novalocal anacron[20485]: Will run job `cron.daily' in 16 min.
Oct 11 04:01:01 np0005480869.novalocal anacron[20485]: Will run job `cron.weekly' in 36 min.
Oct 11 04:01:01 np0005480869.novalocal anacron[20485]: Will run job `cron.monthly' in 56 min.
Oct 11 04:01:01 np0005480869.novalocal anacron[20485]: Jobs will be executed sequentially
Oct 11 04:01:01 np0005480869.novalocal run-parts[20488]: (/etc/cron.hourly) finished 0anacron
Oct 11 04:01:01 np0005480869.novalocal CROND[20461]: (root) CMDEND (run-parts /etc/cron.hourly)
Oct 11 04:01:01 np0005480869.novalocal sudo[20643]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmysvbuqhqcigvfkcrwspgulunpxdlco ; /usr/bin/python3'
Oct 11 04:01:01 np0005480869.novalocal sudo[20643]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:01:01 np0005480869.novalocal python3[20653]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005480869.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Oct 11 04:01:01 np0005480869.novalocal useradd[20712]: new group: name=cloud-admin, GID=1002
Oct 11 04:01:01 np0005480869.novalocal useradd[20712]: new user: name=cloud-admin, UID=1002, GID=1002, home=/home/cloud-admin, shell=/bin/bash, from=none
Oct 11 04:01:01 np0005480869.novalocal sudo[20643]: pam_unix(sudo:session): session closed for user root
Oct 11 04:01:02 np0005480869.novalocal sudo[20864]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfiburnrirqzmzjyprogwhwbevugvbxg ; /usr/bin/python3'
Oct 11 04:01:02 np0005480869.novalocal sudo[20864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:01:02 np0005480869.novalocal python3[20873]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNmK3pK5SAYcnv2b30XExb6M0TjnUd0PEYX87oxRlCTTymx7VE8+CdiK754qG9nRrAtL/eTfyZbOCflhWdqacTQ= zuul@np0005480868.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 11 04:01:02 np0005480869.novalocal sudo[20864]: pam_unix(sudo:session): session closed for user root
Oct 11 04:01:02 np0005480869.novalocal sudo[21142]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gyyxayuajjnnegxlbnvcfsczsakmjlac ; /usr/bin/python3'
Oct 11 04:01:02 np0005480869.novalocal sudo[21142]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:01:02 np0005480869.novalocal python3[21152]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 11 04:01:02 np0005480869.novalocal sudo[21142]: pam_unix(sudo:session): session closed for user root
Oct 11 04:01:03 np0005480869.novalocal sudo[21402]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eovdwmcjxplarxenvxflftbhrudowqhi ; /usr/bin/python3'
Oct 11 04:01:03 np0005480869.novalocal sudo[21402]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:01:03 np0005480869.novalocal python3[21416]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1760155262.4973464-135-37269944862657/source _original_basename=tmprmrjuv09 follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:01:03 np0005480869.novalocal sudo[21402]: pam_unix(sudo:session): session closed for user root
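The user, stat and copy tasks above create the cloud-admin account and drop a sudoers fragment for it with mode 0640; the fragment's body is not logged (content=NOT_LOGGING_PARAMETER). A shell sketch with a hypothetical NOPASSWD rule standing in for the real, unlogged content:

    useradd --create-home --shell /bin/bash cloud-admin
    # Hypothetical rule -- the real /etc/sudoers.d/cloud-admin body is not shown in the log.
    echo 'cloud-admin ALL=(ALL) NOPASSWD:ALL' > /etc/sudoers.d/cloud-admin
    chmod 0640 /etc/sudoers.d/cloud-admin
    visudo -cf /etc/sudoers.d/cloud-admin   # syntax-check the fragment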
Oct 11 04:01:03 np0005480869.novalocal sudo[21739]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ammknbpfvyffuxvnbzdtaksnmrfnuotv ; /usr/bin/python3'
Oct 11 04:01:03 np0005480869.novalocal sudo[21739]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:01:04 np0005480869.novalocal python3[21745]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Oct 11 04:01:04 np0005480869.novalocal systemd[1]: Starting Hostname Service...
Oct 11 04:01:04 np0005480869.novalocal systemd[1]: Started Hostname Service.
Oct 11 04:01:04 np0005480869.novalocal systemd-hostnamed[21842]: Changed pretty hostname to 'compute-0'
Oct 11 04:01:04 compute-0 systemd-hostnamed[21842]: Hostname set to <compute-0> (static)
Oct 11 04:01:04 compute-0 NetworkManager[3958]: <info>  [1760155264.2904] hostname: static hostname changed from "np0005480869.novalocal" to "compute-0"
Oct 11 04:01:04 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 11 04:01:04 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 11 04:01:04 compute-0 sudo[21739]: pam_unix(sudo:session): session closed for user root
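The hostname task above renames the node from np0005480869.novalocal to compute-0 via systemd-hostnamed (use=systemd), which is why NetworkManager immediately sees the static hostname change. The same change from a shell:

    hostnamectl set-hostname compute-0
    hostnamectl status    # should now report the static hostname as compute-0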
Oct 11 04:01:04 compute-0 sshd-session[20075]: Connection closed by 38.102.83.114 port 47722
Oct 11 04:01:04 compute-0 sshd-session[20032]: pam_unix(sshd:session): session closed for user zuul
Oct 11 04:01:04 compute-0 systemd[1]: session-6.scope: Deactivated successfully.
Oct 11 04:01:04 compute-0 systemd[1]: session-6.scope: Consumed 2.389s CPU time.
Oct 11 04:01:04 compute-0 systemd-logind[801]: Session 6 logged out. Waiting for processes to exit.
Oct 11 04:01:04 compute-0 systemd-logind[801]: Removed session 6.
Oct 11 04:01:14 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 11 04:01:18 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 11 04:01:18 compute-0 systemd[1]: Finished man-db-cache-update.service.
Oct 11 04:01:18 compute-0 systemd[1]: man-db-cache-update.service: Consumed 58.688s CPU time.
Oct 11 04:01:18 compute-0 systemd[1]: run-r174e500f6c7e4715872818c86a9495e4.service: Deactivated successfully.
Oct 11 04:01:34 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 11 04:04:37 compute-0 sshd-session[26580]: Accepted publickey for zuul from 38.102.83.192 port 52036 ssh2: RSA SHA256:dWlyUpO/mrqMp/eYKItgj5U2Mvcj3EJpSdFVtC0zru4
Oct 11 04:04:37 compute-0 systemd-logind[801]: New session 7 of user zuul.
Oct 11 04:04:37 compute-0 systemd[1]: Started Session 7 of User zuul.
Oct 11 04:04:37 compute-0 sshd-session[26580]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 11 04:04:38 compute-0 python3[26656]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 04:04:39 compute-0 sudo[26770]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvzowurbyyfobldytogwksdwojzdtqkt ; /usr/bin/python3'
Oct 11 04:04:39 compute-0 sudo[26770]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:04:39 compute-0 python3[26772]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 11 04:04:39 compute-0 sudo[26770]: pam_unix(sudo:session): session closed for user root
Oct 11 04:04:40 compute-0 sudo[26843]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-graloksygzmmjozykvtiuhpwfjlscnyh ; /usr/bin/python3'
Oct 11 04:04:40 compute-0 sudo[26843]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:04:40 compute-0 python3[26845]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1760155479.6350095-30191-142297358451373/source mode=0755 _original_basename=delorean.repo follow=False checksum=f3fabc627b4c59ab3d10213193ffdeeed080e354 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:04:40 compute-0 sudo[26843]: pam_unix(sudo:session): session closed for user root
Oct 11 04:04:40 compute-0 sudo[26869]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdvjvottvraeumrbnzuexfgpmzxwperb ; /usr/bin/python3'
Oct 11 04:04:40 compute-0 sudo[26869]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:04:40 compute-0 python3[26871]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 11 04:04:40 compute-0 sudo[26869]: pam_unix(sudo:session): session closed for user root
Oct 11 04:04:40 compute-0 sudo[26942]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onvkcupwoxlzbdbxirxrbavdvgtrldat ; /usr/bin/python3'
Oct 11 04:04:40 compute-0 sudo[26942]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:04:40 compute-0 python3[26944]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1760155479.6350095-30191-142297358451373/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=0bdbb813b840548359ae77c28d76ca272ccaf31b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:04:40 compute-0 sudo[26942]: pam_unix(sudo:session): session closed for user root
Oct 11 04:04:41 compute-0 sudo[26968]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfmavhuygwxdwjttjynbdrdgxywhnygi ; /usr/bin/python3'
Oct 11 04:04:41 compute-0 sudo[26968]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:04:41 compute-0 python3[26970]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 11 04:04:41 compute-0 sudo[26968]: pam_unix(sudo:session): session closed for user root
Oct 11 04:04:41 compute-0 sudo[27041]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpmgnsdsrxajocvxpulatuslqekfksyv ; /usr/bin/python3'
Oct 11 04:04:41 compute-0 sudo[27041]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:04:41 compute-0 python3[27043]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1760155479.6350095-30191-142297358451373/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:04:41 compute-0 sudo[27041]: pam_unix(sudo:session): session closed for user root
Oct 11 04:04:41 compute-0 sudo[27067]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxbnoyugqbvyrexyxoolzkwkexdrijwh ; /usr/bin/python3'
Oct 11 04:04:41 compute-0 sudo[27067]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:04:41 compute-0 python3[27069]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 11 04:04:41 compute-0 sudo[27067]: pam_unix(sudo:session): session closed for user root
Oct 11 04:04:42 compute-0 sudo[27140]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-riyjbivcxkatxhuyltkufslznvmhwpeg ; /usr/bin/python3'
Oct 11 04:04:42 compute-0 sudo[27140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:04:42 compute-0 python3[27142]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1760155479.6350095-30191-142297358451373/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:04:42 compute-0 sudo[27140]: pam_unix(sudo:session): session closed for user root
Oct 11 04:04:42 compute-0 sudo[27166]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iynqyodzwelckwytjpsaqthdjdayhrvg ; /usr/bin/python3'
Oct 11 04:04:42 compute-0 sudo[27166]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:04:42 compute-0 python3[27168]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 11 04:04:42 compute-0 sudo[27166]: pam_unix(sudo:session): session closed for user root
Oct 11 04:04:42 compute-0 sudo[27239]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kejuetqgeabrktmawdvkdtebdsgwwfmy ; /usr/bin/python3'
Oct 11 04:04:42 compute-0 sudo[27239]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:04:42 compute-0 python3[27241]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1760155479.6350095-30191-142297358451373/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:04:42 compute-0 sudo[27239]: pam_unix(sudo:session): session closed for user root
Oct 11 04:04:43 compute-0 sudo[27265]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgmdyepadyltvnnkwouwyurejxsemhli ; /usr/bin/python3'
Oct 11 04:04:43 compute-0 sudo[27265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:04:43 compute-0 python3[27267]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 11 04:04:43 compute-0 sudo[27265]: pam_unix(sudo:session): session closed for user root
Oct 11 04:04:43 compute-0 sudo[27338]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppkzjjflghwixpabudhpuqizkcevximz ; /usr/bin/python3'
Oct 11 04:04:43 compute-0 sudo[27338]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:04:43 compute-0 python3[27340]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1760155479.6350095-30191-142297358451373/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:04:43 compute-0 sudo[27338]: pam_unix(sudo:session): session closed for user root
Oct 11 04:04:43 compute-0 sudo[27364]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufucykzuvsdblljbhwoxtzxbyfsrhuta ; /usr/bin/python3'
Oct 11 04:04:43 compute-0 sudo[27364]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:04:43 compute-0 python3[27366]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 11 04:04:43 compute-0 sudo[27364]: pam_unix(sudo:session): session closed for user root
Oct 11 04:04:43 compute-0 sudo[27437]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usblevdzszbdcjtiymdlozvdybypfeuh ; /usr/bin/python3'
Oct 11 04:04:43 compute-0 sudo[27437]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:04:44 compute-0 python3[27439]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1760155479.6350095-30191-142297358451373/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=5e44558a2b46929660a6b5bfc8824fb4521580a4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:04:44 compute-0 sudo[27437]: pam_unix(sudo:session): session closed for user root
Oct 11 04:04:46 compute-0 sshd-session[27464]: Unable to negotiate with 192.168.122.11 port 51642: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Oct 11 04:04:46 compute-0 sshd-session[27466]: Unable to negotiate with 192.168.122.11 port 51652: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Oct 11 04:04:46 compute-0 sshd-session[27465]: Unable to negotiate with 192.168.122.11 port 51662: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Oct 11 04:04:46 compute-0 sshd-session[27469]: Connection closed by 192.168.122.11 port 51612 [preauth]
Oct 11 04:04:46 compute-0 sshd-session[27468]: Connection closed by 192.168.122.11 port 51628 [preauth]
Oct 11 04:04:55 compute-0 python3[27497]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:05:32 compute-0 PackageKit[6087]: daemon quit
Oct 11 04:05:32 compute-0 systemd[1]: packagekit.service: Deactivated successfully.
Oct 11 04:09:54 compute-0 sshd-session[26583]: Received disconnect from 38.102.83.192 port 52036:11: disconnected by user
Oct 11 04:09:54 compute-0 sshd-session[26583]: Disconnected from user zuul 38.102.83.192 port 52036
Oct 11 04:09:54 compute-0 sshd-session[26580]: pam_unix(sshd:session): session closed for user zuul
Oct 11 04:09:54 compute-0 systemd[1]: session-7.scope: Deactivated successfully.
Oct 11 04:09:54 compute-0 systemd[1]: session-7.scope: Consumed 5.092s CPU time.
Oct 11 04:09:54 compute-0 systemd-logind[801]: Session 7 logged out. Waiting for processes to exit.
Oct 11 04:09:54 compute-0 systemd-logind[801]: Removed session 7.
Oct 11 04:15:53 compute-0 sshd-session[27506]: Accepted publickey for zuul from 192.168.122.30 port 50148 ssh2: ECDSA SHA256:fsXfQ2jtOYwhi5zevC2AykH2PVPp1BPcbFOVNPoZIaQ
Oct 11 04:15:53 compute-0 systemd-logind[801]: New session 8 of user zuul.
Oct 11 04:15:53 compute-0 systemd[1]: Started Session 8 of User zuul.
Oct 11 04:15:53 compute-0 sshd-session[27506]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 11 04:15:54 compute-0 python3.9[27659]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 04:15:55 compute-0 sudo[27838]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdglotcrwxzkolxrfaarqrwginalidpn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156155.3064744-32-245599159548578/AnsiballZ_command.py'
Oct 11 04:15:55 compute-0 sudo[27838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:15:56 compute-0 python3.9[27840]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:16:03 compute-0 sudo[27838]: pam_unix(sudo:session): session closed for user root
Oct 11 04:16:03 compute-0 sshd-session[27509]: Connection closed by 192.168.122.30 port 50148
Oct 11 04:16:03 compute-0 sshd-session[27506]: pam_unix(sshd:session): session closed for user zuul
Oct 11 04:16:03 compute-0 systemd[1]: session-8.scope: Deactivated successfully.
Oct 11 04:16:03 compute-0 systemd[1]: session-8.scope: Consumed 8.078s CPU time.
Oct 11 04:16:03 compute-0 systemd-logind[801]: Session 8 logged out. Waiting for processes to exit.
Oct 11 04:16:03 compute-0 systemd-logind[801]: Removed session 8.
Oct 11 04:16:19 compute-0 sshd-session[27897]: Accepted publickey for zuul from 192.168.122.30 port 51498 ssh2: ECDSA SHA256:fsXfQ2jtOYwhi5zevC2AykH2PVPp1BPcbFOVNPoZIaQ
Oct 11 04:16:19 compute-0 systemd-logind[801]: New session 9 of user zuul.
Oct 11 04:16:19 compute-0 systemd[1]: Started Session 9 of User zuul.
Oct 11 04:16:19 compute-0 sshd-session[27897]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 11 04:16:20 compute-0 python3.9[28050]: ansible-ansible.legacy.ping Invoked with data=pong
Oct 11 04:16:21 compute-0 python3.9[28224]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 04:16:21 compute-0 sudo[28374]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-leobudzkieuqktywdtvlndxcaprvewbe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156181.4804397-45-241816697848754/AnsiballZ_command.py'
Oct 11 04:16:21 compute-0 sudo[28374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:16:22 compute-0 python3.9[28376]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:16:22 compute-0 sudo[28374]: pam_unix(sudo:session): session closed for user root
Oct 11 04:16:22 compute-0 sudo[28527]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnqxmwssxenimdyqyprhqjmsbnekgxke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156182.4396334-57-235652059764464/AnsiballZ_stat.py'
Oct 11 04:16:22 compute-0 sudo[28527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:16:23 compute-0 python3.9[28529]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:16:23 compute-0 sudo[28527]: pam_unix(sudo:session): session closed for user root
Oct 11 04:16:23 compute-0 sudo[28679]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvxbqtvcmtrxpajqyqkbkpmsskxzblkx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156183.3294077-65-209291645411258/AnsiballZ_file.py'
Oct 11 04:16:23 compute-0 sudo[28679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:16:23 compute-0 python3.9[28681]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:16:24 compute-0 sudo[28679]: pam_unix(sudo:session): session closed for user root
Oct 11 04:16:24 compute-0 sudo[28831]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onuhjvggooffphkhjokdvjivdjxvmtzr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156184.1450033-73-207379108047981/AnsiballZ_stat.py'
Oct 11 04:16:24 compute-0 sudo[28831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:16:24 compute-0 python3.9[28833]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:16:24 compute-0 sudo[28831]: pam_unix(sudo:session): session closed for user root
Oct 11 04:16:25 compute-0 sudo[28954]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmfvjnvwxiyudreabaxgqyrbdkfpfpby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156184.1450033-73-207379108047981/AnsiballZ_copy.py'
Oct 11 04:16:25 compute-0 sudo[28954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:16:25 compute-0 python3.9[28956]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1760156184.1450033-73-207379108047981/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:16:25 compute-0 sudo[28954]: pam_unix(sudo:session): session closed for user root
Oct 11 04:16:25 compute-0 sudo[29106]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtgjthguppbcrdwcxqjaakapinphhcee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156185.564347-88-68017671735749/AnsiballZ_setup.py'
Oct 11 04:16:25 compute-0 sudo[29106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:16:26 compute-0 python3.9[29108]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 04:16:26 compute-0 sudo[29106]: pam_unix(sudo:session): session closed for user root
Oct 11 04:16:26 compute-0 sudo[29262]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dswlwszqgtlewqxqzmxvsqnqctozvmyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156186.5890427-96-230341006963961/AnsiballZ_file.py'
Oct 11 04:16:26 compute-0 sudo[29262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:16:27 compute-0 python3.9[29264]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:16:27 compute-0 sudo[29262]: pam_unix(sudo:session): session closed for user root
Oct 11 04:16:28 compute-0 python3.9[29414]: ansible-ansible.builtin.service_facts Invoked
Oct 11 04:16:34 compute-0 python3.9[29669]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:16:34 compute-0 python3.9[29819]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 04:16:36 compute-0 python3.9[29973]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 04:16:37 compute-0 sudo[30129]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibtiqsbcnxygszhvgbptpwxgomkbgmlf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156196.6790767-144-235100820500051/AnsiballZ_setup.py'
Oct 11 04:16:37 compute-0 sudo[30129]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:16:37 compute-0 python3.9[30131]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 11 04:16:37 compute-0 sudo[30129]: pam_unix(sudo:session): session closed for user root
Oct 11 04:16:38 compute-0 sudo[30213]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rddddrzvjfkxofpkjtsjhytdbtdmkire ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156196.6790767-144-235100820500051/AnsiballZ_dnf.py'
Oct 11 04:16:38 compute-0 sudo[30213]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:16:38 compute-0 python3.9[30215]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 11 04:17:01 compute-0 anacron[20485]: Job `cron.daily' started
Oct 11 04:17:01 compute-0 anacron[20485]: Job `cron.daily' terminated
Oct 11 04:17:20 compute-0 systemd[1]: Reloading.
Oct 11 04:17:20 compute-0 systemd-rc-local-generator[30415]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:17:20 compute-0 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Oct 11 04:17:21 compute-0 systemd[1]: Reloading.
Oct 11 04:17:21 compute-0 systemd-rc-local-generator[30455]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:17:21 compute-0 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Oct 11 04:17:21 compute-0 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Oct 11 04:17:21 compute-0 systemd[1]: Reloading.
Oct 11 04:17:21 compute-0 systemd-rc-local-generator[30495]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:17:21 compute-0 systemd[1]: Listening on LVM2 poll daemon socket.
Oct 11 04:17:21 compute-0 dbus-broker-launch[777]: Noticed file-system modification, trigger reload.
Oct 11 04:17:22 compute-0 dbus-broker-launch[777]: Noticed file-system modification, trigger reload.
Oct 11 04:18:22 compute-0 kernel: SELinux:  Converting 2714 SID table entries...
Oct 11 04:18:22 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Oct 11 04:18:22 compute-0 kernel: SELinux:  policy capability open_perms=1
Oct 11 04:18:22 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Oct 11 04:18:22 compute-0 kernel: SELinux:  policy capability always_check_network=0
Oct 11 04:18:22 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 11 04:18:22 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 11 04:18:22 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 11 04:18:22 compute-0 dbus-broker-launch[785]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Oct 11 04:18:22 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 11 04:18:22 compute-0 systemd[1]: Starting man-db-cache-update.service...
Oct 11 04:18:22 compute-0 systemd[1]: Reloading.
Oct 11 04:18:22 compute-0 systemd-rc-local-generator[30804]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:18:22 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 11 04:18:23 compute-0 systemd[1]: Starting PackageKit Daemon...
Oct 11 04:18:23 compute-0 PackageKit[31018]: daemon start
Oct 11 04:18:23 compute-0 systemd[1]: Started PackageKit Daemon.
Oct 11 04:18:23 compute-0 sudo[30213]: pam_unix(sudo:session): session closed for user root
Oct 11 04:18:23 compute-0 sudo[31720]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqxjjwbdnwabavovnvophedjvdrzfwtj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156303.5187526-156-218275302781422/AnsiballZ_command.py'
Oct 11 04:18:23 compute-0 sudo[31720]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:18:23 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 11 04:18:23 compute-0 systemd[1]: Finished man-db-cache-update.service.
Oct 11 04:18:23 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.478s CPU time.
Oct 11 04:18:23 compute-0 systemd[1]: run-r80246e118f2c4f10a52bb5e2f0e0fd80.service: Deactivated successfully.
Oct 11 04:18:24 compute-0 python3.9[31723]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:18:25 compute-0 sudo[31720]: pam_unix(sudo:session): session closed for user root
Oct 11 04:18:26 compute-0 sudo[32002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kozjtyfjpvcwxfxbidxauekjlhcsdmio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156305.4672418-164-6418488171780/AnsiballZ_selinux.py'
Oct 11 04:18:26 compute-0 sudo[32002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:18:26 compute-0 python3.9[32004]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Oct 11 04:18:26 compute-0 sudo[32002]: pam_unix(sudo:session): session closed for user root
Oct 11 04:18:27 compute-0 sudo[32154]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcwszmnjpztoqautleugmnnweppowoje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156307.1237297-175-71817088650363/AnsiballZ_command.py'
Oct 11 04:18:27 compute-0 sudo[32154]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:18:27 compute-0 python3.9[32156]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Oct 11 04:18:28 compute-0 sudo[32154]: pam_unix(sudo:session): session closed for user root
Oct 11 04:18:29 compute-0 sudo[32307]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjwvelyqodccqgzjemshsfykuklnrnxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156308.7400405-183-121431522651961/AnsiballZ_file.py'
Oct 11 04:18:29 compute-0 sudo[32307]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:18:29 compute-0 python3.9[32309]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:18:29 compute-0 sudo[32307]: pam_unix(sudo:session): session closed for user root
Oct 11 04:18:30 compute-0 sudo[32459]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxilunxvtgiktqquszbjnmdairdxnddr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156309.8909142-191-188048584129136/AnsiballZ_mount.py'
Oct 11 04:18:30 compute-0 sudo[32459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:18:30 compute-0 python3.9[32461]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Oct 11 04:18:30 compute-0 sudo[32459]: pam_unix(sudo:session): session closed for user root
Oct 11 04:18:31 compute-0 sudo[32611]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pljzjoijcallicychceidxppxgmnwrlo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156311.3608618-219-196160070902089/AnsiballZ_file.py'
Oct 11 04:18:31 compute-0 sudo[32611]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:18:31 compute-0 python3.9[32613]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:18:31 compute-0 sudo[32611]: pam_unix(sudo:session): session closed for user root
Oct 11 04:18:32 compute-0 sudo[32763]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhhpjasotmxmuldliqdqhkpyaflghuql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156312.1541705-227-212972169187376/AnsiballZ_stat.py'
Oct 11 04:18:32 compute-0 sudo[32763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:18:32 compute-0 python3.9[32765]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:18:32 compute-0 sudo[32763]: pam_unix(sudo:session): session closed for user root
Oct 11 04:18:33 compute-0 sudo[32886]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ourzkggfiylobakkkisygudpxhcyrppy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156312.1541705-227-212972169187376/AnsiballZ_copy.py'
Oct 11 04:18:33 compute-0 sudo[32886]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:18:33 compute-0 python3.9[32888]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760156312.1541705-227-212972169187376/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=520cbc50dc10709fc8e2e65b75dc4f8582e9dfa8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:18:33 compute-0 sudo[32886]: pam_unix(sudo:session): session closed for user root
Oct 11 04:18:34 compute-0 sudo[33038]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjjcqcrvybfzmgpamjxpechgdfpkoltf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156313.8832686-254-229623934613098/AnsiballZ_getent.py'
Oct 11 04:18:34 compute-0 sudo[33038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:18:36 compute-0 python3.9[33040]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Oct 11 04:18:36 compute-0 sudo[33038]: pam_unix(sudo:session): session closed for user root
Oct 11 04:18:37 compute-0 sudo[33191]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-preiataxafrpgnlpxxwseokpqpmeyjix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156316.9877875-262-66618907488946/AnsiballZ_group.py'
Oct 11 04:18:37 compute-0 sudo[33191]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:18:37 compute-0 python3.9[33193]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 11 04:18:37 compute-0 groupadd[33194]: group added to /etc/group: name=qemu, GID=107
Oct 11 04:18:37 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 11 04:18:37 compute-0 groupadd[33194]: group added to /etc/gshadow: name=qemu
Oct 11 04:18:37 compute-0 groupadd[33194]: new group: name=qemu, GID=107
Oct 11 04:18:37 compute-0 sudo[33191]: pam_unix(sudo:session): session closed for user root
Oct 11 04:18:38 compute-0 sudo[33350]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fltjleizewtitvxzpnsnbpdqpgvktada ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156318.0141504-270-127477528605221/AnsiballZ_user.py'
Oct 11 04:18:38 compute-0 sudo[33350]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:18:38 compute-0 python3.9[33352]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct 11 04:18:38 compute-0 useradd[33354]: new user: name=qemu, UID=107, GID=107, home=/home/qemu, shell=/sbin/nologin, from=/dev/pts/0
Oct 11 04:18:38 compute-0 sudo[33350]: pam_unix(sudo:session): session closed for user root
Oct 11 04:18:39 compute-0 sudo[33510]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtcdcxsptiqvnejkdrwbwtiayrpofzfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156318.9700274-278-31882716725592/AnsiballZ_getent.py'
Oct 11 04:18:39 compute-0 sudo[33510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:18:39 compute-0 python3.9[33512]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Oct 11 04:18:39 compute-0 sudo[33510]: pam_unix(sudo:session): session closed for user root
Oct 11 04:18:39 compute-0 sudo[33663]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qsbahkwuglesavdhccemilwprcwamwuo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156319.695162-286-39120141121642/AnsiballZ_group.py'
Oct 11 04:18:39 compute-0 sudo[33663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:18:40 compute-0 python3.9[33665]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 11 04:18:40 compute-0 groupadd[33666]: group added to /etc/group: name=hugetlbfs, GID=42477
Oct 11 04:18:40 compute-0 groupadd[33666]: group added to /etc/gshadow: name=hugetlbfs
Oct 11 04:18:40 compute-0 groupadd[33666]: new group: name=hugetlbfs, GID=42477
Oct 11 04:18:40 compute-0 sudo[33663]: pam_unix(sudo:session): session closed for user root
Oct 11 04:18:40 compute-0 sudo[33821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgcxnfigrstdkswdptthnkutqvflrxqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156320.4506311-295-225252088160178/AnsiballZ_file.py'
Oct 11 04:18:40 compute-0 sudo[33821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:18:40 compute-0 python3.9[33823]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Oct 11 04:18:41 compute-0 sudo[33821]: pam_unix(sudo:session): session closed for user root
Oct 11 04:18:41 compute-0 sudo[33973]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwgjvomnbbffbxulcrlidbsqudkjecav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156321.3884437-306-237684469714347/AnsiballZ_dnf.py'
Oct 11 04:18:41 compute-0 sudo[33973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:18:41 compute-0 python3.9[33975]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 11 04:18:43 compute-0 sudo[33973]: pam_unix(sudo:session): session closed for user root
Oct 11 04:18:43 compute-0 sudo[34127]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwgmkdfdznvagnoxdsjznyrewdgcyshk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156323.5396042-314-45099633778060/AnsiballZ_file.py'
Oct 11 04:18:43 compute-0 sudo[34127]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:18:44 compute-0 python3.9[34129]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:18:44 compute-0 sudo[34127]: pam_unix(sudo:session): session closed for user root
Oct 11 04:18:44 compute-0 sudo[34279]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xamlgujydzpsmzgkqicdcszzablfjovj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156324.2729025-322-119315401446941/AnsiballZ_stat.py'
Oct 11 04:18:44 compute-0 sudo[34279]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:18:44 compute-0 python3.9[34281]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:18:44 compute-0 sudo[34279]: pam_unix(sudo:session): session closed for user root
Oct 11 04:18:45 compute-0 sudo[34402]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nyyumlxjchbxnvzflfffbpnqddizvsmv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156324.2729025-322-119315401446941/AnsiballZ_copy.py'
Oct 11 04:18:45 compute-0 sudo[34402]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:18:45 compute-0 python3.9[34404]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760156324.2729025-322-119315401446941/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:18:45 compute-0 sudo[34402]: pam_unix(sudo:session): session closed for user root
Oct 11 04:18:46 compute-0 sudo[34554]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gyzrabkofghyaczqxdhvnrtbroaruzzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156325.5423462-337-79927263904972/AnsiballZ_systemd.py'
Oct 11 04:18:46 compute-0 sudo[34554]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:18:46 compute-0 python3.9[34556]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 11 04:18:46 compute-0 systemd[1]: Starting Load Kernel Modules...
Oct 11 04:18:46 compute-0 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 11 04:18:46 compute-0 kernel: Bridge firewalling registered
Oct 11 04:18:46 compute-0 systemd-modules-load[34560]: Inserted module 'br_netfilter'
Oct 11 04:18:46 compute-0 systemd[1]: Finished Load Kernel Modules.
Oct 11 04:18:46 compute-0 sudo[34554]: pam_unix(sudo:session): session closed for user root
Oct 11 04:18:47 compute-0 sudo[34715]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gureeirliwterdgllxtfrwvmwovfmxzw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156326.9022033-345-112046314621988/AnsiballZ_stat.py'
Oct 11 04:18:47 compute-0 sudo[34715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:18:47 compute-0 python3.9[34717]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:18:47 compute-0 sudo[34715]: pam_unix(sudo:session): session closed for user root
Oct 11 04:18:47 compute-0 sudo[34838]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bktmmnwbivgzscybqzoihaadrftzcwnu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156326.9022033-345-112046314621988/AnsiballZ_copy.py'
Oct 11 04:18:47 compute-0 sudo[34838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:18:48 compute-0 python3.9[34840]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760156326.9022033-345-112046314621988/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:18:48 compute-0 sudo[34838]: pam_unix(sudo:session): session closed for user root
Oct 11 04:18:48 compute-0 sudo[34990]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rsvafsavqqcjeitmjuitevcllkidymjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156328.5479808-363-128823951371520/AnsiballZ_dnf.py'
Oct 11 04:18:48 compute-0 sudo[34990]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:18:49 compute-0 python3.9[34992]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 11 04:18:52 compute-0 dbus-broker-launch[777]: Noticed file-system modification, trigger reload.
Oct 11 04:18:52 compute-0 dbus-broker-launch[777]: Noticed file-system modification, trigger reload.
Oct 11 04:18:52 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 11 04:18:52 compute-0 systemd[1]: Starting man-db-cache-update.service...
Oct 11 04:18:52 compute-0 systemd[1]: Reloading.
Oct 11 04:18:52 compute-0 systemd-rc-local-generator[35054]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:18:52 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 11 04:18:53 compute-0 sudo[34990]: pam_unix(sudo:session): session closed for user root
Oct 11 04:18:54 compute-0 python3.9[36153]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:18:54 compute-0 python3.9[37012]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Oct 11 04:18:55 compute-0 python3.9[37678]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:18:56 compute-0 sudo[38436]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nsdyuhvraupazzxhegjluahnlakavdku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156335.8702457-402-106724864973983/AnsiballZ_command.py'
Oct 11 04:18:56 compute-0 sudo[38436]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:18:56 compute-0 python3.9[38471]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:18:56 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Oct 11 04:18:56 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Oct 11 04:18:57 compute-0 sudo[38436]: pam_unix(sudo:session): session closed for user root
Oct 11 04:18:57 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 11 04:18:57 compute-0 systemd[1]: Finished man-db-cache-update.service.
Oct 11 04:18:57 compute-0 systemd[1]: man-db-cache-update.service: Consumed 6.025s CPU time.
Oct 11 04:18:57 compute-0 systemd[1]: run-r180109f59cf8445da8bf74c5444ad6e7.service: Deactivated successfully.
Oct 11 04:18:57 compute-0 sudo[39534]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-askswccyfwngoexxnjgkrkrnibukkqui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156337.3035326-411-56614656284584/AnsiballZ_systemd.py'
Oct 11 04:18:57 compute-0 sudo[39534]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:18:58 compute-0 python3.9[39536]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:18:58 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Oct 11 04:18:58 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Oct 11 04:18:58 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Oct 11 04:18:58 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Oct 11 04:18:58 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Oct 11 04:18:58 compute-0 sudo[39534]: pam_unix(sudo:session): session closed for user root
Oct 11 04:18:59 compute-0 python3.9[39698]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Oct 11 04:19:01 compute-0 sudo[39848]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evdvjujtmzhntdtzhqmhdrxuwrondtwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156340.8108742-468-211435547315951/AnsiballZ_systemd.py'
Oct 11 04:19:01 compute-0 sudo[39848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:19:01 compute-0 python3.9[39850]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:19:01 compute-0 systemd[1]: Reloading.
Oct 11 04:19:01 compute-0 systemd-rc-local-generator[39880]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:19:01 compute-0 sudo[39848]: pam_unix(sudo:session): session closed for user root
Oct 11 04:19:02 compute-0 sudo[40038]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epofjjloxjchiwbyghopnjohyessepau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156342.0573578-468-127834332290792/AnsiballZ_systemd.py'
Oct 11 04:19:02 compute-0 sudo[40038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:19:02 compute-0 python3.9[40040]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:19:02 compute-0 systemd[1]: Reloading.
Oct 11 04:19:02 compute-0 systemd-rc-local-generator[40068]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:19:03 compute-0 sudo[40038]: pam_unix(sudo:session): session closed for user root
Oct 11 04:19:03 compute-0 sudo[40227]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azkdplakxwffjmwdipkknqqsmxfjnrbr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156343.2545497-484-225360896121769/AnsiballZ_command.py'
Oct 11 04:19:03 compute-0 sudo[40227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:19:03 compute-0 python3.9[40229]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:19:03 compute-0 sudo[40227]: pam_unix(sudo:session): session closed for user root
Oct 11 04:19:04 compute-0 sudo[40380]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-genrykiuptdojaihaappvumxqsbfqvvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156343.922203-492-155837215233976/AnsiballZ_command.py'
Oct 11 04:19:04 compute-0 sudo[40380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:19:04 compute-0 python3.9[40382]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:19:04 compute-0 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Oct 11 04:19:04 compute-0 sudo[40380]: pam_unix(sudo:session): session closed for user root
Oct 11 04:19:04 compute-0 sudo[40533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqmbjpuqizmyesqbeinwwvivmjiqpkbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156344.6270478-500-151227695252364/AnsiballZ_command.py'
Oct 11 04:19:04 compute-0 sudo[40533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:19:05 compute-0 python3.9[40535]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:19:06 compute-0 sudo[40533]: pam_unix(sudo:session): session closed for user root
Oct 11 04:19:07 compute-0 sudo[40695]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhlnqafihsgeskmwjgdzzjxnbqakyfqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156346.7926598-508-140488407518198/AnsiballZ_command.py'
Oct 11 04:19:07 compute-0 sudo[40695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:19:07 compute-0 python3.9[40697]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:19:07 compute-0 sudo[40695]: pam_unix(sudo:session): session closed for user root
Oct 11 04:19:07 compute-0 sudo[40848]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikbmalvhlttjfruoavpnzswisqkzqnme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156347.4962106-516-175105798923100/AnsiballZ_systemd.py'
Oct 11 04:19:07 compute-0 sudo[40848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:19:08 compute-0 python3.9[40850]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 11 04:19:08 compute-0 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 11 04:19:08 compute-0 systemd[1]: Stopped Apply Kernel Variables.
Oct 11 04:19:08 compute-0 systemd[1]: Stopping Apply Kernel Variables...
Oct 11 04:19:08 compute-0 systemd[1]: Starting Apply Kernel Variables...
Oct 11 04:19:08 compute-0 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Oct 11 04:19:08 compute-0 systemd[1]: Finished Apply Kernel Variables.
Oct 11 04:19:08 compute-0 sudo[40848]: pam_unix(sudo:session): session closed for user root
Oct 11 04:19:08 compute-0 sshd-session[27900]: Connection closed by 192.168.122.30 port 51498
Oct 11 04:19:08 compute-0 sshd-session[27897]: pam_unix(sshd:session): session closed for user zuul
Oct 11 04:19:08 compute-0 systemd[1]: session-9.scope: Deactivated successfully.
Oct 11 04:19:08 compute-0 systemd[1]: session-9.scope: Consumed 2min 13.481s CPU time.
Oct 11 04:19:08 compute-0 systemd-logind[801]: Session 9 logged out. Waiting for processes to exit.
Oct 11 04:19:08 compute-0 systemd-logind[801]: Removed session 9.
Oct 11 04:19:13 compute-0 sshd-session[40880]: Accepted publickey for zuul from 192.168.122.30 port 60692 ssh2: ECDSA SHA256:fsXfQ2jtOYwhi5zevC2AykH2PVPp1BPcbFOVNPoZIaQ
Oct 11 04:19:13 compute-0 systemd-logind[801]: New session 10 of user zuul.
Oct 11 04:19:13 compute-0 systemd[1]: Started Session 10 of User zuul.
Oct 11 04:19:13 compute-0 sshd-session[40880]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 11 04:19:15 compute-0 python3.9[41033]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 04:19:16 compute-0 sudo[41187]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfflkfpwowgrprhoetrtwwejapldhpfe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156355.6340308-36-135041041045649/AnsiballZ_getent.py'
Oct 11 04:19:16 compute-0 sudo[41187]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:19:16 compute-0 python3.9[41189]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Oct 11 04:19:16 compute-0 sudo[41187]: pam_unix(sudo:session): session closed for user root
Oct 11 04:19:17 compute-0 sudo[41340]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mekbouivkgggkhjaiotfhhdwfhlsyrqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156356.557207-44-60762786337844/AnsiballZ_group.py'
Oct 11 04:19:17 compute-0 sudo[41340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:19:17 compute-0 python3.9[41342]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 11 04:19:17 compute-0 groupadd[41343]: group added to /etc/group: name=openvswitch, GID=42476
Oct 11 04:19:17 compute-0 groupadd[41343]: group added to /etc/gshadow: name=openvswitch
Oct 11 04:19:17 compute-0 groupadd[41343]: new group: name=openvswitch, GID=42476
Oct 11 04:19:17 compute-0 sudo[41340]: pam_unix(sudo:session): session closed for user root
Oct 11 04:19:18 compute-0 sudo[41498]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqgxzoendnexpdiisedddhbipsjdkwyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156357.6068535-52-168349952573620/AnsiballZ_user.py'
Oct 11 04:19:18 compute-0 sudo[41498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:19:18 compute-0 python3.9[41500]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct 11 04:19:18 compute-0 useradd[41502]: new user: name=openvswitch, UID=42476, GID=42476, home=/home/openvswitch, shell=/sbin/nologin, from=/dev/pts/0
Oct 11 04:19:18 compute-0 useradd[41502]: add 'openvswitch' to group 'hugetlbfs'
Oct 11 04:19:18 compute-0 useradd[41502]: add 'openvswitch' to shadow group 'hugetlbfs'
Oct 11 04:19:18 compute-0 sudo[41498]: pam_unix(sudo:session): session closed for user root
Oct 11 04:19:19 compute-0 sudo[41658]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gapicrrrjqpleanuhbfxxfslqeetewho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156358.8653634-62-20034228929194/AnsiballZ_setup.py'
Oct 11 04:19:19 compute-0 sudo[41658]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:19:19 compute-0 python3.9[41660]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 11 04:19:19 compute-0 sudo[41658]: pam_unix(sudo:session): session closed for user root
Oct 11 04:19:20 compute-0 sudo[41742]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edltwsdymxmqkyefxbawcbxlybtzixzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156358.8653634-62-20034228929194/AnsiballZ_dnf.py'
Oct 11 04:19:20 compute-0 sudo[41742]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:19:20 compute-0 python3.9[41744]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct 11 04:19:22 compute-0 sudo[41742]: pam_unix(sudo:session): session closed for user root
Oct 11 04:19:23 compute-0 sudo[41905]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvlwoyhfdsqovaxkpvswthfdfephhtvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156362.786731-76-187901671768914/AnsiballZ_dnf.py'
Oct 11 04:19:23 compute-0 sudo[41905]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:19:23 compute-0 python3.9[41907]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 11 04:19:34 compute-0 kernel: SELinux:  Converting 2724 SID table entries...
Oct 11 04:19:34 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Oct 11 04:19:34 compute-0 kernel: SELinux:  policy capability open_perms=1
Oct 11 04:19:34 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Oct 11 04:19:34 compute-0 kernel: SELinux:  policy capability always_check_network=0
Oct 11 04:19:34 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 11 04:19:34 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 11 04:19:34 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 11 04:19:34 compute-0 groupadd[41930]: group added to /etc/group: name=unbound, GID=993
Oct 11 04:19:34 compute-0 groupadd[41930]: group added to /etc/gshadow: name=unbound
Oct 11 04:19:34 compute-0 groupadd[41930]: new group: name=unbound, GID=993
Oct 11 04:19:34 compute-0 useradd[41937]: new user: name=unbound, UID=993, GID=993, home=/var/lib/unbound, shell=/sbin/nologin, from=none
Oct 11 04:19:34 compute-0 dbus-broker-launch[785]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Oct 11 04:19:34 compute-0 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Oct 11 04:19:35 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 11 04:19:36 compute-0 systemd[1]: Starting man-db-cache-update.service...
Oct 11 04:19:36 compute-0 systemd[1]: Reloading.
Oct 11 04:19:36 compute-0 systemd-rc-local-generator[42426]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:19:36 compute-0 systemd-sysv-generator[42434]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:19:36 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 11 04:19:36 compute-0 sudo[41905]: pam_unix(sudo:session): session closed for user root
Oct 11 04:19:36 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 11 04:19:36 compute-0 systemd[1]: Finished man-db-cache-update.service.
Oct 11 04:19:36 compute-0 systemd[1]: run-r3f4986c3790245679930e48a319bf8c9.service: Deactivated successfully.
Oct 11 04:19:37 compute-0 sudo[43007]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thpflsccadcppfwgiviayrvyxxyawaia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156377.023681-84-61039004606203/AnsiballZ_systemd.py'
Oct 11 04:19:37 compute-0 sudo[43007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:19:37 compute-0 python3.9[43009]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 11 04:19:38 compute-0 systemd[1]: Reloading.
Oct 11 04:19:38 compute-0 systemd-rc-local-generator[43033]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:19:38 compute-0 systemd-sysv-generator[43039]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:19:38 compute-0 systemd[1]: Starting Open vSwitch Database Unit...
Oct 11 04:19:38 compute-0 chown[43052]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Oct 11 04:19:38 compute-0 ovs-ctl[43057]: /etc/openvswitch/conf.db does not exist ... (warning).
Oct 11 04:19:38 compute-0 ovs-ctl[43057]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Oct 11 04:19:38 compute-0 ovs-ctl[43057]: Starting ovsdb-server [  OK  ]
Oct 11 04:19:38 compute-0 ovs-vsctl[43106]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Oct 11 04:19:38 compute-0 ovs-vsctl[43126]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"2ff6420e-86e1-487c-bef9-adac80b75ae0\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Oct 11 04:19:38 compute-0 ovs-ctl[43057]: Configuring Open vSwitch system IDs [  OK  ]
Oct 11 04:19:38 compute-0 ovs-vsctl[43132]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Oct 11 04:19:38 compute-0 ovs-ctl[43057]: Enabling remote OVSDB managers [  OK  ]
Oct 11 04:19:38 compute-0 systemd[1]: Started Open vSwitch Database Unit.
Oct 11 04:19:38 compute-0 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Oct 11 04:19:38 compute-0 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Oct 11 04:19:38 compute-0 systemd[1]: Starting Open vSwitch Forwarding Unit...
Oct 11 04:19:38 compute-0 kernel: openvswitch: Open vSwitch switching datapath
Oct 11 04:19:38 compute-0 ovs-ctl[43177]: Inserting openvswitch module [  OK  ]
Oct 11 04:19:39 compute-0 ovs-ctl[43146]: Starting ovs-vswitchd [  OK  ]
Oct 11 04:19:39 compute-0 ovs-ctl[43146]: Enabling remote OVSDB managers [  OK  ]
Oct 11 04:19:39 compute-0 systemd[1]: Started Open vSwitch Forwarding Unit.
Oct 11 04:19:39 compute-0 ovs-vsctl[43197]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Oct 11 04:19:39 compute-0 systemd[1]: Starting Open vSwitch...
Oct 11 04:19:39 compute-0 systemd[1]: Finished Open vSwitch.
Oct 11 04:19:39 compute-0 sudo[43007]: pam_unix(sudo:session): session closed for user root
Oct 11 04:19:40 compute-0 python3.9[43349]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 04:19:40 compute-0 sudo[43499]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhtrnqphfdbcnsedhzoswpwwyumwzxev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156380.2076662-102-232226266015685/AnsiballZ_sefcontext.py'
Oct 11 04:19:40 compute-0 sudo[43499]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:19:41 compute-0 python3.9[43501]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Oct 11 04:19:42 compute-0 kernel: SELinux:  Converting 2738 SID table entries...
Oct 11 04:19:42 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Oct 11 04:19:42 compute-0 kernel: SELinux:  policy capability open_perms=1
Oct 11 04:19:42 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Oct 11 04:19:42 compute-0 kernel: SELinux:  policy capability always_check_network=0
Oct 11 04:19:42 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 11 04:19:42 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 11 04:19:42 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 11 04:19:42 compute-0 sudo[43499]: pam_unix(sudo:session): session closed for user root
Oct 11 04:19:43 compute-0 python3.9[43656]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 04:19:43 compute-0 sudo[43812]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmvonjqrmtzpvlpolqrktqvbakptdskc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156383.6362865-120-259166906478875/AnsiballZ_dnf.py'
Oct 11 04:19:43 compute-0 dbus-broker-launch[785]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Oct 11 04:19:43 compute-0 sudo[43812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:19:44 compute-0 python3.9[43814]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 11 04:19:45 compute-0 sudo[43812]: pam_unix(sudo:session): session closed for user root
Oct 11 04:19:46 compute-0 sudo[43965]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybmoampvubgmkhgsaawklqsftutfqkme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156385.5561526-128-230586834349208/AnsiballZ_command.py'
Oct 11 04:19:46 compute-0 sudo[43965]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:19:46 compute-0 python3.9[43967]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:19:47 compute-0 sudo[43965]: pam_unix(sudo:session): session closed for user root
Oct 11 04:19:47 compute-0 sudo[44252]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtivrjmdfrxpltpbzctdukmqsnolibuu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156387.3658798-136-64679598164817/AnsiballZ_file.py'
Oct 11 04:19:47 compute-0 sudo[44252]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:19:48 compute-0 python3.9[44254]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 11 04:19:48 compute-0 sudo[44252]: pam_unix(sudo:session): session closed for user root
Oct 11 04:19:49 compute-0 python3.9[44404]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:19:49 compute-0 sudo[44556]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-scjbtwtjtvfqbvmxzcoyauctjazneloi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156389.2789838-152-245410118194110/AnsiballZ_dnf.py'
Oct 11 04:19:49 compute-0 sudo[44556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:19:49 compute-0 python3.9[44558]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 11 04:19:51 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 11 04:19:51 compute-0 systemd[1]: Starting man-db-cache-update.service...
Oct 11 04:19:51 compute-0 systemd[1]: Reloading.
Oct 11 04:19:51 compute-0 systemd-rc-local-generator[44596]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:19:51 compute-0 systemd-sysv-generator[44601]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:19:51 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 11 04:19:52 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 11 04:19:52 compute-0 systemd[1]: Finished man-db-cache-update.service.
Oct 11 04:19:52 compute-0 systemd[1]: run-r4b5b88bedaad42b2bc2eafbb578a8c2f.service: Deactivated successfully.
Oct 11 04:19:52 compute-0 sudo[44556]: pam_unix(sudo:session): session closed for user root
Oct 11 04:19:52 compute-0 sudo[44873]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwdmatawsubcsgwdyshwsomduwvuiglb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156392.3567812-160-35706562253659/AnsiballZ_systemd.py'
Oct 11 04:19:52 compute-0 sudo[44873]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:19:53 compute-0 python3.9[44875]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 11 04:19:53 compute-0 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Oct 11 04:19:53 compute-0 systemd[1]: Stopped Network Manager Wait Online.
Oct 11 04:19:53 compute-0 systemd[1]: Stopping Network Manager Wait Online...
Oct 11 04:19:53 compute-0 NetworkManager[3958]: <info>  [1760156393.0881] caught SIGTERM, shutting down normally.
Oct 11 04:19:53 compute-0 systemd[1]: Stopping Network Manager...
Oct 11 04:19:53 compute-0 NetworkManager[3958]: <info>  [1760156393.0894] dhcp4 (eth0): canceled DHCP transaction
Oct 11 04:19:53 compute-0 NetworkManager[3958]: <info>  [1760156393.0895] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 11 04:19:53 compute-0 NetworkManager[3958]: <info>  [1760156393.0895] dhcp4 (eth0): state changed no lease
Oct 11 04:19:53 compute-0 NetworkManager[3958]: <info>  [1760156393.0897] manager: NetworkManager state is now CONNECTED_SITE
Oct 11 04:19:53 compute-0 NetworkManager[3958]: <info>  [1760156393.0992] exiting (success)
Oct 11 04:19:53 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 11 04:19:53 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 11 04:19:53 compute-0 systemd[1]: NetworkManager.service: Deactivated successfully.
Oct 11 04:19:53 compute-0 systemd[1]: Stopped Network Manager.
Oct 11 04:19:53 compute-0 systemd[1]: NetworkManager.service: Consumed 8.874s CPU time, 4.3M memory peak, read 0B from disk, written 39.0K to disk.
Oct 11 04:19:53 compute-0 systemd[1]: Starting Network Manager...
Oct 11 04:19:53 compute-0 NetworkManager[44888]: <info>  [1760156393.2087] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:c8da26f4-0310-49ac-b50e-f03e67e8ef1f)
Oct 11 04:19:53 compute-0 NetworkManager[44888]: <info>  [1760156393.2090] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Oct 11 04:19:53 compute-0 NetworkManager[44888]: <info>  [1760156393.2148] manager[0x55c80c403090]: monitoring kernel firmware directory '/lib/firmware'.
Oct 11 04:19:53 compute-0 systemd[1]: Starting Hostname Service...
Oct 11 04:19:53 compute-0 systemd[1]: Started Hostname Service.
Oct 11 04:19:53 compute-0 NetworkManager[44888]: <info>  [1760156393.3232] hostname: hostname: using hostnamed
Oct 11 04:19:53 compute-0 NetworkManager[44888]: <info>  [1760156393.3232] hostname: static hostname changed from (none) to "compute-0"
Oct 11 04:19:53 compute-0 NetworkManager[44888]: <info>  [1760156393.3238] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Oct 11 04:19:53 compute-0 NetworkManager[44888]: <info>  [1760156393.3241] manager[0x55c80c403090]: rfkill: Wi-Fi hardware radio set enabled
Oct 11 04:19:53 compute-0 NetworkManager[44888]: <info>  [1760156393.3242] manager[0x55c80c403090]: rfkill: WWAN hardware radio set enabled
Oct 11 04:19:53 compute-0 NetworkManager[44888]: <info>  [1760156393.3260] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Oct 11 04:19:53 compute-0 NetworkManager[44888]: <info>  [1760156393.3267] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct 11 04:19:53 compute-0 NetworkManager[44888]: <info>  [1760156393.3268] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct 11 04:19:53 compute-0 NetworkManager[44888]: <info>  [1760156393.3268] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct 11 04:19:53 compute-0 NetworkManager[44888]: <info>  [1760156393.3269] manager: Networking is enabled by state file
Oct 11 04:19:53 compute-0 NetworkManager[44888]: <info>  [1760156393.3271] settings: Loaded settings plugin: keyfile (internal)
Oct 11 04:19:53 compute-0 NetworkManager[44888]: <info>  [1760156393.3274] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct 11 04:19:53 compute-0 NetworkManager[44888]: <info>  [1760156393.3296] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Oct 11 04:19:53 compute-0 NetworkManager[44888]: <info>  [1760156393.3308] dhcp: init: Using DHCP client 'internal'
Oct 11 04:19:53 compute-0 NetworkManager[44888]: <info>  [1760156393.3310] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct 11 04:19:53 compute-0 NetworkManager[44888]: <info>  [1760156393.3315] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 04:19:53 compute-0 NetworkManager[44888]: <info>  [1760156393.3321] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct 11 04:19:53 compute-0 NetworkManager[44888]: <info>  [1760156393.3328] device (lo): Activation: starting connection 'lo' (070fa9fc-0387-45d5-a597-36836db223c7)
Oct 11 04:19:53 compute-0 NetworkManager[44888]: <info>  [1760156393.3333] device (eth0): carrier: link connected
Oct 11 04:19:53 compute-0 NetworkManager[44888]: <info>  [1760156393.3337] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct 11 04:19:53 compute-0 NetworkManager[44888]: <info>  [1760156393.3340] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Oct 11 04:19:53 compute-0 NetworkManager[44888]: <info>  [1760156393.3340] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct 11 04:19:53 compute-0 NetworkManager[44888]: <info>  [1760156393.3345] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct 11 04:19:53 compute-0 NetworkManager[44888]: <info>  [1760156393.3351] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct 11 04:19:53 compute-0 NetworkManager[44888]: <info>  [1760156393.3356] device (eth1): carrier: link connected
Oct 11 04:19:53 compute-0 NetworkManager[44888]: <info>  [1760156393.3359] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Oct 11 04:19:53 compute-0 NetworkManager[44888]: <info>  [1760156393.3363] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (f8b03ef6-0026-501a-9e18-7db335cb7a5f) (indicated)
Oct 11 04:19:53 compute-0 NetworkManager[44888]: <info>  [1760156393.3364] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct 11 04:19:53 compute-0 NetworkManager[44888]: <info>  [1760156393.3369] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct 11 04:19:53 compute-0 NetworkManager[44888]: <info>  [1760156393.3375] device (eth1): Activation: starting connection 'ci-private-network' (f8b03ef6-0026-501a-9e18-7db335cb7a5f)
Oct 11 04:19:53 compute-0 systemd[1]: Started Network Manager.
Oct 11 04:19:53 compute-0 NetworkManager[44888]: <info>  [1760156393.3382] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct 11 04:19:53 compute-0 NetworkManager[44888]: <info>  [1760156393.3389] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct 11 04:19:53 compute-0 NetworkManager[44888]: <info>  [1760156393.3391] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct 11 04:19:53 compute-0 NetworkManager[44888]: <info>  [1760156393.3393] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct 11 04:19:53 compute-0 NetworkManager[44888]: <info>  [1760156393.3408] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct 11 04:19:53 compute-0 NetworkManager[44888]: <info>  [1760156393.3413] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct 11 04:19:53 compute-0 NetworkManager[44888]: <info>  [1760156393.3417] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct 11 04:19:53 compute-0 NetworkManager[44888]: <info>  [1760156393.3420] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct 11 04:19:53 compute-0 NetworkManager[44888]: <info>  [1760156393.3428] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct 11 04:19:53 compute-0 NetworkManager[44888]: <info>  [1760156393.3438] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct 11 04:19:53 compute-0 NetworkManager[44888]: <info>  [1760156393.3442] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 11 04:19:53 compute-0 NetworkManager[44888]: <info>  [1760156393.3457] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct 11 04:19:53 compute-0 NetworkManager[44888]: <info>  [1760156393.3479] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct 11 04:19:53 compute-0 NetworkManager[44888]: <info>  [1760156393.3500] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct 11 04:19:53 compute-0 NetworkManager[44888]: <info>  [1760156393.3501] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct 11 04:19:53 compute-0 NetworkManager[44888]: <info>  [1760156393.3505] device (lo): Activation: successful, device activated.
Oct 11 04:19:53 compute-0 NetworkManager[44888]: <info>  [1760156393.3525] dhcp4 (eth0): state changed new lease, address=38.102.83.148
Oct 11 04:19:53 compute-0 NetworkManager[44888]: <info>  [1760156393.3532] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct 11 04:19:53 compute-0 systemd[1]: Starting Network Manager Wait Online...
Oct 11 04:19:53 compute-0 NetworkManager[44888]: <info>  [1760156393.3601] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct 11 04:19:53 compute-0 NetworkManager[44888]: <info>  [1760156393.3606] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct 11 04:19:53 compute-0 NetworkManager[44888]: <info>  [1760156393.3611] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct 11 04:19:53 compute-0 NetworkManager[44888]: <info>  [1760156393.3613] manager: NetworkManager state is now CONNECTED_LOCAL
Oct 11 04:19:53 compute-0 NetworkManager[44888]: <info>  [1760156393.3615] device (eth1): Activation: successful, device activated.
Oct 11 04:19:53 compute-0 NetworkManager[44888]: <info>  [1760156393.3626] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct 11 04:19:53 compute-0 NetworkManager[44888]: <info>  [1760156393.3627] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct 11 04:19:53 compute-0 NetworkManager[44888]: <info>  [1760156393.3630] manager: NetworkManager state is now CONNECTED_SITE
Oct 11 04:19:53 compute-0 NetworkManager[44888]: <info>  [1760156393.3633] device (eth0): Activation: successful, device activated.
Oct 11 04:19:53 compute-0 NetworkManager[44888]: <info>  [1760156393.3638] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct 11 04:19:53 compute-0 NetworkManager[44888]: <info>  [1760156393.3641] manager: startup complete
Oct 11 04:19:53 compute-0 sudo[44873]: pam_unix(sudo:session): session closed for user root
Oct 11 04:19:53 compute-0 systemd[1]: Finished Network Manager Wait Online.
Oct 11 04:19:53 compute-0 sudo[45099]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcdzkbscotgavvfocwvigixfvpkllysl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156393.5692658-168-250257635274288/AnsiballZ_dnf.py'
Oct 11 04:19:53 compute-0 sudo[45099]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:19:54 compute-0 python3.9[45101]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
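The dnf task above simply ensures the os-net-config package is installed (name=['os-net-config'], state=present). Outside Ansible, the equivalent step would be roughly:

    # install os-net-config; -y answers the confirmation prompt non-interactively
    sudo dnf install -y os-net-config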
Oct 11 04:19:58 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 11 04:19:58 compute-0 systemd[1]: Starting man-db-cache-update.service...
Oct 11 04:19:59 compute-0 systemd[1]: Reloading.
Oct 11 04:19:59 compute-0 systemd-rc-local-generator[45153]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:19:59 compute-0 systemd-sysv-generator[45157]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:19:59 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 11 04:19:59 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 11 04:19:59 compute-0 systemd[1]: Finished man-db-cache-update.service.
Oct 11 04:19:59 compute-0 systemd[1]: run-rc0d252d14e304e19becdf305f7ae94f4.service: Deactivated successfully.
Oct 11 04:20:00 compute-0 sudo[45099]: pam_unix(sudo:session): session closed for user root
Oct 11 04:20:00 compute-0 sudo[45562]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfkutnieznyeqsahloztyninegowfbrw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156400.357121-180-125750074469422/AnsiballZ_stat.py'
Oct 11 04:20:00 compute-0 sudo[45562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:20:00 compute-0 python3.9[45564]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:20:00 compute-0 sudo[45562]: pam_unix(sudo:session): session closed for user root
Oct 11 04:20:01 compute-0 sudo[45714]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-partztnyfzbdcbxwnuguwmmdwdortwsr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156401.0911338-189-133282668880946/AnsiballZ_ini_file.py'
Oct 11 04:20:01 compute-0 sudo[45714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:20:01 compute-0 python3.9[45716]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:20:01 compute-0 sudo[45714]: pam_unix(sudo:session): session closed for user root
Oct 11 04:20:02 compute-0 sudo[45868]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbljrposubndopastcdmvkesmlixjqrp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156402.1632805-199-216824205571774/AnsiballZ_ini_file.py'
Oct 11 04:20:02 compute-0 sudo[45868]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:20:02 compute-0 python3.9[45870]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:20:02 compute-0 sudo[45868]: pam_unix(sudo:session): session closed for user root
Oct 11 04:20:03 compute-0 sudo[46020]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxdlewqclvjcfdaynpbsorilxympdvyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156402.8809218-199-20595356830446/AnsiballZ_ini_file.py'
Oct 11 04:20:03 compute-0 sudo[46020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:20:03 compute-0 python3.9[46022]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:20:03 compute-0 sudo[46020]: pam_unix(sudo:session): session closed for user root
Oct 11 04:20:03 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 11 04:20:03 compute-0 sudo[46172]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmlbjiojxgihjgrbzvnhewudzmvjcyiv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156403.5142505-214-173480409068629/AnsiballZ_ini_file.py'
Oct 11 04:20:03 compute-0 sudo[46172]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:20:03 compute-0 python3.9[46174]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:20:03 compute-0 sudo[46172]: pam_unix(sudo:session): session closed for user root
Oct 11 04:20:04 compute-0 sudo[46324]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tikmvlbzyzjgtoligjiwhjiibbgaxbch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156404.138033-214-232048663865354/AnsiballZ_ini_file.py'
Oct 11 04:20:04 compute-0 sudo[46324]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:20:04 compute-0 python3.9[46326]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:20:04 compute-0 sudo[46324]: pam_unix(sudo:session): session closed for user root
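Taken together, the five ini_file tasks above converge /etc/NetworkManager/NetworkManager.conf (and the /etc/NetworkManager/conf.d/99-cloud-init.conf drop-in) on a [main] section that pins no-auto-default=* and removes any dns= or rc-manager= overrides. A sketch of the section they produce, assuming no other keys were present (the real files are not dumped in this log):

    [main]
    # ensured present by the first ini_file task (option=no-auto-default, value=*)
    no-auto-default=*
    # dns= and rc-manager= are ensured absent, so NetworkManager falls back to its built-in defaults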
Oct 11 04:20:05 compute-0 sudo[46476]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptcifkrwpewbrxjoxpwohmvejhqifkkp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156404.9167736-229-97034468796768/AnsiballZ_stat.py'
Oct 11 04:20:05 compute-0 sudo[46476]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:20:05 compute-0 python3.9[46478]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:20:05 compute-0 sudo[46476]: pam_unix(sudo:session): session closed for user root
Oct 11 04:20:06 compute-0 sudo[46599]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vobmpaooabwpemweulyglgwbdevgybco ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156404.9167736-229-97034468796768/AnsiballZ_copy.py'
Oct 11 04:20:06 compute-0 sudo[46599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:20:06 compute-0 python3.9[46601]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1760156404.9167736-229-97034468796768/.source _original_basename=.nntssxd1 follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:20:06 compute-0 sudo[46599]: pam_unix(sudo:session): session closed for user root
Oct 11 04:20:06 compute-0 sudo[46751]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbcrgipknfpkmrfghoxgfhfpqtcsnqne ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156406.427467-244-249959200626427/AnsiballZ_file.py'
Oct 11 04:20:06 compute-0 sudo[46751]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:20:07 compute-0 python3.9[46753]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:20:07 compute-0 sudo[46751]: pam_unix(sudo:session): session closed for user root
Oct 11 04:20:07 compute-0 sudo[46903]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cobwwpximybzxpnyujgvqbutyxhqzcni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156407.2582297-252-186752243346795/AnsiballZ_edpm_os_net_config_mappings.py'
Oct 11 04:20:07 compute-0 sudo[46903]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:20:08 compute-0 python3.9[46905]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Oct 11 04:20:08 compute-0 sudo[46903]: pam_unix(sudo:session): session closed for user root
Oct 11 04:20:08 compute-0 sudo[47055]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ossokyhqaadodmfpcbdquysjnmydemeu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156408.269984-261-40672915723516/AnsiballZ_file.py'
Oct 11 04:20:08 compute-0 sudo[47055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:20:08 compute-0 python3.9[47057]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:20:08 compute-0 sudo[47055]: pam_unix(sudo:session): session closed for user root
Oct 11 04:20:09 compute-0 sudo[47207]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uskkxpmfmxrdjutzjmeisepvxmczhhuo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156409.1682894-271-217773514077714/AnsiballZ_stat.py'
Oct 11 04:20:09 compute-0 sudo[47207]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:20:09 compute-0 sudo[47207]: pam_unix(sudo:session): session closed for user root
Oct 11 04:20:10 compute-0 sudo[47330]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkgshxizzwftdrmtvjezznwvkpajhpjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156409.1682894-271-217773514077714/AnsiballZ_copy.py'
Oct 11 04:20:10 compute-0 sudo[47330]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:20:10 compute-0 sudo[47330]: pam_unix(sudo:session): session closed for user root
Oct 11 04:20:10 compute-0 sudo[47482]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lombomwasgsglhltmgrdhtrvgrpbsocr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156410.5197268-286-125343039923130/AnsiballZ_slurp.py'
Oct 11 04:20:10 compute-0 sudo[47482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:20:11 compute-0 python3.9[47484]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Oct 11 04:20:11 compute-0 sudo[47482]: pam_unix(sudo:session): session closed for user root
Oct 11 04:20:12 compute-0 sudo[47657]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnzjupbjivcdaiepcuudfgezbhmkvwxk ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156411.477611-295-173602939826758/async_wrapper.py j524702211702 300 /home/zuul/.ansible/tmp/ansible-tmp-1760156411.477611-295-173602939826758/AnsiballZ_edpm_os_net_config.py _'
Oct 11 04:20:12 compute-0 sudo[47657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:20:12 compute-0 ansible-async_wrapper.py[47659]: Invoked with j524702211702 300 /home/zuul/.ansible/tmp/ansible-tmp-1760156411.477611-295-173602939826758/AnsiballZ_edpm_os_net_config.py _
Oct 11 04:20:12 compute-0 ansible-async_wrapper.py[47662]: Starting module and watcher
Oct 11 04:20:12 compute-0 ansible-async_wrapper.py[47662]: Start watching 47663 (300)
Oct 11 04:20:12 compute-0 ansible-async_wrapper.py[47663]: Start module (47663)
Oct 11 04:20:12 compute-0 ansible-async_wrapper.py[47659]: Return async_wrapper task started.
Oct 11 04:20:12 compute-0 sudo[47657]: pam_unix(sudo:session): session closed for user root
Oct 11 04:20:12 compute-0 python3.9[47664]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
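The edpm_os_net_config module above applies /etc/os-net-config/config.yaml (read by the slurp task a moment earlier) through nmstate (use_nmstate=True). The file's contents are not reproduced in this log; a hypothetical minimal sketch that would be consistent with the NetworkManager profiles created below (an OVS bridge br-ex with eth1 attached and four VLAN interfaces, eth0 left on DHCP) might look like:

    # hypothetical sketch only - addresses, routes and MTUs from the real file are not shown in this log
    network_config:
      - type: interface
        name: eth0
        use_dhcp: true
      - type: ovs_bridge
        name: br-ex
        use_dhcp: false
        members:
          - type: interface
            name: eth1
          - type: vlan
            vlan_id: 20
          - type: vlan
            vlan_id: 21
          - type: vlan
            vlan_id: 22
          - type: vlan
            vlan_id: 23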
Oct 11 04:20:13 compute-0 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Oct 11 04:20:13 compute-0 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Oct 11 04:20:13 compute-0 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Oct 11 04:20:13 compute-0 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Oct 11 04:20:13 compute-0 kernel: cfg80211: failed to load regulatory.db
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.7788] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47665 uid=0 result="success"
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.7813] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47665 uid=0 result="success"
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.8563] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.8565] audit: op="connection-add" uuid="0324f0ac-8632-4f10-88ed-45a7086ed248" name="br-ex-br" pid=47665 uid=0 result="success"
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.8586] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.8588] audit: op="connection-add" uuid="c596b389-0871-46de-86a5-75076d3b062a" name="br-ex-port" pid=47665 uid=0 result="success"
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.8600] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.8601] audit: op="connection-add" uuid="c283db93-c56b-472c-94c9-a60af5cfc102" name="eth1-port" pid=47665 uid=0 result="success"
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.8614] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.8616] audit: op="connection-add" uuid="522c0653-7a3f-4847-a045-074f7b2f106d" name="vlan20-port" pid=47665 uid=0 result="success"
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.8628] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.8629] audit: op="connection-add" uuid="853132e0-4ecd-424e-a473-f6a66c089ead" name="vlan21-port" pid=47665 uid=0 result="success"
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.8641] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.8642] audit: op="connection-add" uuid="45603131-65e4-4fc2-b4a1-0d06d5bd404d" name="vlan22-port" pid=47665 uid=0 result="success"
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.8655] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.8657] audit: op="connection-add" uuid="a2cced46-b07c-403e-ae97-977e382c15f3" name="vlan23-port" pid=47665 uid=0 result="success"
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.8681] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="ipv6.dhcp-timeout,ipv6.addr-gen-mode,ipv6.method,ipv4.dhcp-client-id,ipv4.dhcp-timeout,802-3-ethernet.mtu,connection.timestamp,connection.autoconnect-priority" pid=47665 uid=0 result="success"
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.8700] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.8701] audit: op="connection-add" uuid="74716919-d51e-462a-8c09-2367f57a6930" name="br-ex-if" pid=47665 uid=0 result="success"
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.8752] audit: op="connection-update" uuid="f8b03ef6-0026-501a-9e18-7db335cb7a5f" name="ci-private-network" args="ipv6.routing-rules,ipv6.routes,ipv6.addr-gen-mode,ipv6.dns,ipv6.addresses,ipv6.method,ovs-interface.type,ipv4.routing-rules,ipv4.routes,ipv4.method,ipv4.dns,ipv4.addresses,ipv4.never-default,ovs-external-ids.data,connection.controller,connection.master,connection.port-type,connection.slave-type,connection.timestamp" pid=47665 uid=0 result="success"
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.8774] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.8777] audit: op="connection-add" uuid="45128094-f8fc-4ffa-a7f8-a3549531c60e" name="vlan20-if" pid=47665 uid=0 result="success"
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.8797] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.8799] audit: op="connection-add" uuid="0f19ce72-5478-4ee6-ae28-398dc10aecdb" name="vlan21-if" pid=47665 uid=0 result="success"
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.8819] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.8820] audit: op="connection-add" uuid="f460f4f2-fc0d-43d6-b221-d9e6e615560e" name="vlan22-if" pid=47665 uid=0 result="success"
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.8838] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.8839] audit: op="connection-add" uuid="d667576a-b3d5-4d11-8153-258c0756abe8" name="vlan23-if" pid=47665 uid=0 result="success"
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.8849] audit: op="connection-delete" uuid="a3cf2323-8650-312c-87b8-0254e3de1e75" name="Wired connection 1" pid=47665 uid=0 result="success"
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.8862] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.8875] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.8879] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (0324f0ac-8632-4f10-88ed-45a7086ed248)
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.8880] audit: op="connection-activate" uuid="0324f0ac-8632-4f10-88ed-45a7086ed248" name="br-ex-br" pid=47665 uid=0 result="success"
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.8881] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.8888] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.8893] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (c596b389-0871-46de-86a5-75076d3b062a)
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.8895] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.8901] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.8906] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (c283db93-c56b-472c-94c9-a60af5cfc102)
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.8908] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.8915] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.8919] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (522c0653-7a3f-4847-a045-074f7b2f106d)
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.8921] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.8928] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.8932] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (853132e0-4ecd-424e-a473-f6a66c089ead)
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.8933] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.8939] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.8944] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (45603131-65e4-4fc2-b4a1-0d06d5bd404d)
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.8945] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.8952] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.8956] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (a2cced46-b07c-403e-ae97-977e382c15f3)
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.8957] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.8959] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.8961] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.8968] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.8972] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.8976] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (74716919-d51e-462a-8c09-2367f57a6930)
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.8977] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.8980] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.8982] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.8983] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.8984] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.8996] device (eth1): disconnecting for new activation request.
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.8996] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.8999] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9001] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9002] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9005] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9009] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9013] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (45128094-f8fc-4ffa-a7f8-a3549531c60e)
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9014] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9017] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9018] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9020] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9023] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9027] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9031] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (0f19ce72-5478-4ee6-ae28-398dc10aecdb)
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9032] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9036] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9038] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9040] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9044] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9049] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9054] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (f460f4f2-fc0d-43d6-b221-d9e6e615560e)
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9055] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9057] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9059] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9060] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9063] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9067] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9071] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (d667576a-b3d5-4d11-8153-258c0756abe8)
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9072] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9075] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9077] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9078] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9080] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9093] audit: op="device-reapply" interface="eth0" ifindex=2 args="ipv6.addr-gen-mode,ipv6.method,ipv4.dhcp-client-id,ipv4.dhcp-timeout,802-3-ethernet.mtu,connection.autoconnect-priority" pid=47665 uid=0 result="success"
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9095] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9098] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9100] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9107] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9111] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9115] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9119] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9120] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9126] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 11 04:20:14 compute-0 kernel: ovs-system: entered promiscuous mode
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9140] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9144] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9147] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9152] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9157] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9160] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 11 04:20:14 compute-0 systemd-udevd[47671]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:20:14 compute-0 kernel: Timeout policy base is empty
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9162] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9167] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9171] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9174] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9177] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9182] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9187] dhcp4 (eth0): canceled DHCP transaction
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9188] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9188] dhcp4 (eth0): state changed no lease
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9191] dhcp4 (eth0): activation: beginning transaction (no timeout)
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9204] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9208] audit: op="device-reapply" interface="eth1" ifindex=3 pid=47665 uid=0 result="fail" reason="Device is not activated"
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9212] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9222] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9232] device (eth1): disconnecting for new activation request.
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9236] audit: op="connection-activate" uuid="f8b03ef6-0026-501a-9e18-7db335cb7a5f" name="ci-private-network" pid=47665 uid=0 result="success"
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9238] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9248] dhcp4 (eth0): state changed new lease, address=38.102.83.148
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9254] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Oct 11 04:20:14 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9318] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47665 uid=0 result="success"
Oct 11 04:20:14 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9420] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9519] device (eth1): Activation: starting connection 'ci-private-network' (f8b03ef6-0026-501a-9e18-7db335cb7a5f)
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9530] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9567] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9584] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9591] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9595] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9601] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9603] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9604] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9605] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9607] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9608] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9612] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9617] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9621] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9626] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9629] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9633] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9637] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9641] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9644] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Oct 11 04:20:14 compute-0 kernel: br-ex: entered promiscuous mode
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9656] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9662] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9666] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9670] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9676] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9682] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 11 04:20:14 compute-0 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9760] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Oct 11 04:20:14 compute-0 kernel: vlan22: entered promiscuous mode
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9769] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 11 04:20:14 compute-0 systemd-udevd[47670]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9777] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9785] device (eth1): Activation: successful, device activated.
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9811] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 11 04:20:14 compute-0 kernel: vlan21: entered promiscuous mode
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9832] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9834] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9841] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 11 04:20:14 compute-0 kernel: vlan23: entered promiscuous mode
Oct 11 04:20:14 compute-0 systemd-udevd[47669]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9937] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9947] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9959] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9982] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Oct 11 04:20:14 compute-0 kernel: vlan20: entered promiscuous mode
Oct 11 04:20:14 compute-0 NetworkManager[44888]: <info>  [1760156414.9985] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 11 04:20:15 compute-0 NetworkManager[44888]: <info>  [1760156415.0015] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 11 04:20:15 compute-0 NetworkManager[44888]: <info>  [1760156415.0054] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 11 04:20:15 compute-0 NetworkManager[44888]: <info>  [1760156415.0055] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 11 04:20:15 compute-0 NetworkManager[44888]: <info>  [1760156415.0061] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 11 04:20:15 compute-0 NetworkManager[44888]: <info>  [1760156415.0075] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 11 04:20:15 compute-0 NetworkManager[44888]: <info>  [1760156415.0080] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Oct 11 04:20:15 compute-0 NetworkManager[44888]: <info>  [1760156415.0080] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 11 04:20:15 compute-0 NetworkManager[44888]: <info>  [1760156415.0083] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 11 04:20:15 compute-0 NetworkManager[44888]: <info>  [1760156415.0089] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 11 04:20:15 compute-0 NetworkManager[44888]: <info>  [1760156415.0096] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 11 04:20:15 compute-0 NetworkManager[44888]: <info>  [1760156415.0103] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 11 04:20:15 compute-0 NetworkManager[44888]: <info>  [1760156415.0124] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 11 04:20:15 compute-0 NetworkManager[44888]: <info>  [1760156415.0164] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 11 04:20:15 compute-0 NetworkManager[44888]: <info>  [1760156415.0166] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 11 04:20:15 compute-0 NetworkManager[44888]: <info>  [1760156415.0172] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
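At this point br-ex, eth1 and the vlan20-23 interfaces are all reported as activated. On such a host the result can be inspected with the usual tools (commands only; their output is not part of this log):

    # list the profiles NetworkManager now manages (br-ex-br, br-ex-port, eth1-port, vlan2X-port/-if, ...)
    nmcli -f NAME,UUID,TYPE,DEVICE connection show
    # cross-check the Open vSwitch view of the same bridge, ports and internal interfaces
    ovs-vsctl show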
Oct 11 04:20:16 compute-0 NetworkManager[44888]: <info>  [1760156416.1878] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47665 uid=0 result="success"
Oct 11 04:20:16 compute-0 sudo[48020]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gcvdapfriravehidktenuctnekistopj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156415.6753762-295-237258944919474/AnsiballZ_async_status.py'
Oct 11 04:20:16 compute-0 sudo[48020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:20:16 compute-0 NetworkManager[44888]: <info>  [1760156416.3784] checkpoint[0x55c80c3d8950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Oct 11 04:20:16 compute-0 NetworkManager[44888]: <info>  [1760156416.3786] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47665 uid=0 result="success"
Oct 11 04:20:16 compute-0 python3.9[48022]: ansible-ansible.legacy.async_status Invoked with jid=j524702211702.47659 mode=status _async_dir=/root/.ansible_async
Oct 11 04:20:16 compute-0 sudo[48020]: pam_unix(sudo:session): session closed for user root
Oct 11 04:20:16 compute-0 NetworkManager[44888]: <info>  [1760156416.8426] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=47665 uid=0 result="success"
Oct 11 04:20:16 compute-0 NetworkManager[44888]: <info>  [1760156416.8452] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=47665 uid=0 result="success"
Oct 11 04:20:17 compute-0 NetworkManager[44888]: <info>  [1760156417.0885] audit: op="networking-control" arg="global-dns-configuration" pid=47665 uid=0 result="success"
Oct 11 04:20:17 compute-0 NetworkManager[44888]: <info>  [1760156417.0930] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Oct 11 04:20:17 compute-0 NetworkManager[44888]: <info>  [1760156417.0961] audit: op="networking-control" arg="global-dns-configuration" pid=47665 uid=0 result="success"
Oct 11 04:20:17 compute-0 NetworkManager[44888]: <info>  [1760156417.0992] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=47665 uid=0 result="success"
Oct 11 04:20:17 compute-0 NetworkManager[44888]: <info>  [1760156417.2340] checkpoint[0x55c80c3d8a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Oct 11 04:20:17 compute-0 NetworkManager[44888]: <info>  [1760156417.2343] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=47665 uid=0 result="success"
Oct 11 04:20:17 compute-0 ansible-async_wrapper.py[47663]: Module complete (47663)
Oct 11 04:20:17 compute-0 ansible-async_wrapper.py[47662]: Done in kid B.
Oct 11 04:20:19 compute-0 sudo[48126]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfsmpvlsdgqniaffdkcyzzkyvzbrqwye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156415.6753762-295-237258944919474/AnsiballZ_async_status.py'
Oct 11 04:20:19 compute-0 sudo[48126]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:20:19 compute-0 python3.9[48128]: ansible-ansible.legacy.async_status Invoked with jid=j524702211702.47659 mode=status _async_dir=/root/.ansible_async
Oct 11 04:20:20 compute-0 sudo[48126]: pam_unix(sudo:session): session closed for user root
Oct 11 04:20:20 compute-0 sudo[48226]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdijyjjwwdjegsrmuvfzphudkuvirwmy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156415.6753762-295-237258944919474/AnsiballZ_async_status.py'
Oct 11 04:20:20 compute-0 sudo[48226]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:20:20 compute-0 python3.9[48228]: ansible-ansible.legacy.async_status Invoked with jid=j524702211702.47659 mode=cleanup _async_dir=/root/.ansible_async
Oct 11 04:20:20 compute-0 sudo[48226]: pam_unix(sudo:session): session closed for user root
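The sudo/async_status exchanges above show the control node polling async job j524702211702.47659 (the os-net-config apply started earlier) until it reports finished, then issuing a final cleanup call. Reconstructed as a playbook task from the logged parameters, the polling step likely resembles the following; the task name, retries and delay are assumptions, while the jid and the default async dir (/root/.ansible_async) come from the log:

    - name: Wait for os-net-config to finish (reconstruction)
      ansible.builtin.async_status:
        jid: "j524702211702.47659"
      register: netcfg_job
      until: netcfg_job.finished
      retries: 30
      delay: 5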
Oct 11 04:20:21 compute-0 sudo[48378]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwxuolakquhtfllbdllncqoygeidepgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156420.8264647-322-225321983083694/AnsiballZ_stat.py'
Oct 11 04:20:21 compute-0 sudo[48378]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:20:21 compute-0 python3.9[48380]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:20:21 compute-0 sudo[48378]: pam_unix(sudo:session): session closed for user root
Oct 11 04:20:21 compute-0 sudo[48501]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yuhbovpidnuodnkkfghxgxxfheuqfwuw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156420.8264647-322-225321983083694/AnsiballZ_copy.py'
Oct 11 04:20:21 compute-0 sudo[48501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:20:22 compute-0 python3.9[48503]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760156420.8264647-322-225321983083694/.source.returncode _original_basename=.pcj9krxb follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:20:22 compute-0 sudo[48501]: pam_unix(sudo:session): session closed for user root
Oct 11 04:20:22 compute-0 sudo[48653]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxoxlnswekfhgmbttbuxhgypvjxgugfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156422.3151548-338-205893488859831/AnsiballZ_stat.py'
Oct 11 04:20:22 compute-0 sudo[48653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:20:22 compute-0 python3.9[48655]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:20:22 compute-0 sudo[48653]: pam_unix(sudo:session): session closed for user root
Oct 11 04:20:23 compute-0 sudo[48777]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwwuejgqdxjfklocsasmcloxvvsawyex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156422.3151548-338-205893488859831/AnsiballZ_copy.py'
Oct 11 04:20:23 compute-0 sudo[48777]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:20:23 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 11 04:20:23 compute-0 python3.9[48779]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760156422.3151548-338-205893488859831/.source.cfg _original_basename=.b1a6d0d1 follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:20:23 compute-0 sudo[48777]: pam_unix(sudo:session): session closed for user root
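The copy above drops /etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg so that cloud-init stops rewriting the network configuration that os-net-config now owns. The file's contents are not logged; the conventional cloud-init directive for this purpose is the two-line YAML below, offered as the likely payload rather than a verified copy:

    # assumed contents of 99-edpm-disable-network-config.cfg
    network:
      config: disabled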
Oct 11 04:20:24 compute-0 sudo[48932]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdfstyplhrwhwuvgesnzlccxdifbokxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156423.6892464-353-49309647788908/AnsiballZ_systemd.py'
Oct 11 04:20:24 compute-0 sudo[48932]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:20:24 compute-0 python3.9[48934]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 11 04:20:24 compute-0 systemd[1]: Reloading Network Manager...
Oct 11 04:20:24 compute-0 NetworkManager[44888]: <info>  [1760156424.4516] audit: op="reload" arg="0" pid=48938 uid=0 result="success"
Oct 11 04:20:24 compute-0 NetworkManager[44888]: <info>  [1760156424.4530] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Oct 11 04:20:24 compute-0 systemd[1]: Reloaded Network Manager.
Oct 11 04:20:24 compute-0 sudo[48932]: pam_unix(sudo:session): session closed for user root
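The systemd module call logged at 04:20:24 maps directly onto a reload task; only the task name is invented here, the module arguments are exactly those recorded in the journal:

    - name: Reload NetworkManager to pick up the new global DNS and carrier-timeout config
      ansible.builtin.systemd:
        name: NetworkManager
        state: reloaded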
Oct 11 04:20:24 compute-0 sshd-session[40883]: Connection closed by 192.168.122.30 port 60692
Oct 11 04:20:24 compute-0 sshd-session[40880]: pam_unix(sshd:session): session closed for user zuul
Oct 11 04:20:24 compute-0 systemd-logind[801]: Session 10 logged out. Waiting for processes to exit.
Oct 11 04:20:24 compute-0 systemd[1]: session-10.scope: Deactivated successfully.
Oct 11 04:20:24 compute-0 systemd[1]: session-10.scope: Consumed 53.460s CPU time.
Oct 11 04:20:24 compute-0 systemd-logind[801]: Removed session 10.
Oct 11 04:20:29 compute-0 sshd-session[48969]: Accepted publickey for zuul from 192.168.122.30 port 35236 ssh2: ECDSA SHA256:fsXfQ2jtOYwhi5zevC2AykH2PVPp1BPcbFOVNPoZIaQ
Oct 11 04:20:29 compute-0 systemd-logind[801]: New session 11 of user zuul.
Oct 11 04:20:29 compute-0 systemd[1]: Started Session 11 of User zuul.
Oct 11 04:20:29 compute-0 sshd-session[48969]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 11 04:20:30 compute-0 python3.9[49122]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 04:20:32 compute-0 python3.9[49276]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 11 04:20:33 compute-0 python3.9[49470]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:20:33 compute-0 sshd-session[48972]: Connection closed by 192.168.122.30 port 35236
Oct 11 04:20:33 compute-0 sshd-session[48969]: pam_unix(sshd:session): session closed for user zuul
Oct 11 04:20:33 compute-0 systemd[1]: session-11.scope: Deactivated successfully.
Oct 11 04:20:33 compute-0 systemd[1]: session-11.scope: Consumed 2.654s CPU time.
Oct 11 04:20:33 compute-0 systemd-logind[801]: Session 11 logged out. Waiting for processes to exit.
Oct 11 04:20:33 compute-0 systemd-logind[801]: Removed session 11.
Oct 11 04:20:34 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 11 04:20:39 compute-0 sshd-session[49499]: Accepted publickey for zuul from 192.168.122.30 port 58256 ssh2: ECDSA SHA256:fsXfQ2jtOYwhi5zevC2AykH2PVPp1BPcbFOVNPoZIaQ
Oct 11 04:20:39 compute-0 systemd-logind[801]: New session 12 of user zuul.
Oct 11 04:20:39 compute-0 systemd[1]: Started Session 12 of User zuul.
Oct 11 04:20:39 compute-0 sshd-session[49499]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 11 04:20:40 compute-0 python3.9[49652]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 04:20:41 compute-0 python3.9[49806]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 04:20:42 compute-0 sudo[49960]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmgllhhnyubrbptexcmajvfyfuxqnblh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156441.7862246-40-184675974894311/AnsiballZ_setup.py'
Oct 11 04:20:42 compute-0 sudo[49960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:20:42 compute-0 python3.9[49962]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 11 04:20:42 compute-0 sudo[49960]: pam_unix(sudo:session): session closed for user root
Oct 11 04:20:43 compute-0 sudo[50045]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhovsehwsynaasqclbicrpdpyrmrlcmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156441.7862246-40-184675974894311/AnsiballZ_dnf.py'
Oct 11 04:20:43 compute-0 sudo[50045]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:20:43 compute-0 python3.9[50047]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 11 04:20:44 compute-0 sudo[50045]: pam_unix(sudo:session): session closed for user root
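The dnf invocation above installs podman with otherwise default module arguments, so the driving task is almost certainly the minimal form below (task name assumed):

    - name: Install podman
      ansible.builtin.dnf:
        name: podman
        state: present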
Oct 11 04:20:45 compute-0 sudo[50198]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-docrznxplijujqzgreiethgqbyfgueyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156444.775041-52-277081854446540/AnsiballZ_setup.py'
Oct 11 04:20:45 compute-0 sudo[50198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:20:45 compute-0 python3.9[50200]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 11 04:20:45 compute-0 sudo[50198]: pam_unix(sudo:session): session closed for user root
Oct 11 04:20:46 compute-0 sudo[50394]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfjfbkiovvpndpxnaoeillvmyarvypgy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156446.1103597-63-116105804508499/AnsiballZ_file.py'
Oct 11 04:20:46 compute-0 sudo[50394]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:20:46 compute-0 python3.9[50396]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:20:46 compute-0 sudo[50394]: pam_unix(sudo:session): session closed for user root
Oct 11 04:20:47 compute-0 sudo[50546]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ftltgkwatnukvejrrmnltpjktkiglfva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156447.0404801-71-165552614087415/AnsiballZ_command.py'
Oct 11 04:20:47 compute-0 sudo[50546]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:20:47 compute-0 python3.9[50548]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:20:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat2665208819-merged.mount: Deactivated successfully.
Oct 11 04:20:47 compute-0 podman[50549]: 2025-10-11 04:20:47.8615812 +0000 UTC m=+0.065821707 system refresh
Oct 11 04:20:47 compute-0 sudo[50546]: pam_unix(sudo:session): session closed for user root
Oct 11 04:20:48 compute-0 sudo[50710]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xifbbbrwxygtbkkcmuvwhzelyzfqagvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156448.0372083-79-219528933289601/AnsiballZ_stat.py'
Oct 11 04:20:48 compute-0 sudo[50710]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:20:48 compute-0 python3.9[50712]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:20:48 compute-0 sudo[50710]: pam_unix(sudo:session): session closed for user root
Oct 11 04:20:48 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 11 04:20:49 compute-0 sudo[50833]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxvmebmvgctykxuqhtmmiqsapbvynisa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156448.0372083-79-219528933289601/AnsiballZ_copy.py'
Oct 11 04:20:49 compute-0 sudo[50833]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:20:49 compute-0 python3.9[50835]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760156448.0372083-79-219528933289601/.source.json follow=False _original_basename=podman_network_config.j2 checksum=4c86bb567fdb1739ef26519d3061815fdf776e5b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:20:49 compute-0 sudo[50833]: pam_unix(sudo:session): session closed for user root
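The _original_basename=podman_network_config.j2 above indicates the default podman network definition is rendered from a template before being copied into place. A plausible reconstruction of that task, with destination, ownership and mode taken from the logged copy call and the task name assumed:

    - name: Write the default podman network definition
      ansible.builtin.template:
        src: podman_network_config.j2
        dest: /etc/containers/networks/podman.json
        owner: root
        group: root
        mode: "0644"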
Oct 11 04:20:50 compute-0 sudo[50985]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqvwynednvsxjedcdfqzkazdlfibsksp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156449.7931519-94-154861217660534/AnsiballZ_stat.py'
Oct 11 04:20:50 compute-0 sudo[50985]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:20:50 compute-0 python3.9[50987]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:20:50 compute-0 sudo[50985]: pam_unix(sudo:session): session closed for user root
Oct 11 04:20:50 compute-0 sudo[51108]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxnabhixsuqtxuoxhvqlbhfajysxetne ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156449.7931519-94-154861217660534/AnsiballZ_copy.py'
Oct 11 04:20:50 compute-0 sudo[51108]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:20:51 compute-0 python3.9[51110]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760156449.7931519-94-154861217660534/.source.conf follow=False _original_basename=registries.conf.j2 checksum=888b975826b2c6c0439200ce8ac9219b96c0abdf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:20:51 compute-0 sudo[51108]: pam_unix(sudo:session): session closed for user root
Oct 11 04:20:51 compute-0 sudo[51260]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jchkikqtlphsrqrzkiyssxoxqfwljmms ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156451.276452-110-204273389581608/AnsiballZ_ini_file.py'
Oct 11 04:20:51 compute-0 sudo[51260]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:20:52 compute-0 python3.9[51262]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:20:52 compute-0 sudo[51260]: pam_unix(sudo:session): session closed for user root
Oct 11 04:20:52 compute-0 sudo[51412]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tiwiroclkxsfdssjbbvlylrnczobgnpe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156452.1957278-110-10160712529329/AnsiballZ_ini_file.py'
Oct 11 04:20:52 compute-0 sudo[51412]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:20:52 compute-0 python3.9[51414]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:20:52 compute-0 sudo[51412]: pam_unix(sudo:session): session closed for user root
Oct 11 04:20:53 compute-0 sudo[51564]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwoncmwvzchclpmjvdqepigqicvbbmym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156452.908448-110-14583235975322/AnsiballZ_ini_file.py'
Oct 11 04:20:53 compute-0 sudo[51564]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:20:53 compute-0 python3.9[51566]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:20:53 compute-0 sudo[51564]: pam_unix(sudo:session): session closed for user root
Oct 11 04:20:53 compute-0 sudo[51716]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wizoiaawiuazqowciqmxlskrigctvzzl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156453.574275-110-140014176430888/AnsiballZ_ini_file.py'
Oct 11 04:20:53 compute-0 sudo[51716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:20:54 compute-0 python3.9[51718]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:20:54 compute-0 sudo[51716]: pam_unix(sudo:session): session closed for user root
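The four ini_file calls above (pids_limit, events_logger, runtime, network_backend) all target /etc/containers/containers.conf with identical ownership and SELinux type, so they are probably expressed as one looped task; the loop structure and task name are assumptions, every section/option/value pair is taken verbatim from the log:

    - name: Tune containers.conf for EDPM (reconstruction)
      community.general.ini_file:
        path: /etc/containers/containers.conf
        section: "{{ item.section }}"
        option: "{{ item.option }}"
        value: "{{ item.value }}"
        owner: root
        group: root
        mode: "0644"
        setype: etc_t
        create: true
      loop:
        - { section: containers, option: pids_limit, value: "4096" }
        - { section: engine, option: events_logger, value: '"journald"' }
        - { section: engine, option: runtime, value: '"crun"' }
        - { section: network, option: network_backend, value: '"netavark"' }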
Oct 11 04:20:54 compute-0 sudo[51868]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcansxjorjxlkgzgzrjqwhgbjppugdic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156454.4108043-141-114788099309252/AnsiballZ_dnf.py'
Oct 11 04:20:54 compute-0 sudo[51868]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:20:55 compute-0 python3.9[51870]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 11 04:20:56 compute-0 sudo[51868]: pam_unix(sudo:session): session closed for user root
Oct 11 04:20:56 compute-0 sudo[52021]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ycdnusbyyqhkqrckhrvtjqqlhcranjuk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156456.5156245-152-190733760262356/AnsiballZ_setup.py'
Oct 11 04:20:56 compute-0 sudo[52021]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:20:57 compute-0 python3.9[52023]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 04:20:57 compute-0 sudo[52021]: pam_unix(sudo:session): session closed for user root
Oct 11 04:20:57 compute-0 sudo[52175]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvwafwqjwffxfipskgdyjhvgdzldcqnw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156457.3744512-160-132699652754464/AnsiballZ_stat.py'
Oct 11 04:20:57 compute-0 sudo[52175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:20:57 compute-0 python3.9[52177]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:20:57 compute-0 sudo[52175]: pam_unix(sudo:session): session closed for user root
Oct 11 04:20:58 compute-0 sudo[52327]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfjoxeirujpfnptnrloolbhmlfqdifxd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156458.2571597-169-230032788046019/AnsiballZ_stat.py'
Oct 11 04:20:58 compute-0 sudo[52327]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:20:58 compute-0 python3.9[52329]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:20:58 compute-0 sudo[52327]: pam_unix(sudo:session): session closed for user root
Oct 11 04:20:59 compute-0 sudo[52479]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nclxdoxhqxhmeuxnjgipmgzlhylofivl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156459.017384-179-71892185327329/AnsiballZ_service_facts.py'
Oct 11 04:20:59 compute-0 sudo[52479]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:20:59 compute-0 python3.9[52481]: ansible-service_facts Invoked
Oct 11 04:20:59 compute-0 network[52498]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 11 04:20:59 compute-0 network[52499]: 'network-scripts' will be removed from distribution in near future.
Oct 11 04:20:59 compute-0 network[52500]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 11 04:21:04 compute-0 sudo[52479]: pam_unix(sudo:session): session closed for user root
Oct 11 04:21:05 compute-0 sudo[52785]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aettzcoqdqkjsrepxhwfvkukialoubpx ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1760156465.099276-192-147119904499113/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1760156465.099276-192-147119904499113/args'
Oct 11 04:21:05 compute-0 sudo[52785]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:21:05 compute-0 sudo[52785]: pam_unix(sudo:session): session closed for user root
Oct 11 04:21:06 compute-0 sudo[52952]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krmlwppympkevpvnvojomclrkrtaieht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156465.9646747-203-167407649729070/AnsiballZ_dnf.py'
Oct 11 04:21:06 compute-0 sudo[52952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:21:06 compute-0 python3.9[52954]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 11 04:21:07 compute-0 sudo[52952]: pam_unix(sudo:session): session closed for user root
Oct 11 04:21:08 compute-0 sudo[53105]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvaincawokqzwghcszxxolafdnldfwck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156468.2001545-216-20105470826521/AnsiballZ_package_facts.py'
Oct 11 04:21:08 compute-0 sudo[53105]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:21:09 compute-0 python3.9[53107]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Oct 11 04:21:09 compute-0 sudo[53105]: pam_unix(sudo:session): session closed for user root
Oct 11 04:21:10 compute-0 sudo[53257]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqaazcejtusksksyenyqnhtxvxgfvisk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156469.954874-226-169685902593449/AnsiballZ_stat.py'
Oct 11 04:21:10 compute-0 sudo[53257]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:21:10 compute-0 python3.9[53259]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:21:10 compute-0 sudo[53257]: pam_unix(sudo:session): session closed for user root
Oct 11 04:21:10 compute-0 sudo[53382]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdjiheqixphutukirrmjtombcodhzekp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156469.954874-226-169685902593449/AnsiballZ_copy.py'
Oct 11 04:21:10 compute-0 sudo[53382]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:21:11 compute-0 python3.9[53384]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760156469.954874-226-169685902593449/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:21:11 compute-0 sudo[53382]: pam_unix(sudo:session): session closed for user root
Oct 11 04:21:11 compute-0 sudo[53536]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhdiezejlbimhoooelqpevicdzolfilp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156471.3810875-241-233130660666626/AnsiballZ_stat.py'
Oct 11 04:21:11 compute-0 sudo[53536]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:21:11 compute-0 python3.9[53538]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:21:11 compute-0 sudo[53536]: pam_unix(sudo:session): session closed for user root
Oct 11 04:21:12 compute-0 sudo[53661]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrsozlbpmiioqrhuclprlmnaxfiwpvvd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156471.3810875-241-233130660666626/AnsiballZ_copy.py'
Oct 11 04:21:12 compute-0 sudo[53661]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:21:12 compute-0 python3.9[53663]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760156471.3810875-241-233130660666626/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:21:12 compute-0 sudo[53661]: pam_unix(sudo:session): session closed for user root
Oct 11 04:21:13 compute-0 sudo[53815]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-scquzcsrwxmccgjygxqgwvspbquhqrqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156473.0757763-262-84354965226153/AnsiballZ_lineinfile.py'
Oct 11 04:21:13 compute-0 sudo[53815]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:21:13 compute-0 python3.9[53817]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:21:13 compute-0 sudo[53815]: pam_unix(sudo:session): session closed for user root
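The lineinfile call above pins PEERNTP=no in /etc/sysconfig/network so DHCP-supplied NTP servers cannot override the chrony configuration just written. Expressed as a task (name assumed, arguments from the log):

    - name: Prevent DHCP from managing NTP peers
      ansible.builtin.lineinfile:
        path: /etc/sysconfig/network
        regexp: '^PEERNTP='
        line: PEERNTP=no
        create: true
        mode: "0644"
        backup: true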
Oct 11 04:21:14 compute-0 sudo[53969]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqzmrmvgntqkmwbgqfshkumggpevszds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156474.3560758-277-59214457321590/AnsiballZ_setup.py'
Oct 11 04:21:14 compute-0 sudo[53969]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:21:15 compute-0 python3.9[53971]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 11 04:21:15 compute-0 sudo[53969]: pam_unix(sudo:session): session closed for user root
Oct 11 04:21:15 compute-0 sudo[54053]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-foaqhkyamcjbswgqiyknbcukayofcnvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156474.3560758-277-59214457321590/AnsiballZ_systemd.py'
Oct 11 04:21:15 compute-0 sudo[54053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:21:16 compute-0 python3.9[54055]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:21:16 compute-0 sudo[54053]: pam_unix(sudo:session): session closed for user root
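After templating /etc/chrony.conf and /etc/sysconfig/chronyd, the timesync role enables and starts the service; the logged systemd arguments correspond to a task like this (name assumed):

    - name: Enable and start chronyd
      ansible.builtin.systemd:
        name: chronyd
        enabled: true
        state: started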
Oct 11 04:21:17 compute-0 sudo[54207]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjaxvoqfaggqjozhchhpjsjuyfswbkmz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156476.6949303-293-162894255319291/AnsiballZ_setup.py'
Oct 11 04:21:17 compute-0 sudo[54207]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:21:17 compute-0 python3.9[54209]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 11 04:21:17 compute-0 sudo[54207]: pam_unix(sudo:session): session closed for user root
Oct 11 04:21:17 compute-0 sudo[54291]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqfhbutgyxycbmqfnkrqfkaucjhzpyqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156476.6949303-293-162894255319291/AnsiballZ_systemd.py'
Oct 11 04:21:17 compute-0 sudo[54291]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:21:18 compute-0 python3.9[54293]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 11 04:21:19 compute-0 chronyd[814]: chronyd exiting
Oct 11 04:21:19 compute-0 systemd[1]: Stopping NTP client/server...
Oct 11 04:21:19 compute-0 systemd[1]: chronyd.service: Deactivated successfully.
Oct 11 04:21:19 compute-0 systemd[1]: Stopped NTP client/server.
Oct 11 04:21:19 compute-0 systemd[1]: Starting NTP client/server...
Oct 11 04:21:19 compute-0 chronyd[54302]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Oct 11 04:21:19 compute-0 chronyd[54302]: Frequency -24.886 +/- 0.067 ppm read from /var/lib/chrony/drift
Oct 11 04:21:19 compute-0 chronyd[54302]: Loaded seccomp filter (level 2)
Oct 11 04:21:19 compute-0 systemd[1]: Started NTP client/server.
Oct 11 04:21:19 compute-0 sudo[54291]: pam_unix(sudo:session): session closed for user root
Oct 11 04:21:19 compute-0 sshd-session[49502]: Connection closed by 192.168.122.30 port 58256
Oct 11 04:21:19 compute-0 sshd-session[49499]: pam_unix(sshd:session): session closed for user zuul
Oct 11 04:21:19 compute-0 systemd[1]: session-12.scope: Deactivated successfully.
Oct 11 04:21:19 compute-0 systemd[1]: session-12.scope: Consumed 29.106s CPU time.
Oct 11 04:21:19 compute-0 systemd-logind[801]: Session 12 logged out. Waiting for processes to exit.
Oct 11 04:21:19 compute-0 systemd-logind[801]: Removed session 12.
Oct 11 04:21:24 compute-0 sshd-session[54328]: Accepted publickey for zuul from 192.168.122.30 port 59400 ssh2: ECDSA SHA256:fsXfQ2jtOYwhi5zevC2AykH2PVPp1BPcbFOVNPoZIaQ
Oct 11 04:21:24 compute-0 systemd-logind[801]: New session 13 of user zuul.
Oct 11 04:21:24 compute-0 systemd[1]: Started Session 13 of User zuul.
Oct 11 04:21:24 compute-0 sshd-session[54328]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 11 04:21:25 compute-0 sudo[54481]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqtbgsqjudnnximwvzbbbwtxienjwlmx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156484.5326002-22-85411712355840/AnsiballZ_file.py'
Oct 11 04:21:25 compute-0 sudo[54481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:21:25 compute-0 python3.9[54483]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:21:25 compute-0 sudo[54481]: pam_unix(sudo:session): session closed for user root
Oct 11 04:21:26 compute-0 sudo[54633]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-paanjaqcnrplguoqkauxyewgwwhtqcdy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156485.583502-34-138757896121052/AnsiballZ_stat.py'
Oct 11 04:21:26 compute-0 sudo[54633]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:21:26 compute-0 python3.9[54635]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:21:26 compute-0 sudo[54633]: pam_unix(sudo:session): session closed for user root
Oct 11 04:21:26 compute-0 sudo[54756]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtslblcskfgclpboheitfsqicngezfpe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156485.583502-34-138757896121052/AnsiballZ_copy.py'
Oct 11 04:21:26 compute-0 sudo[54756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:21:27 compute-0 python3.9[54758]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760156485.583502-34-138757896121052/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:21:27 compute-0 sudo[54756]: pam_unix(sudo:session): session closed for user root
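The _original_basename=firewall.yaml.j2 above shows the Ceph network firewall snippet being rendered into the edpm-config staging area for later firewall assembly. A likely form of the task, with path and mode from the log and the name assumed:

    - name: Stage Ceph network firewall rules
      ansible.builtin.template:
        src: firewall.yaml.j2
        dest: /var/lib/edpm-config/firewall/ceph-networks.yaml
        mode: "0644"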
Oct 11 04:21:27 compute-0 sshd-session[54331]: Connection closed by 192.168.122.30 port 59400
Oct 11 04:21:27 compute-0 sshd-session[54328]: pam_unix(sshd:session): session closed for user zuul
Oct 11 04:21:27 compute-0 systemd[1]: session-13.scope: Deactivated successfully.
Oct 11 04:21:27 compute-0 systemd[1]: session-13.scope: Consumed 2.011s CPU time.
Oct 11 04:21:27 compute-0 systemd-logind[801]: Session 13 logged out. Waiting for processes to exit.
Oct 11 04:21:27 compute-0 systemd-logind[801]: Removed session 13.
Oct 11 04:21:32 compute-0 sshd-session[54783]: Accepted publickey for zuul from 192.168.122.30 port 45716 ssh2: ECDSA SHA256:fsXfQ2jtOYwhi5zevC2AykH2PVPp1BPcbFOVNPoZIaQ
Oct 11 04:21:32 compute-0 systemd-logind[801]: New session 14 of user zuul.
Oct 11 04:21:32 compute-0 systemd[1]: Started Session 14 of User zuul.
Oct 11 04:21:32 compute-0 sshd-session[54783]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 11 04:21:33 compute-0 python3.9[54936]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 04:21:34 compute-0 sudo[55090]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ictulegfxmcnjarjkynwzhcufkyqpfuq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156494.1132438-33-204388854502002/AnsiballZ_file.py'
Oct 11 04:21:34 compute-0 sudo[55090]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:21:34 compute-0 python3.9[55092]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:21:34 compute-0 sudo[55090]: pam_unix(sudo:session): session closed for user root
Oct 11 04:21:35 compute-0 sudo[55265]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrvrwwcatizzixnynniewosiupqpuwlq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156495.0201948-41-248734547145797/AnsiballZ_stat.py'
Oct 11 04:21:35 compute-0 sudo[55265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:21:35 compute-0 python3.9[55267]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:21:35 compute-0 sudo[55265]: pam_unix(sudo:session): session closed for user root
Oct 11 04:21:36 compute-0 sudo[55388]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqvszfwcfuspigdadkqorsvhkjhgirlj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156495.0201948-41-248734547145797/AnsiballZ_copy.py'
Oct 11 04:21:36 compute-0 sudo[55388]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:21:36 compute-0 python3.9[55390]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1760156495.0201948-41-248734547145797/.source.json _original_basename=.4_kugxi8 follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:21:36 compute-0 sudo[55388]: pam_unix(sudo:session): session closed for user root
Oct 11 04:21:37 compute-0 sudo[55540]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebkdchuzvdkuykvltulrsxavprfwzjqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156496.8624575-64-64311966698386/AnsiballZ_stat.py'
Oct 11 04:21:37 compute-0 sudo[55540]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:21:37 compute-0 python3.9[55542]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:21:37 compute-0 sudo[55540]: pam_unix(sudo:session): session closed for user root
Oct 11 04:21:37 compute-0 sudo[55663]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttfcerhwcoeghwrvkgtuboorgzrlczij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156496.8624575-64-64311966698386/AnsiballZ_copy.py'
Oct 11 04:21:37 compute-0 sudo[55663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:21:37 compute-0 python3.9[55665]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760156496.8624575-64-64311966698386/.source _original_basename=.oj83kj3g follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:21:37 compute-0 sudo[55663]: pam_unix(sudo:session): session closed for user root
Oct 11 04:21:38 compute-0 sudo[55815]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nskqusplacmzelzpeeuahkskeqopqmaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156498.2021883-80-267811896479795/AnsiballZ_file.py'
Oct 11 04:21:38 compute-0 sudo[55815]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:21:38 compute-0 python3.9[55817]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:21:38 compute-0 sudo[55815]: pam_unix(sudo:session): session closed for user root
Oct 11 04:21:39 compute-0 sudo[55967]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjxlbdavzqvtzxaoahneirkgavmhpuhj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156498.8761668-88-206247981775093/AnsiballZ_stat.py'
Oct 11 04:21:39 compute-0 sudo[55967]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:21:39 compute-0 python3.9[55969]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:21:39 compute-0 sudo[55967]: pam_unix(sudo:session): session closed for user root
Oct 11 04:21:39 compute-0 sudo[56090]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfloehduvhonzdhhwmbmcfripdofqkue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156498.8761668-88-206247981775093/AnsiballZ_copy.py'
Oct 11 04:21:39 compute-0 sudo[56090]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:21:39 compute-0 python3.9[56092]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760156498.8761668-88-206247981775093/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:21:40 compute-0 sudo[56090]: pam_unix(sudo:session): session closed for user root
Oct 11 04:21:40 compute-0 sudo[56242]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zyqscssnosguvqrvrhadkdrhbvxzkdhq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156500.1354647-88-180606667444336/AnsiballZ_stat.py'
Oct 11 04:21:40 compute-0 sudo[56242]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:21:40 compute-0 python3.9[56244]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:21:40 compute-0 sudo[56242]: pam_unix(sudo:session): session closed for user root
Oct 11 04:21:41 compute-0 sudo[56365]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-huziyjhlcnnluiiaitrcaqnmwweyncvv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156500.1354647-88-180606667444336/AnsiballZ_copy.py'
Oct 11 04:21:41 compute-0 sudo[56365]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:21:41 compute-0 python3.9[56367]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760156500.1354647-88-180606667444336/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:21:41 compute-0 sudo[56365]: pam_unix(sudo:session): session closed for user root
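The two copies above install the edpm-container-shutdown and edpm-start-podman-container helpers into /var/local/libexec with the container_file_t SELinux type. Since both use identical ownership and mode, a looped task is the natural reconstruction; the loop and task name are assumptions, everything else mirrors the logged arguments:

    - name: Install EDPM container helper scripts (reconstruction)
      ansible.builtin.copy:
        src: "{{ item }}"
        dest: "/var/local/libexec/{{ item }}"
        owner: root
        group: root
        mode: "0700"
        setype: container_file_t
      loop:
        - edpm-container-shutdown
        - edpm-start-podman-container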
Oct 11 04:21:41 compute-0 sudo[56517]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dugktcrjqlvpqzgwyncpqmddupxbindp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156501.379924-117-221413411684531/AnsiballZ_file.py'
Oct 11 04:21:41 compute-0 sudo[56517]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:21:41 compute-0 python3.9[56519]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:21:41 compute-0 sudo[56517]: pam_unix(sudo:session): session closed for user root
Oct 11 04:21:42 compute-0 sudo[56669]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhhncoskufqvhnskcetkjrakawfntiaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156502.043643-125-163699013264047/AnsiballZ_stat.py'
Oct 11 04:21:42 compute-0 sudo[56669]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:21:42 compute-0 python3.9[56671]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:21:42 compute-0 sudo[56669]: pam_unix(sudo:session): session closed for user root
Oct 11 04:21:42 compute-0 sudo[56792]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-deukksvrnruokzipfjbzggrbioozjbqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156502.043643-125-163699013264047/AnsiballZ_copy.py'
Oct 11 04:21:42 compute-0 sudo[56792]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:21:43 compute-0 python3.9[56794]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760156502.043643-125-163699013264047/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:21:43 compute-0 sudo[56792]: pam_unix(sudo:session): session closed for user root
Oct 11 04:21:43 compute-0 sudo[56944]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgfpqwrmumxawsrwjmguoltykswtdtun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156503.2876656-140-234725819070360/AnsiballZ_stat.py'
Oct 11 04:21:43 compute-0 sudo[56944]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:21:43 compute-0 python3.9[56946]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:21:43 compute-0 sudo[56944]: pam_unix(sudo:session): session closed for user root
Oct 11 04:21:44 compute-0 sudo[57067]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bnuayvoepblslxkwmuxdgzgjxhoclrhd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156503.2876656-140-234725819070360/AnsiballZ_copy.py'
Oct 11 04:21:44 compute-0 sudo[57067]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:21:44 compute-0 python3.9[57069]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760156503.2876656-140-234725819070360/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:21:44 compute-0 sudo[57067]: pam_unix(sudo:session): session closed for user root
Oct 11 04:21:45 compute-0 sudo[57219]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwezluqzuyzxkfmcklrysxzfvwydyahi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156504.55124-155-136658338436532/AnsiballZ_systemd.py'
Oct 11 04:21:45 compute-0 sudo[57219]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:21:45 compute-0 python3.9[57221]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:21:45 compute-0 systemd[1]: Reloading.
Oct 11 04:21:45 compute-0 systemd-sysv-generator[57247]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:21:45 compute-0 systemd-rc-local-generator[57241]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:21:45 compute-0 systemd[1]: Reloading.
Oct 11 04:21:45 compute-0 systemd-rc-local-generator[57284]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:21:45 compute-0 systemd-sysv-generator[57289]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:21:45 compute-0 systemd[1]: Starting EDPM Container Shutdown...
Oct 11 04:21:45 compute-0 systemd[1]: Finished EDPM Container Shutdown.
Oct 11 04:21:46 compute-0 sudo[57219]: pam_unix(sudo:session): session closed for user root
Oct 11 04:21:46 compute-0 sudo[57445]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tddxsvlxgfeymldvcogjujbpftegmekm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156506.1925485-163-274268892967710/AnsiballZ_stat.py'
Oct 11 04:21:46 compute-0 sudo[57445]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:21:46 compute-0 python3.9[57447]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:21:46 compute-0 sudo[57445]: pam_unix(sudo:session): session closed for user root
Oct 11 04:21:47 compute-0 sudo[57568]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpxwuaitbnclixmbqcahfjdsqjtsvrym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156506.1925485-163-274268892967710/AnsiballZ_copy.py'
Oct 11 04:21:47 compute-0 sudo[57568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:21:47 compute-0 python3.9[57570]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760156506.1925485-163-274268892967710/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:21:47 compute-0 sudo[57568]: pam_unix(sudo:session): session closed for user root
Oct 11 04:21:47 compute-0 sudo[57720]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qacgxjfovishwgcyblubqrycgtgeqxms ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156507.5580006-178-175887765066949/AnsiballZ_stat.py'
Oct 11 04:21:47 compute-0 sudo[57720]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:21:48 compute-0 python3.9[57722]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:21:48 compute-0 sudo[57720]: pam_unix(sudo:session): session closed for user root
Oct 11 04:21:48 compute-0 sudo[57843]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wurohwnuoebxnzrljrvlbhgtrpgfwhlc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156507.5580006-178-175887765066949/AnsiballZ_copy.py'
Oct 11 04:21:48 compute-0 sudo[57843]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:21:48 compute-0 python3.9[57845]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760156507.5580006-178-175887765066949/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:21:48 compute-0 sudo[57843]: pam_unix(sudo:session): session closed for user root
Oct 11 04:21:49 compute-0 sudo[57995]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awuodduziloybvffxckapfibqjbwojaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156508.9962223-193-110163193196550/AnsiballZ_systemd.py'
Oct 11 04:21:49 compute-0 sudo[57995]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:21:49 compute-0 python3.9[57997]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:21:49 compute-0 systemd[1]: Reloading.
Oct 11 04:21:49 compute-0 systemd-rc-local-generator[58025]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:21:49 compute-0 systemd-sysv-generator[58029]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:21:50 compute-0 systemd[1]: Reloading.
Oct 11 04:21:50 compute-0 systemd-rc-local-generator[58062]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:21:50 compute-0 systemd-sysv-generator[58065]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:21:50 compute-0 systemd[1]: Starting Create netns directory...
Oct 11 04:21:50 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 11 04:21:50 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 11 04:21:50 compute-0 systemd[1]: Finished Create netns directory.
Oct 11 04:21:50 compute-0 sudo[57995]: pam_unix(sudo:session): session closed for user root
Oct 11 04:21:51 compute-0 python3.9[58224]: ansible-ansible.builtin.service_facts Invoked
Oct 11 04:21:51 compute-0 network[58241]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 11 04:21:51 compute-0 network[58242]: 'network-scripts' will be removed from distribution in near future.
Oct 11 04:21:51 compute-0 network[58243]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 11 04:21:56 compute-0 sudo[58505]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjkvdtwzswgrqgkzsgxnfjoqijucnrjv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156515.9452155-209-16585551819966/AnsiballZ_systemd.py'
Oct 11 04:21:56 compute-0 sudo[58505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:21:56 compute-0 python3.9[58507]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:21:56 compute-0 systemd[1]: Reloading.
Oct 11 04:21:56 compute-0 systemd-rc-local-generator[58540]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:21:56 compute-0 systemd-sysv-generator[58543]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:21:56 compute-0 systemd[1]: Stopping IPv4 firewall with iptables...
Oct 11 04:21:57 compute-0 iptables.init[58547]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Oct 11 04:21:57 compute-0 iptables.init[58547]: iptables: Flushing firewall rules: [  OK  ]
Oct 11 04:21:57 compute-0 systemd[1]: iptables.service: Deactivated successfully.
Oct 11 04:21:57 compute-0 systemd[1]: Stopped IPv4 firewall with iptables.
Oct 11 04:21:57 compute-0 sudo[58505]: pam_unix(sudo:session): session closed for user root
Oct 11 04:21:57 compute-0 sudo[58741]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpmrrfwlrwkqmtuohunzyatwdikifmhr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156517.5477471-209-50114434475660/AnsiballZ_systemd.py'
Oct 11 04:21:57 compute-0 sudo[58741]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:21:58 compute-0 python3.9[58743]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:21:58 compute-0 sudo[58741]: pam_unix(sudo:session): session closed for user root
Oct 11 04:21:58 compute-0 sudo[58895]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-feffqoemoinmzqklxpzhimmxkgxydcnj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156518.5331779-225-133426157736908/AnsiballZ_systemd.py'
Oct 11 04:21:58 compute-0 sudo[58895]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:21:59 compute-0 python3.9[58897]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:21:59 compute-0 systemd[1]: Reloading.
Oct 11 04:21:59 compute-0 systemd-rc-local-generator[58926]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:21:59 compute-0 systemd-sysv-generator[58929]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:21:59 compute-0 systemd[1]: Starting Netfilter Tables...
Oct 11 04:21:59 compute-0 systemd[1]: Finished Netfilter Tables.
Oct 11 04:21:59 compute-0 sudo[58895]: pam_unix(sudo:session): session closed for user root
Oct 11 04:21:59 compute-0 sshd[1006]: Timeout before authentication for connection from 43.224.126.107 to 38.102.83.148, pid = 45108
Oct 11 04:22:00 compute-0 sudo[59087]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xidvcbcwyyrjjetcbfeihshlrjklcxez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156519.756543-233-201365909416714/AnsiballZ_command.py'
Oct 11 04:22:00 compute-0 sudo[59087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:22:00 compute-0 python3.9[59089]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:22:00 compute-0 sudo[59087]: pam_unix(sudo:session): session closed for user root
Oct 11 04:22:01 compute-0 sudo[59240]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tolihnsoduqnsrlaiitrqxjfqpurxkty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156521.0064254-247-26890070424650/AnsiballZ_stat.py'
Oct 11 04:22:01 compute-0 sudo[59240]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:22:01 compute-0 python3.9[59242]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:22:01 compute-0 sudo[59240]: pam_unix(sudo:session): session closed for user root
Oct 11 04:22:02 compute-0 sudo[59365]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhqrkzmhcddvkhvdakzzqvazzewphwod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156521.0064254-247-26890070424650/AnsiballZ_copy.py'
Oct 11 04:22:02 compute-0 sudo[59365]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:22:02 compute-0 python3.9[59367]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760156521.0064254-247-26890070424650/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=4729b6ffc5b555fa142bf0b6e6dc15609cb89a22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:22:02 compute-0 sudo[59365]: pam_unix(sudo:session): session closed for user root
Oct 11 04:22:03 compute-0 python3.9[59518]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 11 04:22:03 compute-0 polkitd[6176]: Registered Authentication Agent for unix-process:59520:270332 (system bus name :1.524 [/usr/bin/pkttyagent --notify-fd 5 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Oct 11 04:22:28 compute-0 polkitd[6176]: Unregistered Authentication Agent for unix-process:59520:270332 (system bus name :1.524, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Oct 11 04:22:28 compute-0 polkit-agent-helper-1[59532]: pam_unix(polkit-1:auth): conversation failed
Oct 11 04:22:28 compute-0 polkit-agent-helper-1[59532]: pam_unix(polkit-1:auth): auth could not identify password for [root]
Oct 11 04:22:28 compute-0 polkitd[6176]: Operator of unix-process:59520:270332 FAILED to authenticate to gain authorization for action org.freedesktop.systemd1.manage-units for system-bus-name::1.523 [<unknown>] (owned by unix-user:zuul)
Oct 11 04:22:28 compute-0 sshd-session[54786]: Connection closed by 192.168.122.30 port 45716
Oct 11 04:22:28 compute-0 sshd-session[54783]: pam_unix(sshd:session): session closed for user zuul
Oct 11 04:22:28 compute-0 systemd[1]: session-14.scope: Deactivated successfully.
Oct 11 04:22:28 compute-0 systemd[1]: session-14.scope: Consumed 22.097s CPU time.
Oct 11 04:22:28 compute-0 systemd-logind[801]: Session 14 logged out. Waiting for processes to exit.
Oct 11 04:22:28 compute-0 systemd-logind[801]: Removed session 14.
Oct 11 04:22:40 compute-0 sshd-session[59558]: Accepted publickey for zuul from 192.168.122.30 port 37180 ssh2: ECDSA SHA256:fsXfQ2jtOYwhi5zevC2AykH2PVPp1BPcbFOVNPoZIaQ
Oct 11 04:22:40 compute-0 systemd-logind[801]: New session 15 of user zuul.
Oct 11 04:22:40 compute-0 systemd[1]: Started Session 15 of User zuul.
Oct 11 04:22:40 compute-0 sshd-session[59558]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 11 04:22:42 compute-0 python3.9[59711]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 04:22:43 compute-0 sudo[59865]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qirxkvlikjjbfzpeporjfjyxzpdpirnx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156562.6335328-33-95630163337476/AnsiballZ_file.py'
Oct 11 04:22:43 compute-0 sudo[59865]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:22:43 compute-0 python3.9[59867]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:22:43 compute-0 sudo[59865]: pam_unix(sudo:session): session closed for user root
Oct 11 04:22:44 compute-0 sudo[60040]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mczumixiewkpivlxzhqcpnmagbvzmgzy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156563.5711713-41-6669289747742/AnsiballZ_stat.py'
Oct 11 04:22:44 compute-0 sudo[60040]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:22:44 compute-0 python3.9[60042]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:22:44 compute-0 sudo[60040]: pam_unix(sudo:session): session closed for user root
Oct 11 04:22:44 compute-0 sudo[60118]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdeajouroiuxajfsdcliealyrrytmesv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156563.5711713-41-6669289747742/AnsiballZ_file.py'
Oct 11 04:22:44 compute-0 sudo[60118]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:22:44 compute-0 python3.9[60120]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.9d_cj3m2 recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:22:45 compute-0 sudo[60118]: pam_unix(sudo:session): session closed for user root
Oct 11 04:22:45 compute-0 sudo[60270]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxtzokgqdqmawjmxrvrcgtdlenjkfeyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156565.3613775-61-222941633423895/AnsiballZ_stat.py'
Oct 11 04:22:45 compute-0 sudo[60270]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:22:46 compute-0 python3.9[60272]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:22:46 compute-0 sudo[60270]: pam_unix(sudo:session): session closed for user root
Oct 11 04:22:46 compute-0 sudo[60348]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ektqnratfcizhltjpimddyttydahxvub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156565.3613775-61-222941633423895/AnsiballZ_file.py'
Oct 11 04:22:46 compute-0 sudo[60348]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:22:46 compute-0 python3.9[60350]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.isaumn6j recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:22:46 compute-0 sudo[60348]: pam_unix(sudo:session): session closed for user root
Oct 11 04:22:47 compute-0 sudo[60500]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uahbqzxjxuhpduvfskckfwsyqtuvoegs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156566.7600343-74-184780230321872/AnsiballZ_file.py'
Oct 11 04:22:47 compute-0 sudo[60500]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:22:47 compute-0 python3.9[60502]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:22:47 compute-0 sudo[60500]: pam_unix(sudo:session): session closed for user root
Oct 11 04:22:47 compute-0 sudo[60652]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujavuprvhpnbakrmbxntlfwkimoqxmew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156567.5193527-82-224037831749176/AnsiballZ_stat.py'
Oct 11 04:22:47 compute-0 sudo[60652]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:22:48 compute-0 python3.9[60654]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:22:48 compute-0 sudo[60652]: pam_unix(sudo:session): session closed for user root
Oct 11 04:22:48 compute-0 sudo[60730]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smrxdjodmxxpyoeircuzretlihqvqqbn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156567.5193527-82-224037831749176/AnsiballZ_file.py'
Oct 11 04:22:48 compute-0 sudo[60730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:22:48 compute-0 python3.9[60732]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:22:48 compute-0 sudo[60730]: pam_unix(sudo:session): session closed for user root
Oct 11 04:22:49 compute-0 sudo[60882]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvoxafnbfqsuzqyakiccdxsjhssrrnus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156568.844359-82-100997681088573/AnsiballZ_stat.py'
Oct 11 04:22:49 compute-0 sudo[60882]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:22:49 compute-0 python3.9[60884]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:22:49 compute-0 sudo[60882]: pam_unix(sudo:session): session closed for user root
Oct 11 04:22:49 compute-0 sudo[60960]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lryddmxhnxgmsifsgxffyrzkzltnetza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156568.844359-82-100997681088573/AnsiballZ_file.py'
Oct 11 04:22:49 compute-0 sudo[60960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:22:49 compute-0 python3.9[60962]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:22:49 compute-0 sudo[60960]: pam_unix(sudo:session): session closed for user root
Oct 11 04:22:50 compute-0 sudo[61112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjqvejppynxpiiagmeampxxcdghqvlis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156570.1043434-105-153047821272989/AnsiballZ_file.py'
Oct 11 04:22:50 compute-0 sudo[61112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:22:50 compute-0 python3.9[61114]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:22:50 compute-0 sudo[61112]: pam_unix(sudo:session): session closed for user root
Oct 11 04:22:51 compute-0 sudo[61264]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxooofqxzhiaxzftoindtudyhmckozdc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156570.8782341-113-231782531947183/AnsiballZ_stat.py'
Oct 11 04:22:51 compute-0 sudo[61264]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:22:51 compute-0 python3.9[61266]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:22:51 compute-0 sudo[61264]: pam_unix(sudo:session): session closed for user root
Oct 11 04:22:51 compute-0 sudo[61342]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwwjdosjbstwrpuikyvaoemxmcgzvibh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156570.8782341-113-231782531947183/AnsiballZ_file.py'
Oct 11 04:22:51 compute-0 sudo[61342]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:22:52 compute-0 python3.9[61344]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:22:52 compute-0 sudo[61342]: pam_unix(sudo:session): session closed for user root
Oct 11 04:22:52 compute-0 sudo[61494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hycxpaahyhyrgrcbtyfcrihhgcnstihr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156572.2598681-125-195900961373894/AnsiballZ_stat.py'
Oct 11 04:22:52 compute-0 sudo[61494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:22:52 compute-0 python3.9[61496]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:22:52 compute-0 sudo[61494]: pam_unix(sudo:session): session closed for user root
Oct 11 04:22:53 compute-0 sudo[61572]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hejbrixilveszygwdhjteapxzwqbqxut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156572.2598681-125-195900961373894/AnsiballZ_file.py'
Oct 11 04:22:53 compute-0 sudo[61572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:22:53 compute-0 python3.9[61574]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:22:53 compute-0 sudo[61572]: pam_unix(sudo:session): session closed for user root
Oct 11 04:22:54 compute-0 sudo[61724]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afgaelxdlzrtoqnhkqcoxtinxlmaefvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156573.5225065-137-145839582621151/AnsiballZ_systemd.py'
Oct 11 04:22:54 compute-0 sudo[61724]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:22:54 compute-0 python3.9[61726]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:22:54 compute-0 systemd[1]: Reloading.
Oct 11 04:22:54 compute-0 systemd-rc-local-generator[61747]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:22:54 compute-0 systemd-sysv-generator[61753]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:22:54 compute-0 sudo[61724]: pam_unix(sudo:session): session closed for user root
Oct 11 04:22:55 compute-0 sudo[61913]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwqtkxvraympadabtqkgkjxyozajoznn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156574.8353782-145-94586177350653/AnsiballZ_stat.py'
Oct 11 04:22:55 compute-0 sudo[61913]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:22:55 compute-0 python3.9[61915]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:22:55 compute-0 sudo[61913]: pam_unix(sudo:session): session closed for user root
Oct 11 04:22:55 compute-0 sudo[61991]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxhnhfbnvishcidhfmyrvsidldbgkeso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156574.8353782-145-94586177350653/AnsiballZ_file.py'
Oct 11 04:22:55 compute-0 sudo[61991]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:22:55 compute-0 python3.9[61993]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:22:55 compute-0 sudo[61991]: pam_unix(sudo:session): session closed for user root
Oct 11 04:22:56 compute-0 sudo[62143]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybezyuyigrgvcpempcemikzddkoijrfw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156576.0927372-157-256094857959912/AnsiballZ_stat.py'
Oct 11 04:22:56 compute-0 sudo[62143]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:22:56 compute-0 python3.9[62145]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:22:56 compute-0 sudo[62143]: pam_unix(sudo:session): session closed for user root
Oct 11 04:22:57 compute-0 sudo[62221]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcpuknmgtqcdtjnjhkzwmzjbyppwiwiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156576.0927372-157-256094857959912/AnsiballZ_file.py'
Oct 11 04:22:57 compute-0 sudo[62221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:22:57 compute-0 python3.9[62223]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:22:57 compute-0 sudo[62221]: pam_unix(sudo:session): session closed for user root
Oct 11 04:22:57 compute-0 sudo[62373]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udzfhvpxbohsfdwygkmwvxmvwgjjvfmy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156577.4880722-169-241670158914905/AnsiballZ_systemd.py'
Oct 11 04:22:57 compute-0 sudo[62373]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:22:58 compute-0 python3.9[62375]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:22:58 compute-0 systemd[1]: Reloading.
Oct 11 04:22:58 compute-0 systemd-rc-local-generator[62401]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:22:58 compute-0 systemd-sysv-generator[62407]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:22:58 compute-0 systemd[1]: Starting Create netns directory...
Oct 11 04:22:58 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 11 04:22:58 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 11 04:22:58 compute-0 systemd[1]: Finished Create netns directory.
Oct 11 04:22:58 compute-0 sudo[62373]: pam_unix(sudo:session): session closed for user root
Oct 11 04:22:59 compute-0 python3.9[62566]: ansible-ansible.builtin.service_facts Invoked
Oct 11 04:22:59 compute-0 network[62583]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 11 04:22:59 compute-0 network[62584]: 'network-scripts' will be removed from distribution in near future.
Oct 11 04:22:59 compute-0 network[62585]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 11 04:23:03 compute-0 sudo[62846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yirdbgixnxgukfkgxzmlldltmsswotpd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156583.5597715-195-96274253364081/AnsiballZ_stat.py'
Oct 11 04:23:03 compute-0 sudo[62846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:23:04 compute-0 python3.9[62848]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:23:04 compute-0 sudo[62846]: pam_unix(sudo:session): session closed for user root
Oct 11 04:23:04 compute-0 sudo[62924]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rpptcqmlgmsbxcgtdvlqaunucnktioor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156583.5597715-195-96274253364081/AnsiballZ_file.py'
Oct 11 04:23:04 compute-0 sudo[62924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:23:04 compute-0 python3.9[62926]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:23:04 compute-0 sudo[62924]: pam_unix(sudo:session): session closed for user root
Oct 11 04:23:05 compute-0 sudo[63076]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omspzwpzemylxfbbjvxdnqbfzlichkbb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156584.871644-208-148863145296807/AnsiballZ_file.py'
Oct 11 04:23:05 compute-0 sudo[63076]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:23:05 compute-0 python3.9[63078]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:23:05 compute-0 sudo[63076]: pam_unix(sudo:session): session closed for user root
Oct 11 04:23:05 compute-0 sudo[63228]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwgcnpayyisckiqgljpwojjzdkvtxnbg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156585.6130896-216-273327074372260/AnsiballZ_stat.py'
Oct 11 04:23:05 compute-0 sudo[63228]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:23:06 compute-0 python3.9[63230]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:23:06 compute-0 sudo[63228]: pam_unix(sudo:session): session closed for user root
Oct 11 04:23:06 compute-0 sudo[63351]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxqkwllrsbqhvsghuhvnyqbnkupywnez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156585.6130896-216-273327074372260/AnsiballZ_copy.py'
Oct 11 04:23:06 compute-0 sudo[63351]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:23:07 compute-0 python3.9[63353]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760156585.6130896-216-273327074372260/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:23:07 compute-0 sudo[63351]: pam_unix(sudo:session): session closed for user root
Oct 11 04:23:07 compute-0 sudo[63503]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdkadrbdjkiteuwqediuuvyykzxcryyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156587.3547993-234-162230513457979/AnsiballZ_timezone.py'
Oct 11 04:23:07 compute-0 sudo[63503]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:23:08 compute-0 python3.9[63505]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Oct 11 04:23:08 compute-0 systemd[1]: Starting Time & Date Service...
Oct 11 04:23:08 compute-0 systemd[1]: Started Time & Date Service.
Oct 11 04:23:08 compute-0 sudo[63503]: pam_unix(sudo:session): session closed for user root
Oct 11 04:23:08 compute-0 sudo[63659]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxzeilqsshpnhgthsdpuguwtpzcmdxar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156588.4798584-243-38009701360052/AnsiballZ_file.py'
Oct 11 04:23:08 compute-0 sudo[63659]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:23:09 compute-0 python3.9[63661]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:23:09 compute-0 sudo[63659]: pam_unix(sudo:session): session closed for user root
Oct 11 04:23:09 compute-0 sudo[63811]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etgyglsvrozrvivcovcjxdzytjilpdoy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156589.2924993-251-249051823886885/AnsiballZ_stat.py'
Oct 11 04:23:09 compute-0 sudo[63811]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:23:09 compute-0 python3.9[63813]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:23:09 compute-0 sudo[63811]: pam_unix(sudo:session): session closed for user root
Oct 11 04:23:10 compute-0 sudo[63934]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wcazzprfojjfcwsipbupdghsfdnjcrug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156589.2924993-251-249051823886885/AnsiballZ_copy.py'
Oct 11 04:23:10 compute-0 sudo[63934]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:23:10 compute-0 python3.9[63936]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760156589.2924993-251-249051823886885/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:23:10 compute-0 sudo[63934]: pam_unix(sudo:session): session closed for user root
Oct 11 04:23:11 compute-0 sudo[64086]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlzkfagskojxspwyzxsxjvvlmvygkkvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156590.7569735-266-208002509990817/AnsiballZ_stat.py'
Oct 11 04:23:11 compute-0 sudo[64086]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:23:11 compute-0 python3.9[64088]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:23:11 compute-0 sudo[64086]: pam_unix(sudo:session): session closed for user root
Oct 11 04:23:11 compute-0 sudo[64209]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvgflcarkykmtvyjsevfgzbjbdbiobko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156590.7569735-266-208002509990817/AnsiballZ_copy.py'
Oct 11 04:23:11 compute-0 sudo[64209]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:23:11 compute-0 python3.9[64211]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760156590.7569735-266-208002509990817/.source.yaml _original_basename=.l8vcwt5o follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:23:11 compute-0 sudo[64209]: pam_unix(sudo:session): session closed for user root
Oct 11 04:23:12 compute-0 sudo[64361]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygirtqrhswdmgtmknxyodqpcqywnropg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156592.1802073-281-33024122708830/AnsiballZ_stat.py'
Oct 11 04:23:12 compute-0 sudo[64361]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:23:12 compute-0 python3.9[64363]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:23:12 compute-0 sudo[64361]: pam_unix(sudo:session): session closed for user root
Oct 11 04:23:13 compute-0 sudo[64484]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edezhbwfvryqmngliunjxktffvzrezjz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156592.1802073-281-33024122708830/AnsiballZ_copy.py'
Oct 11 04:23:13 compute-0 sudo[64484]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:23:13 compute-0 python3.9[64486]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760156592.1802073-281-33024122708830/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:23:13 compute-0 sudo[64484]: pam_unix(sudo:session): session closed for user root
Oct 11 04:23:14 compute-0 sudo[64636]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdmttwxmdjsioqvaimmfukppytcvgtkh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156593.6567216-296-28933681126786/AnsiballZ_command.py'
Oct 11 04:23:14 compute-0 sudo[64636]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:23:14 compute-0 python3.9[64638]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:23:14 compute-0 sudo[64636]: pam_unix(sudo:session): session closed for user root
Oct 11 04:23:14 compute-0 sudo[64789]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szybhvluvrlcpogesheotwoiymwmniqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156594.5908456-304-270874846436498/AnsiballZ_command.py'
Oct 11 04:23:14 compute-0 sudo[64789]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:23:15 compute-0 python3.9[64791]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:23:15 compute-0 sudo[64789]: pam_unix(sudo:session): session closed for user root
Oct 11 04:23:15 compute-0 sudo[64942]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epgrmlhrjqzcbpxivczgnoufncrpynmx ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1760156595.4027572-312-259388603711645/AnsiballZ_edpm_nftables_from_files.py'
Oct 11 04:23:15 compute-0 sudo[64942]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:23:16 compute-0 python3[64944]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct 11 04:23:16 compute-0 sudo[64942]: pam_unix(sudo:session): session closed for user root
Oct 11 04:23:16 compute-0 sudo[65094]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rabmnewqfjrxmlpdknuociybvznvtift ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156596.3735516-320-32398170968179/AnsiballZ_stat.py'
Oct 11 04:23:16 compute-0 sudo[65094]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:23:16 compute-0 python3.9[65096]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:23:16 compute-0 sudo[65094]: pam_unix(sudo:session): session closed for user root
Oct 11 04:23:17 compute-0 sudo[65217]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ngsjsfjobtmptepxzkytlclwwuwxyaov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156596.3735516-320-32398170968179/AnsiballZ_copy.py'
Oct 11 04:23:17 compute-0 sudo[65217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:23:17 compute-0 python3.9[65219]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760156596.3735516-320-32398170968179/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:23:17 compute-0 sudo[65217]: pam_unix(sudo:session): session closed for user root
Oct 11 04:23:18 compute-0 sudo[65369]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbhogzgjudugzxqbxrzwqvnhvmpuqtbf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156597.6611276-335-172850410846074/AnsiballZ_stat.py'
Oct 11 04:23:18 compute-0 sudo[65369]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:23:18 compute-0 python3.9[65371]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:23:18 compute-0 sudo[65369]: pam_unix(sudo:session): session closed for user root
Oct 11 04:23:18 compute-0 sudo[65492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wuslludaacillbwxkvspbwchrskjemmy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156597.6611276-335-172850410846074/AnsiballZ_copy.py'
Oct 11 04:23:18 compute-0 sudo[65492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:23:18 compute-0 python3.9[65494]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760156597.6611276-335-172850410846074/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:23:18 compute-0 sudo[65492]: pam_unix(sudo:session): session closed for user root
Oct 11 04:23:19 compute-0 sudo[65644]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnsupccvqrxvnqbctfqvvecwdokhbngo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156598.9702613-350-72156928912543/AnsiballZ_stat.py'
Oct 11 04:23:19 compute-0 sudo[65644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:23:19 compute-0 python3.9[65646]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:23:19 compute-0 sudo[65644]: pam_unix(sudo:session): session closed for user root
Oct 11 04:23:19 compute-0 sudo[65767]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htmvwrpvdpefkmbgxrpzopndbvzwueha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156598.9702613-350-72156928912543/AnsiballZ_copy.py'
Oct 11 04:23:19 compute-0 sudo[65767]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:23:20 compute-0 python3.9[65769]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760156598.9702613-350-72156928912543/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:23:20 compute-0 sudo[65767]: pam_unix(sudo:session): session closed for user root
Oct 11 04:23:20 compute-0 sudo[65919]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itountigeajaungpcdhbgzgonulazmzw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156600.3342962-365-273853266239995/AnsiballZ_stat.py'
Oct 11 04:23:20 compute-0 sudo[65919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:23:21 compute-0 python3.9[65921]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:23:21 compute-0 sudo[65919]: pam_unix(sudo:session): session closed for user root
Oct 11 04:23:21 compute-0 sudo[66042]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fsuiqrpnvvdlvsabfaratcdsoznvjwjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156600.3342962-365-273853266239995/AnsiballZ_copy.py'
Oct 11 04:23:21 compute-0 sudo[66042]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:23:21 compute-0 python3.9[66044]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760156600.3342962-365-273853266239995/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:23:21 compute-0 sudo[66042]: pam_unix(sudo:session): session closed for user root
Oct 11 04:23:22 compute-0 sudo[66194]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brakfpbdlaldxrdoucvegvcqmzcywuvv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156601.975913-380-103182756048607/AnsiballZ_stat.py'
Oct 11 04:23:22 compute-0 sudo[66194]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:23:22 compute-0 python3.9[66196]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:23:22 compute-0 sudo[66194]: pam_unix(sudo:session): session closed for user root
Oct 11 04:23:23 compute-0 sudo[66317]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwtjugsigcxtceifiawwotwgxxfgphyv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156601.975913-380-103182756048607/AnsiballZ_copy.py'
Oct 11 04:23:23 compute-0 sudo[66317]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:23:23 compute-0 python3.9[66319]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760156601.975913-380-103182756048607/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:23:23 compute-0 sudo[66317]: pam_unix(sudo:session): session closed for user root
Oct 11 04:23:23 compute-0 sudo[66469]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmymrgbmiwfhptmrmfywihbtcsfzboqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156603.443835-395-93609078045091/AnsiballZ_file.py'
Oct 11 04:23:23 compute-0 sudo[66469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:23:24 compute-0 python3.9[66471]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:23:24 compute-0 sudo[66469]: pam_unix(sudo:session): session closed for user root
Oct 11 04:23:24 compute-0 sudo[66621]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzjzwlsowwqpryroxxehiaryamhmybtg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156604.2810051-403-24276012390475/AnsiballZ_command.py'
Oct 11 04:23:24 compute-0 sudo[66621]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:23:24 compute-0 python3.9[66623]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:23:24 compute-0 sudo[66621]: pam_unix(sudo:session): session closed for user root
Oct 11 04:23:25 compute-0 sudo[66780]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogizoccfjhcgpentpgslhfbkuhwzvdhj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156605.108977-411-64412658703533/AnsiballZ_blockinfile.py'
Oct 11 04:23:25 compute-0 sudo[66780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:23:25 compute-0 python3.9[66782]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:23:25 compute-0 sudo[66780]: pam_unix(sudo:session): session closed for user root
Oct 11 04:23:26 compute-0 sudo[66933]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzzrbnblxabsherglshkvkbnfufjxpib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156606.1481924-420-192564208578042/AnsiballZ_file.py'
Oct 11 04:23:26 compute-0 sudo[66933]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:23:26 compute-0 python3.9[66935]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:23:26 compute-0 sudo[66933]: pam_unix(sudo:session): session closed for user root
Oct 11 04:23:27 compute-0 sudo[67085]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alsswulavpcjtxvhcyicmdylzotsdktk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156606.7618322-420-39169157954527/AnsiballZ_file.py'
Oct 11 04:23:27 compute-0 sudo[67085]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:23:27 compute-0 python3.9[67087]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:23:27 compute-0 sudo[67085]: pam_unix(sudo:session): session closed for user root
Oct 11 04:23:28 compute-0 sudo[67237]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gcpmycozsirlcpuvlhpvxhkmzoweuxxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156607.4910777-435-143375301411580/AnsiballZ_mount.py'
Oct 11 04:23:28 compute-0 sudo[67237]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:23:28 compute-0 chronyd[54302]: Selected source 216.232.132.102 (pool.ntp.org)
Oct 11 04:23:28 compute-0 python3.9[67239]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct 11 04:23:28 compute-0 sudo[67237]: pam_unix(sudo:session): session closed for user root
Oct 11 04:23:28 compute-0 sudo[67390]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oahssmvamnfiqgvyowofxpdietenrbkm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156608.3587039-435-174651474806837/AnsiballZ_mount.py'
Oct 11 04:23:28 compute-0 sudo[67390]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:23:29 compute-0 python3.9[67392]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct 11 04:23:29 compute-0 sudo[67390]: pam_unix(sudo:session): session closed for user root
Oct 11 04:23:29 compute-0 sshd-session[59561]: Connection closed by 192.168.122.30 port 37180
Oct 11 04:23:29 compute-0 sshd-session[59558]: pam_unix(sshd:session): session closed for user zuul
Oct 11 04:23:29 compute-0 systemd[1]: session-15.scope: Deactivated successfully.
Oct 11 04:23:29 compute-0 systemd[1]: session-15.scope: Consumed 37.243s CPU time.
Oct 11 04:23:29 compute-0 systemd-logind[801]: Session 15 logged out. Waiting for processes to exit.
Oct 11 04:23:29 compute-0 systemd-logind[801]: Removed session 15.
Oct 11 04:23:34 compute-0 sshd-session[67419]: Accepted publickey for zuul from 192.168.122.30 port 33740 ssh2: ECDSA SHA256:fsXfQ2jtOYwhi5zevC2AykH2PVPp1BPcbFOVNPoZIaQ
Oct 11 04:23:34 compute-0 systemd-logind[801]: New session 16 of user zuul.
Oct 11 04:23:34 compute-0 systemd[1]: Started Session 16 of User zuul.
Oct 11 04:23:34 compute-0 sshd-session[67419]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 11 04:23:35 compute-0 sudo[67572]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zokswdhjhktofivfxrshouqldhwppuyv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156614.9658537-16-236649899128862/AnsiballZ_tempfile.py'
Oct 11 04:23:35 compute-0 sudo[67572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:23:35 compute-0 python3.9[67574]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Oct 11 04:23:35 compute-0 sudo[67572]: pam_unix(sudo:session): session closed for user root
Oct 11 04:23:36 compute-0 sudo[67724]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgrlsrchzrwxawcwknoonvtzwimriboo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156615.8931139-28-232765212418694/AnsiballZ_stat.py'
Oct 11 04:23:36 compute-0 sudo[67724]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:23:36 compute-0 python3.9[67726]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:23:36 compute-0 sudo[67724]: pam_unix(sudo:session): session closed for user root
Oct 11 04:23:37 compute-0 sudo[67876]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-appisdzixhqhyjjhmfssorhdmdnjtvvd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156616.8699925-38-212272753871034/AnsiballZ_setup.py'
Oct 11 04:23:37 compute-0 sudo[67876]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:23:37 compute-0 python3.9[67878]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 04:23:37 compute-0 sudo[67876]: pam_unix(sudo:session): session closed for user root
Oct 11 04:23:38 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct 11 04:23:38 compute-0 sudo[68030]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxixrgpydcflwehufjuehsvwbgazrtge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156618.0884688-47-31674270342831/AnsiballZ_blockinfile.py'
Oct 11 04:23:38 compute-0 sudo[68030]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:23:38 compute-0 python3.9[68032]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC98s6nG+nIFTH4d2pwEixa+DJEWOJ4blLMWa84WWX9ZzJX1vuA0e9J5uq6cH97m3O+InK1clAUjiVvvB7bYmuvrBCOXbOk7Hcq1CqYZe09e/jvGf8rsnAKJ50XqoIs9MsM9wzJnMPYBXRSTZEwgna4bcfIEyGg6C51MV4UkYlkXQtLQM4FkjcLwHgW1Gyr6vbc6yeKAl4kAxhgFKYlMGk5sWvV8yJ/SkMQyfjcTg9BqHEE5zDDU6893EPNAs+SK0NAR6OxpLhYHOLZJNPwtJh9awGVyIevc6TaXcoKDAi6bo6gQdBNNyGqgHOixvhHJRc6DVHHGLLEDHFdIbK2DpzrhuwAGuRjr1ab2VGI0eGz0ZAaOOsdG/N1nj08Gu2Ns7NelYH4PzBs+AA3e71Fo9z6GqEibMwJh/rVE73Qk0ihF0oltKiNLvdBnxdcTbHVc1bCjoW7qpqv/+8YmedxgmaXL0No8qXpTNTV/JC0S307AE6yIUpSl8jAzhFWPYUefLM=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIN/DSXtcK2e0dGGdt91oDzWKSAIegFjTFcuab+G+SEv4
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBM9xmo/d3xsMMtMrdBqm2I5UMewH3ZXERUGx/kC0Q3DIPzbn2sLVYLCJiUqzQvRwQaqqa+IS4GYn44enOiRErgI=
                                             create=True mode=0644 path=/tmp/ansible.yk48c6ku state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:23:38 compute-0 sudo[68030]: pam_unix(sudo:session): session closed for user root
Oct 11 04:23:39 compute-0 sudo[68182]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhkvsuhyobujiibchlsyyrepaqblukhu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156618.9979796-55-118585091930656/AnsiballZ_command.py'
Oct 11 04:23:39 compute-0 sudo[68182]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:23:39 compute-0 python3.9[68184]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.yk48c6ku' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:23:39 compute-0 sudo[68182]: pam_unix(sudo:session): session closed for user root
Oct 11 04:23:40 compute-0 sudo[68336]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vllxudejmgdttaueezwvwnywkfdgdzsx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156619.9367235-63-60366629626145/AnsiballZ_file.py'
Oct 11 04:23:40 compute-0 sudo[68336]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:23:40 compute-0 python3.9[68338]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.yk48c6ku state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:23:40 compute-0 sudo[68336]: pam_unix(sudo:session): session closed for user root
Oct 11 04:23:41 compute-0 sshd-session[67422]: Connection closed by 192.168.122.30 port 33740
Oct 11 04:23:41 compute-0 sshd-session[67419]: pam_unix(sshd:session): session closed for user zuul
Oct 11 04:23:41 compute-0 systemd[1]: session-16.scope: Deactivated successfully.
Oct 11 04:23:41 compute-0 systemd[1]: session-16.scope: Consumed 4.199s CPU time.
Oct 11 04:23:41 compute-0 systemd-logind[801]: Session 16 logged out. Waiting for processes to exit.
Oct 11 04:23:41 compute-0 systemd-logind[801]: Removed session 16.
Oct 11 04:23:45 compute-0 sshd-session[68363]: Accepted publickey for zuul from 192.168.122.30 port 44698 ssh2: ECDSA SHA256:fsXfQ2jtOYwhi5zevC2AykH2PVPp1BPcbFOVNPoZIaQ
Oct 11 04:23:45 compute-0 systemd-logind[801]: New session 17 of user zuul.
Oct 11 04:23:45 compute-0 systemd[1]: Started Session 17 of User zuul.
Oct 11 04:23:45 compute-0 sshd-session[68363]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 11 04:23:46 compute-0 python3.9[68516]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 04:23:47 compute-0 sudo[68670]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wknkehtfvdeoztkbisxohrjkihylgpfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156627.233022-32-135667622673843/AnsiballZ_systemd.py'
Oct 11 04:23:47 compute-0 sudo[68670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:23:48 compute-0 python3.9[68672]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct 11 04:23:48 compute-0 sudo[68670]: pam_unix(sudo:session): session closed for user root
Oct 11 04:23:48 compute-0 sudo[68824]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hfzarwmdjzvxwampvfdpjoeeytigfpbo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156628.446256-40-140290151026821/AnsiballZ_systemd.py'
Oct 11 04:23:48 compute-0 sudo[68824]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:23:49 compute-0 python3.9[68826]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 11 04:23:49 compute-0 sudo[68824]: pam_unix(sudo:session): session closed for user root
Oct 11 04:23:49 compute-0 sudo[68977]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbekdwcypzyouuwlttydhojqptrxvfpo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156629.3157415-49-228461266691095/AnsiballZ_command.py'
Oct 11 04:23:49 compute-0 sudo[68977]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:23:50 compute-0 python3.9[68979]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:23:50 compute-0 sudo[68977]: pam_unix(sudo:session): session closed for user root
Oct 11 04:23:50 compute-0 sudo[69130]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdkqlfmowpcojdmkeszhftwsqvvxdyhz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156630.2258391-57-128171788734075/AnsiballZ_stat.py'
Oct 11 04:23:50 compute-0 sudo[69130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:23:50 compute-0 python3.9[69132]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:23:50 compute-0 sudo[69130]: pam_unix(sudo:session): session closed for user root
Oct 11 04:23:51 compute-0 sudo[69284]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aopwyxnivoxavwsjggbkfhwcxnzkwzof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156631.1487834-65-89944433260227/AnsiballZ_command.py'
Oct 11 04:23:51 compute-0 sudo[69284]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:23:51 compute-0 python3.9[69286]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:23:51 compute-0 sudo[69284]: pam_unix(sudo:session): session closed for user root
Oct 11 04:23:52 compute-0 sudo[69439]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldvwuhxkgfbqqnancscfmbkpzjhxgqmu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156631.8428016-73-138570033166758/AnsiballZ_file.py'
Oct 11 04:23:52 compute-0 sudo[69439]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:23:52 compute-0 python3.9[69441]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:23:52 compute-0 sudo[69439]: pam_unix(sudo:session): session closed for user root
Oct 11 04:23:52 compute-0 sshd-session[68366]: Connection closed by 192.168.122.30 port 44698
Oct 11 04:23:52 compute-0 sshd-session[68363]: pam_unix(sshd:session): session closed for user zuul
Oct 11 04:23:52 compute-0 systemd[1]: session-17.scope: Deactivated successfully.
Oct 11 04:23:52 compute-0 systemd[1]: session-17.scope: Consumed 5.032s CPU time.
Oct 11 04:23:52 compute-0 systemd-logind[801]: Session 17 logged out. Waiting for processes to exit.
Oct 11 04:23:52 compute-0 systemd-logind[801]: Removed session 17.
Oct 11 04:23:57 compute-0 sshd-session[69467]: Accepted publickey for zuul from 192.168.122.30 port 34274 ssh2: ECDSA SHA256:fsXfQ2jtOYwhi5zevC2AykH2PVPp1BPcbFOVNPoZIaQ
Oct 11 04:23:57 compute-0 systemd-logind[801]: New session 18 of user zuul.
Oct 11 04:23:57 compute-0 systemd[1]: Started Session 18 of User zuul.
Oct 11 04:23:57 compute-0 sshd-session[69467]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 11 04:23:58 compute-0 python3.9[69620]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 04:23:59 compute-0 sudo[69774]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hlpwjvobkixtsukwyqbharisjrzptflv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156639.2854536-34-207730341767449/AnsiballZ_setup.py'
Oct 11 04:23:59 compute-0 sudo[69774]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:23:59 compute-0 python3.9[69776]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 11 04:24:00 compute-0 sudo[69774]: pam_unix(sudo:session): session closed for user root
Oct 11 04:24:00 compute-0 sudo[69858]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrrqiqjkqupdzjdmzeqvsvwxtfsvfuqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156639.2854536-34-207730341767449/AnsiballZ_dnf.py'
Oct 11 04:24:00 compute-0 sudo[69858]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:24:00 compute-0 python3.9[69860]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct 11 04:24:02 compute-0 sudo[69858]: pam_unix(sudo:session): session closed for user root
Oct 11 04:24:02 compute-0 python3.9[70011]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:24:04 compute-0 python3.9[70162]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 11 04:24:05 compute-0 python3.9[70312]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:24:05 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 11 04:24:05 compute-0 python3.9[70463]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:24:06 compute-0 sshd-session[69470]: Connection closed by 192.168.122.30 port 34274
Oct 11 04:24:06 compute-0 sshd-session[69467]: pam_unix(sshd:session): session closed for user zuul
Oct 11 04:24:06 compute-0 systemd[1]: session-18.scope: Deactivated successfully.
Oct 11 04:24:06 compute-0 systemd[1]: session-18.scope: Consumed 6.430s CPU time.
Oct 11 04:24:06 compute-0 systemd-logind[801]: Session 18 logged out. Waiting for processes to exit.
Oct 11 04:24:06 compute-0 systemd-logind[801]: Removed session 18.
Oct 11 04:24:13 compute-0 sshd-session[70488]: Accepted publickey for zuul from 38.102.83.192 port 59836 ssh2: RSA SHA256:dWlyUpO/mrqMp/eYKItgj5U2Mvcj3EJpSdFVtC0zru4
Oct 11 04:24:13 compute-0 systemd-logind[801]: New session 19 of user zuul.
Oct 11 04:24:13 compute-0 systemd[1]: Started Session 19 of User zuul.
Oct 11 04:24:13 compute-0 sshd-session[70488]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 11 04:24:13 compute-0 sudo[70564]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdjtmobphsnicgfkagvjebkwhqtpeqem ; /usr/bin/python3'
Oct 11 04:24:13 compute-0 sudo[70564]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:24:13 compute-0 useradd[70568]: new group: name=ceph-admin, GID=42478
Oct 11 04:24:13 compute-0 useradd[70568]: new user: name=ceph-admin, UID=42477, GID=42478, home=/home/ceph-admin, shell=/bin/bash, from=none
Oct 11 04:24:13 compute-0 sudo[70564]: pam_unix(sudo:session): session closed for user root
Oct 11 04:24:14 compute-0 sudo[70650]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sthklmakgtvydvpywmjzptupvlcnibtu ; /usr/bin/python3'
Oct 11 04:24:14 compute-0 sudo[70650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:24:14 compute-0 sudo[70650]: pam_unix(sudo:session): session closed for user root
Oct 11 04:24:14 compute-0 sudo[70723]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcvjgoccpwwnvrucifmxdljnlsvghywm ; /usr/bin/python3'
Oct 11 04:24:14 compute-0 sudo[70723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:24:14 compute-0 sudo[70723]: pam_unix(sudo:session): session closed for user root
Oct 11 04:24:15 compute-0 sudo[70773]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znobmberhoyynfmaiudirapttzcgrfkf ; /usr/bin/python3'
Oct 11 04:24:15 compute-0 sudo[70773]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:24:15 compute-0 sudo[70773]: pam_unix(sudo:session): session closed for user root
Oct 11 04:24:15 compute-0 sudo[70799]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddohaxjvgxrwfgcbozvwimciyqftefef ; /usr/bin/python3'
Oct 11 04:24:15 compute-0 sudo[70799]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:24:15 compute-0 sudo[70799]: pam_unix(sudo:session): session closed for user root
Oct 11 04:24:15 compute-0 sudo[70825]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zaqqopgruwmbmohauslszxlzgeiosidb ; /usr/bin/python3'
Oct 11 04:24:15 compute-0 sudo[70825]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:24:16 compute-0 sudo[70825]: pam_unix(sudo:session): session closed for user root
Oct 11 04:24:16 compute-0 sudo[70851]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ayhdirivefqowkhezcdjaqzpbwklirll ; /usr/bin/python3'
Oct 11 04:24:16 compute-0 sudo[70851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:24:16 compute-0 sudo[70851]: pam_unix(sudo:session): session closed for user root
Oct 11 04:24:16 compute-0 sudo[70929]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlcogqwireqbmetnmqnygonmclxygdry ; /usr/bin/python3'
Oct 11 04:24:16 compute-0 sudo[70929]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:24:17 compute-0 sudo[70929]: pam_unix(sudo:session): session closed for user root
Oct 11 04:24:17 compute-0 sudo[71002]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymgpjotuobakcgluozzlgvdspdyjklrq ; /usr/bin/python3'
Oct 11 04:24:17 compute-0 sudo[71002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:24:17 compute-0 sudo[71002]: pam_unix(sudo:session): session closed for user root
Oct 11 04:24:17 compute-0 sudo[71104]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzowbrgenhstpjtedryjxefqfdiosgia ; /usr/bin/python3'
Oct 11 04:24:17 compute-0 sudo[71104]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:24:18 compute-0 sudo[71104]: pam_unix(sudo:session): session closed for user root
Oct 11 04:24:18 compute-0 sudo[71177]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhnfynnqppffiflblfzhybbseslfqita ; /usr/bin/python3'
Oct 11 04:24:18 compute-0 sudo[71177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:24:18 compute-0 sudo[71177]: pam_unix(sudo:session): session closed for user root
Oct 11 04:24:18 compute-0 sudo[71227]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwfewjtzomqrcszkheyxsvdpcludvmec ; /usr/bin/python3'
Oct 11 04:24:18 compute-0 sudo[71227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:24:19 compute-0 python3[71229]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 04:24:20 compute-0 sudo[71227]: pam_unix(sudo:session): session closed for user root
Oct 11 04:24:20 compute-0 sudo[71322]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhjesfklchffgzdzvlcmhphgrsvvbwfc ; /usr/bin/python3'
Oct 11 04:24:20 compute-0 sudo[71322]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:24:20 compute-0 python3[71324]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 11 04:24:21 compute-0 sudo[71322]: pam_unix(sudo:session): session closed for user root
Oct 11 04:24:22 compute-0 sudo[71349]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kodxzszcraqkogtidqpkzwpuqexrmari ; /usr/bin/python3'
Oct 11 04:24:22 compute-0 sudo[71349]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:24:22 compute-0 python3[71351]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:24:22 compute-0 sudo[71349]: pam_unix(sudo:session): session closed for user root
Oct 11 04:24:22 compute-0 sudo[71375]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bkuzfingcsgfkjgcpyszurvhsnzotwau ; /usr/bin/python3'
Oct 11 04:24:22 compute-0 sudo[71375]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:24:22 compute-0 python3[71377]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G
                                          losetup /dev/loop3 /var/lib/ceph-osd-0.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:24:22 compute-0 kernel: loop: module loaded
Oct 11 04:24:22 compute-0 kernel: loop3: detected capacity change from 0 to 41943040
Oct 11 04:24:22 compute-0 sudo[71375]: pam_unix(sudo:session): session closed for user root
Oct 11 04:24:22 compute-0 sudo[71410]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvmkgtknhtpqfkerbkydfizqtjpzybfe ; /usr/bin/python3'
Oct 11 04:24:22 compute-0 sudo[71410]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:24:22 compute-0 python3[71412]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3
                                          vgcreate ceph_vg0 /dev/loop3
                                          lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:24:23 compute-0 lvm[71415]: PV /dev/loop3 not used.
Oct 11 04:24:23 compute-0 lvm[71417]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 11 04:24:23 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Oct 11 04:24:23 compute-0 lvm[71423]:   1 logical volume(s) in volume group "ceph_vg0" now active
Oct 11 04:24:23 compute-0 lvm[71427]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 11 04:24:23 compute-0 lvm[71427]: VG ceph_vg0 finished
Oct 11 04:24:23 compute-0 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Oct 11 04:24:23 compute-0 sudo[71410]: pam_unix(sudo:session): session closed for user root
Oct 11 04:24:23 compute-0 sudo[71503]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yysjergsziwxjlfhemgjrampbxkpzbij ; /usr/bin/python3'
Oct 11 04:24:23 compute-0 sudo[71503]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:24:23 compute-0 python3[71505]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 11 04:24:23 compute-0 sudo[71503]: pam_unix(sudo:session): session closed for user root
Oct 11 04:24:24 compute-0 sudo[71576]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzmgvugbtngtomzxbjjlaqjkipxujamt ; /usr/bin/python3'
Oct 11 04:24:24 compute-0 sudo[71576]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:24:24 compute-0 python3[71578]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760156663.4462519-32735-28043579261534/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:24:24 compute-0 sudo[71576]: pam_unix(sudo:session): session closed for user root
Oct 11 04:24:24 compute-0 sudo[71626]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgzxmkmitaypacslkqrmluzfufibgktt ; /usr/bin/python3'
Oct 11 04:24:24 compute-0 sudo[71626]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:24:24 compute-0 python3[71628]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:24:24 compute-0 systemd[1]: Reloading.
Oct 11 04:24:24 compute-0 systemd-rc-local-generator[71659]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:24:24 compute-0 systemd-sysv-generator[71663]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:24:25 compute-0 systemd[1]: Starting Ceph OSD losetup...
Oct 11 04:24:25 compute-0 bash[71668]: /dev/loop3: [64513]:4555664 (/var/lib/ceph-osd-0.img)
Oct 11 04:24:25 compute-0 systemd[1]: Finished Ceph OSD losetup.
Oct 11 04:24:25 compute-0 lvm[71669]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 11 04:24:25 compute-0 lvm[71669]: VG ceph_vg0 finished
Oct 11 04:24:25 compute-0 sudo[71626]: pam_unix(sudo:session): session closed for user root
Oct 11 04:24:25 compute-0 sudo[71693]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qytdpfqrjvvxdyfcjhajgyclnxpyeexz ; /usr/bin/python3'
Oct 11 04:24:25 compute-0 sudo[71693]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:24:25 compute-0 python3[71695]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 11 04:24:26 compute-0 sudo[71693]: pam_unix(sudo:session): session closed for user root
Oct 11 04:24:26 compute-0 sudo[71720]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mawajblfjilqvgymqvctbzofxdqmqnxz ; /usr/bin/python3'
Oct 11 04:24:26 compute-0 sudo[71720]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:24:26 compute-0 python3[71722]: ansible-ansible.builtin.stat Invoked with path=/dev/loop4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:24:26 compute-0 sudo[71720]: pam_unix(sudo:session): session closed for user root
Oct 11 04:24:27 compute-0 sudo[71746]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shpguwbwaruppyzclxlrgtidrutsbaic ; /usr/bin/python3'
Oct 11 04:24:27 compute-0 sudo[71746]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:24:27 compute-0 python3[71748]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-1.img bs=1 count=0 seek=20G
                                          losetup /dev/loop4 /var/lib/ceph-osd-1.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:24:27 compute-0 kernel: loop4: detected capacity change from 0 to 41943040
Oct 11 04:24:27 compute-0 sudo[71746]: pam_unix(sudo:session): session closed for user root
Oct 11 04:24:27 compute-0 sudo[71778]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-holufchxvhzfzqeexphqgqpseqrmqvva ; /usr/bin/python3'
Oct 11 04:24:27 compute-0 sudo[71778]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:24:27 compute-0 python3[71780]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop4
                                          vgcreate ceph_vg1 /dev/loop4
                                          lvcreate -n ceph_lv1 -l +100%FREE ceph_vg1
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:24:27 compute-0 lvm[71783]: PV /dev/loop4 not used.
Oct 11 04:24:27 compute-0 lvm[71793]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Oct 11 04:24:27 compute-0 sudo[71778]: pam_unix(sudo:session): session closed for user root
Oct 11 04:24:27 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg1.
Oct 11 04:24:27 compute-0 lvm[71795]:   1 logical volume(s) in volume group "ceph_vg1" now active
Oct 11 04:24:28 compute-0 systemd[1]: lvm-activate-ceph_vg1.service: Deactivated successfully.
Oct 11 04:24:28 compute-0 sudo[71871]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zuwywangazrbskxdnybwxqvwgesdyijd ; /usr/bin/python3'
Oct 11 04:24:28 compute-0 sudo[71871]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:24:28 compute-0 python3[71873]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-1.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 11 04:24:28 compute-0 sudo[71871]: pam_unix(sudo:session): session closed for user root
Oct 11 04:24:28 compute-0 sudo[71944]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrmgbiimmdapynbibkglgtjeljullkwh ; /usr/bin/python3'
Oct 11 04:24:28 compute-0 sudo[71944]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:24:28 compute-0 python3[71946]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760156668.0573437-32762-250016084128706/source dest=/etc/systemd/system/ceph-osd-losetup-1.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=19612168ea279db4171b94ee1f8625de1ec44b58 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:24:28 compute-0 sudo[71944]: pam_unix(sudo:session): session closed for user root
Oct 11 04:24:29 compute-0 sudo[71994]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmqshhsctgqnakexcdqewmerizmmmekg ; /usr/bin/python3'
Oct 11 04:24:29 compute-0 sudo[71994]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:24:29 compute-0 python3[71996]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-1.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:24:29 compute-0 systemd[1]: Reloading.
Oct 11 04:24:29 compute-0 systemd-rc-local-generator[72026]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:24:29 compute-0 systemd-sysv-generator[72029]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:24:29 compute-0 systemd[1]: Starting Ceph OSD losetup...
Oct 11 04:24:29 compute-0 bash[72036]: /dev/loop4: [64513]:4727397 (/var/lib/ceph-osd-1.img)
Oct 11 04:24:29 compute-0 systemd[1]: Finished Ceph OSD losetup.
Oct 11 04:24:29 compute-0 lvm[72037]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Oct 11 04:24:29 compute-0 lvm[72037]: VG ceph_vg1 finished
Oct 11 04:24:29 compute-0 sudo[71994]: pam_unix(sudo:session): session closed for user root
Oct 11 04:24:29 compute-0 sudo[72061]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hykhnjojmqbabvwaxqirbaqtxoxzkjxh ; /usr/bin/python3'
Oct 11 04:24:29 compute-0 sudo[72061]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:24:30 compute-0 python3[72063]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 11 04:24:31 compute-0 sudo[72061]: pam_unix(sudo:session): session closed for user root
Oct 11 04:24:31 compute-0 sudo[72088]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfffdhetblwqjaqwbvoemttzkgswkyao ; /usr/bin/python3'
Oct 11 04:24:31 compute-0 sudo[72088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:24:31 compute-0 python3[72090]: ansible-ansible.builtin.stat Invoked with path=/dev/loop5 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:24:31 compute-0 sudo[72088]: pam_unix(sudo:session): session closed for user root
Oct 11 04:24:31 compute-0 sudo[72114]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmnlkzwikesejolxmgeqokyixcebkhgt ; /usr/bin/python3'
Oct 11 04:24:31 compute-0 sudo[72114]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:24:31 compute-0 python3[72116]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-2.img bs=1 count=0 seek=20G
                                          losetup /dev/loop5 /var/lib/ceph-osd-2.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:24:31 compute-0 kernel: loop5: detected capacity change from 0 to 41943040
Oct 11 04:24:31 compute-0 sudo[72114]: pam_unix(sudo:session): session closed for user root
Oct 11 04:24:32 compute-0 sudo[72146]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqjtvhnvrvyftxwvsracqapjsdtmnlgo ; /usr/bin/python3'
Oct 11 04:24:32 compute-0 sudo[72146]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:24:32 compute-0 python3[72148]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop5
                                          vgcreate ceph_vg2 /dev/loop5
                                          lvcreate -n ceph_lv2 -l +100%FREE ceph_vg2
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:24:32 compute-0 lvm[72151]: PV /dev/loop5 not used.
Oct 11 04:24:32 compute-0 lvm[72153]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Oct 11 04:24:32 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg2.
Oct 11 04:24:32 compute-0 lvm[72156]:   1 logical volume(s) in volume group "ceph_vg2" now active
Oct 11 04:24:32 compute-0 lvm[72163]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Oct 11 04:24:32 compute-0 lvm[72163]: VG ceph_vg2 finished
Oct 11 04:24:32 compute-0 systemd[1]: lvm-activate-ceph_vg2.service: Deactivated successfully.
Oct 11 04:24:32 compute-0 sudo[72146]: pam_unix(sudo:session): session closed for user root
Oct 11 04:24:32 compute-0 sudo[72239]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jyjonjodhpwkopkqyzyjeovwtuembtxj ; /usr/bin/python3'
Oct 11 04:24:32 compute-0 sudo[72239]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:24:33 compute-0 python3[72241]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-2.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 11 04:24:33 compute-0 sudo[72239]: pam_unix(sudo:session): session closed for user root
Oct 11 04:24:33 compute-0 sudo[72312]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhanrgpzyqdvmyguholxjlpvqqteqqhr ; /usr/bin/python3'
Oct 11 04:24:33 compute-0 sudo[72312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:24:33 compute-0 python3[72314]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760156672.7598827-32789-159159097983409/source dest=/etc/systemd/system/ceph-osd-losetup-2.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=4c5b1bc5693c499ffe2edaa97d63f5df7075d845 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:24:33 compute-0 sudo[72312]: pam_unix(sudo:session): session closed for user root
Oct 11 04:24:33 compute-0 sudo[72362]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sexlqxpgxzopedgpdxednpwsuxtqxnsk ; /usr/bin/python3'
Oct 11 04:24:33 compute-0 sudo[72362]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:24:33 compute-0 python3[72364]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-2.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:24:34 compute-0 systemd[1]: Reloading.
Oct 11 04:24:34 compute-0 systemd-rc-local-generator[72393]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:24:34 compute-0 systemd-sysv-generator[72396]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:24:34 compute-0 systemd[1]: Starting Ceph OSD losetup...
Oct 11 04:24:34 compute-0 bash[72404]: /dev/loop5: [64513]:4812250 (/var/lib/ceph-osd-2.img)
Oct 11 04:24:34 compute-0 systemd[1]: Finished Ceph OSD losetup.
Oct 11 04:24:34 compute-0 lvm[72405]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Oct 11 04:24:34 compute-0 lvm[72405]: VG ceph_vg2 finished
Oct 11 04:24:34 compute-0 sudo[72362]: pam_unix(sudo:session): session closed for user root
Oct 11 04:24:36 compute-0 python3[72429]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 04:24:38 compute-0 sudo[72520]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owuyrpdryofctxcwihefianozjurleyu ; /usr/bin/python3'
Oct 11 04:24:38 compute-0 sudo[72520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:24:38 compute-0 python3[72522]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 11 04:24:39 compute-0 groupadd[72528]: group added to /etc/group: name=cephadm, GID=992
Oct 11 04:24:39 compute-0 groupadd[72528]: group added to /etc/gshadow: name=cephadm
Oct 11 04:24:39 compute-0 groupadd[72528]: new group: name=cephadm, GID=992
Oct 11 04:24:39 compute-0 useradd[72535]: new user: name=cephadm, UID=992, GID=992, home=/var/lib/cephadm, shell=/bin/bash, from=none
Oct 11 04:24:40 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 11 04:24:40 compute-0 systemd[1]: Starting man-db-cache-update.service...
Oct 11 04:24:40 compute-0 sudo[72520]: pam_unix(sudo:session): session closed for user root
Oct 11 04:24:40 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 11 04:24:40 compute-0 systemd[1]: Finished man-db-cache-update.service.
Oct 11 04:24:40 compute-0 systemd[1]: run-rd53292fff8a640b7b4e122fbd85c08f7.service: Deactivated successfully.
Oct 11 04:24:40 compute-0 sudo[72635]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yuiioumrktutqqdoxcmuxiqwbdxxdxwp ; /usr/bin/python3'
Oct 11 04:24:40 compute-0 sudo[72635]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:24:40 compute-0 python3[72638]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:24:40 compute-0 sudo[72635]: pam_unix(sudo:session): session closed for user root
Oct 11 04:24:40 compute-0 sudo[72664]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmuaztgeuqgnsfdzjrdrbtuufzmohqke ; /usr/bin/python3'
Oct 11 04:24:40 compute-0 sudo[72664]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:24:41 compute-0 python3[72666]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:24:41 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 11 04:24:41 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 11 04:24:41 compute-0 sudo[72664]: pam_unix(sudo:session): session closed for user root
Oct 11 04:24:41 compute-0 sudo[72728]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bquhwclloggsavockgiyidcxemcbmydu ; /usr/bin/python3'
Oct 11 04:24:41 compute-0 sudo[72728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:24:41 compute-0 python3[72730]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:24:41 compute-0 sudo[72728]: pam_unix(sudo:session): session closed for user root
Oct 11 04:24:42 compute-0 sudo[72754]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlyddklveqphnxcmyverbgaiizivdrnd ; /usr/bin/python3'
Oct 11 04:24:42 compute-0 sudo[72754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:24:42 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 11 04:24:42 compute-0 python3[72756]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:24:42 compute-0 sudo[72754]: pam_unix(sudo:session): session closed for user root
Oct 11 04:24:42 compute-0 sudo[72832]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxydsidztemwnwbxbuqwcwgwpltarjwn ; /usr/bin/python3'
Oct 11 04:24:42 compute-0 sudo[72832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:24:42 compute-0 python3[72834]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 11 04:24:42 compute-0 sudo[72832]: pam_unix(sudo:session): session closed for user root
Oct 11 04:24:43 compute-0 sudo[72905]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcflzmpuelqxnstbpdyvkonalccmsdpv ; /usr/bin/python3'
Oct 11 04:24:43 compute-0 sudo[72905]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:24:43 compute-0 python3[72907]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760156682.6210303-32936-209368511953551/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=bb83c53af4ffd926a3f1eafe26a8be437df6401f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:24:43 compute-0 sudo[72905]: pam_unix(sudo:session): session closed for user root
Oct 11 04:24:43 compute-0 sudo[73007]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrpkmiirqgyykmyobcxwszacuicadkjn ; /usr/bin/python3'
Oct 11 04:24:43 compute-0 sudo[73007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:24:44 compute-0 python3[73009]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 11 04:24:44 compute-0 sudo[73007]: pam_unix(sudo:session): session closed for user root
Oct 11 04:24:44 compute-0 sudo[73080]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szctcofxorzavvfguvgahgwrkyvgykjv ; /usr/bin/python3'
Oct 11 04:24:44 compute-0 sudo[73080]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:24:44 compute-0 python3[73082]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760156683.787834-32954-25613950962818/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:24:44 compute-0 sudo[73080]: pam_unix(sudo:session): session closed for user root
Oct 11 04:24:44 compute-0 sudo[73130]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmwejegvezzecwbrtenhqejstypcazal ; /usr/bin/python3'
Oct 11 04:24:44 compute-0 sudo[73130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:24:45 compute-0 python3[73132]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:24:45 compute-0 sudo[73130]: pam_unix(sudo:session): session closed for user root
Oct 11 04:24:45 compute-0 sudo[73158]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfvsndgawzoeegrllnwniywlagnlkwxq ; /usr/bin/python3'
Oct 11 04:24:45 compute-0 sudo[73158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:24:45 compute-0 python3[73160]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:24:45 compute-0 sudo[73158]: pam_unix(sudo:session): session closed for user root
Oct 11 04:24:45 compute-0 sudo[73187]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-asdglwxelvokuamedeeedjwyzzxupplg ; /usr/bin/python3'
Oct 11 04:24:45 compute-0 sudo[73187]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:24:45 compute-0 python3[73189]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:24:45 compute-0 sudo[73187]: pam_unix(sudo:session): session closed for user root
Oct 11 04:24:45 compute-0 sudo[73215]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrlkkwvinefbzozeqyjcjbebwvudkhco ; /usr/bin/python3'
Oct 11 04:24:45 compute-0 sudo[73215]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:24:46 compute-0 python3[73217]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --skip-prepare-host --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a --config /home/ceph-admin/assimilate_ceph.conf \--single-host-defaults \--skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:24:46 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 11 04:24:46 compute-0 sshd-session[73233]: Accepted publickey for ceph-admin from 192.168.122.100 port 51580 ssh2: RSA SHA256:9gGYMYwZ28p/VOGMeKhrIP0HSYfKPCU43XYj7dXSptk
Oct 11 04:24:46 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Oct 11 04:24:46 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Oct 11 04:24:46 compute-0 systemd-logind[801]: New session 20 of user ceph-admin.
Oct 11 04:24:46 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Oct 11 04:24:46 compute-0 systemd[1]: Starting User Manager for UID 42477...
Oct 11 04:24:46 compute-0 systemd[73237]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 11 04:24:46 compute-0 systemd[73237]: Queued start job for default target Main User Target.
Oct 11 04:24:46 compute-0 systemd[73237]: Created slice User Application Slice.
Oct 11 04:24:46 compute-0 systemd[73237]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 11 04:24:46 compute-0 systemd[73237]: Started Daily Cleanup of User's Temporary Directories.
Oct 11 04:24:46 compute-0 systemd[73237]: Reached target Paths.
Oct 11 04:24:46 compute-0 systemd[73237]: Reached target Timers.
Oct 11 04:24:46 compute-0 systemd[73237]: Starting D-Bus User Message Bus Socket...
Oct 11 04:24:46 compute-0 systemd[73237]: Starting Create User's Volatile Files and Directories...
Oct 11 04:24:46 compute-0 systemd[73237]: Finished Create User's Volatile Files and Directories.
Oct 11 04:24:46 compute-0 systemd[73237]: Listening on D-Bus User Message Bus Socket.
Oct 11 04:24:46 compute-0 systemd[73237]: Reached target Sockets.
Oct 11 04:24:46 compute-0 systemd[73237]: Reached target Basic System.
Oct 11 04:24:46 compute-0 systemd[1]: Started User Manager for UID 42477.
Oct 11 04:24:46 compute-0 systemd[73237]: Reached target Main User Target.
Oct 11 04:24:46 compute-0 systemd[73237]: Startup finished in 165ms.
Oct 11 04:24:46 compute-0 systemd[1]: Started Session 20 of User ceph-admin.
Oct 11 04:24:46 compute-0 sshd-session[73233]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 11 04:24:46 compute-0 sudo[73254]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/echo
Oct 11 04:24:46 compute-0 sudo[73254]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:24:46 compute-0 sudo[73254]: pam_unix(sudo:session): session closed for user root
Oct 11 04:24:46 compute-0 sshd-session[73253]: Received disconnect from 192.168.122.100 port 51580:11: disconnected by user
Oct 11 04:24:46 compute-0 sshd-session[73253]: Disconnected from user ceph-admin 192.168.122.100 port 51580
Oct 11 04:24:46 compute-0 sshd-session[73233]: pam_unix(sshd:session): session closed for user ceph-admin
Oct 11 04:24:46 compute-0 systemd[1]: session-20.scope: Deactivated successfully.
Oct 11 04:24:46 compute-0 systemd-logind[801]: Session 20 logged out. Waiting for processes to exit.
Oct 11 04:24:46 compute-0 systemd-logind[801]: Removed session 20.
Oct 11 04:24:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat3033210033-merged.mount: Deactivated successfully.
Oct 11 04:24:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat3033210033-lower\x2dmapped.mount: Deactivated successfully.
Oct 11 04:24:57 compute-0 systemd[1]: Stopping User Manager for UID 42477...
Oct 11 04:24:57 compute-0 systemd[73237]: Activating special unit Exit the Session...
Oct 11 04:24:57 compute-0 systemd[73237]: Stopped target Main User Target.
Oct 11 04:24:57 compute-0 systemd[73237]: Stopped target Basic System.
Oct 11 04:24:57 compute-0 systemd[73237]: Stopped target Paths.
Oct 11 04:24:57 compute-0 systemd[73237]: Stopped target Sockets.
Oct 11 04:24:57 compute-0 systemd[73237]: Stopped target Timers.
Oct 11 04:24:57 compute-0 systemd[73237]: Stopped Mark boot as successful after the user session has run 2 minutes.
Oct 11 04:24:57 compute-0 systemd[73237]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 11 04:24:57 compute-0 systemd[73237]: Closed D-Bus User Message Bus Socket.
Oct 11 04:24:57 compute-0 systemd[73237]: Stopped Create User's Volatile Files and Directories.
Oct 11 04:24:57 compute-0 systemd[73237]: Removed slice User Application Slice.
Oct 11 04:24:57 compute-0 systemd[73237]: Reached target Shutdown.
Oct 11 04:24:57 compute-0 systemd[73237]: Finished Exit the Session.
Oct 11 04:24:57 compute-0 systemd[73237]: Reached target Exit the Session.
Oct 11 04:24:57 compute-0 systemd[1]: user@42477.service: Deactivated successfully.
Oct 11 04:24:57 compute-0 systemd[1]: Stopped User Manager for UID 42477.
Oct 11 04:24:57 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Oct 11 04:24:57 compute-0 systemd[1]: run-user-42477.mount: Deactivated successfully.
Oct 11 04:24:57 compute-0 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Oct 11 04:24:57 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Oct 11 04:24:57 compute-0 systemd[1]: Removed slice User Slice of UID 42477.
Oct 11 04:25:01 compute-0 podman[73292]: 2025-10-11 04:25:01.198724037 +0000 UTC m=+14.218105429 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:25:01 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 11 04:25:01 compute-0 podman[73353]: 2025-10-11 04:25:01.320537642 +0000 UTC m=+0.087608877 container create dd98a90f36ecb94b76bb8d725b623ec6f00b71d9a842e2f47d9736b50b6ffc00 (image=quay.io/ceph/ceph:v18, name=silly_rhodes, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 11 04:25:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-volatile\x2dcheck3582494435-merged.mount: Deactivated successfully.
Oct 11 04:25:01 compute-0 podman[73353]: 2025-10-11 04:25:01.279031988 +0000 UTC m=+0.046103263 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:25:01 compute-0 systemd[1]: Created slice Virtual Machine and Container Slice.
Oct 11 04:25:01 compute-0 systemd[1]: Started libpod-conmon-dd98a90f36ecb94b76bb8d725b623ec6f00b71d9a842e2f47d9736b50b6ffc00.scope.
Oct 11 04:25:01 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:25:01 compute-0 podman[73353]: 2025-10-11 04:25:01.442634674 +0000 UTC m=+0.209705919 container init dd98a90f36ecb94b76bb8d725b623ec6f00b71d9a842e2f47d9736b50b6ffc00 (image=quay.io/ceph/ceph:v18, name=silly_rhodes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 11 04:25:01 compute-0 podman[73353]: 2025-10-11 04:25:01.450353741 +0000 UTC m=+0.217424966 container start dd98a90f36ecb94b76bb8d725b623ec6f00b71d9a842e2f47d9736b50b6ffc00 (image=quay.io/ceph/ceph:v18, name=silly_rhodes, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 11 04:25:01 compute-0 podman[73353]: 2025-10-11 04:25:01.454437015 +0000 UTC m=+0.221508310 container attach dd98a90f36ecb94b76bb8d725b623ec6f00b71d9a842e2f47d9736b50b6ffc00 (image=quay.io/ceph/ceph:v18, name=silly_rhodes, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:25:01 compute-0 silly_rhodes[73369]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Oct 11 04:25:01 compute-0 systemd[1]: libpod-dd98a90f36ecb94b76bb8d725b623ec6f00b71d9a842e2f47d9736b50b6ffc00.scope: Deactivated successfully.
Oct 11 04:25:01 compute-0 podman[73353]: 2025-10-11 04:25:01.766203424 +0000 UTC m=+0.533274659 container died dd98a90f36ecb94b76bb8d725b623ec6f00b71d9a842e2f47d9736b50b6ffc00 (image=quay.io/ceph/ceph:v18, name=silly_rhodes, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:25:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-8873ae76040052ac29998ec4fcb8c2cc612402968eac8d113648ce38f42a9750-merged.mount: Deactivated successfully.
Oct 11 04:25:01 compute-0 podman[73353]: 2025-10-11 04:25:01.833461059 +0000 UTC m=+0.600532304 container remove dd98a90f36ecb94b76bb8d725b623ec6f00b71d9a842e2f47d9736b50b6ffc00 (image=quay.io/ceph/ceph:v18, name=silly_rhodes, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:25:01 compute-0 systemd[1]: libpod-conmon-dd98a90f36ecb94b76bb8d725b623ec6f00b71d9a842e2f47d9736b50b6ffc00.scope: Deactivated successfully.
Oct 11 04:25:01 compute-0 podman[73388]: 2025-10-11 04:25:01.951559069 +0000 UTC m=+0.089537071 container create c90cfb0c30698fa0bfb36a4aca030a2d9edd382334f15667f341a9fdd4f76398 (image=quay.io/ceph/ceph:v18, name=romantic_khorana, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 11 04:25:01 compute-0 podman[73388]: 2025-10-11 04:25:01.889575062 +0000 UTC m=+0.027553123 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:25:02 compute-0 systemd[1]: Started libpod-conmon-c90cfb0c30698fa0bfb36a4aca030a2d9edd382334f15667f341a9fdd4f76398.scope.
Oct 11 04:25:02 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:25:02 compute-0 podman[73388]: 2025-10-11 04:25:02.092598033 +0000 UTC m=+0.230576044 container init c90cfb0c30698fa0bfb36a4aca030a2d9edd382334f15667f341a9fdd4f76398 (image=quay.io/ceph/ceph:v18, name=romantic_khorana, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:25:02 compute-0 podman[73388]: 2025-10-11 04:25:02.098743745 +0000 UTC m=+0.236721756 container start c90cfb0c30698fa0bfb36a4aca030a2d9edd382334f15667f341a9fdd4f76398 (image=quay.io/ceph/ceph:v18, name=romantic_khorana, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:25:02 compute-0 romantic_khorana[73405]: 167 167
Oct 11 04:25:02 compute-0 systemd[1]: libpod-c90cfb0c30698fa0bfb36a4aca030a2d9edd382334f15667f341a9fdd4f76398.scope: Deactivated successfully.
Oct 11 04:25:02 compute-0 podman[73388]: 2025-10-11 04:25:02.105753431 +0000 UTC m=+0.243731442 container attach c90cfb0c30698fa0bfb36a4aca030a2d9edd382334f15667f341a9fdd4f76398 (image=quay.io/ceph/ceph:v18, name=romantic_khorana, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:25:02 compute-0 podman[73388]: 2025-10-11 04:25:02.106384719 +0000 UTC m=+0.244362700 container died c90cfb0c30698fa0bfb36a4aca030a2d9edd382334f15667f341a9fdd4f76398 (image=quay.io/ceph/ceph:v18, name=romantic_khorana, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 11 04:25:02 compute-0 podman[73388]: 2025-10-11 04:25:02.280421767 +0000 UTC m=+0.418399778 container remove c90cfb0c30698fa0bfb36a4aca030a2d9edd382334f15667f341a9fdd4f76398 (image=quay.io/ceph/ceph:v18, name=romantic_khorana, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:25:02 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 11 04:25:02 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 11 04:25:02 compute-0 systemd[1]: libpod-conmon-c90cfb0c30698fa0bfb36a4aca030a2d9edd382334f15667f341a9fdd4f76398.scope: Deactivated successfully.
Oct 11 04:25:02 compute-0 podman[73422]: 2025-10-11 04:25:02.381403878 +0000 UTC m=+0.069830369 container create 06262d2a2eab34db26a91a0449f84ede991c056f703ed7f83285adb996c7c46e (image=quay.io/ceph/ceph:v18, name=stupefied_mclaren, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:25:02 compute-0 systemd[1]: Started libpod-conmon-06262d2a2eab34db26a91a0449f84ede991c056f703ed7f83285adb996c7c46e.scope.
Oct 11 04:25:02 compute-0 podman[73422]: 2025-10-11 04:25:02.354315608 +0000 UTC m=+0.042742149 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:25:02 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:25:02 compute-0 podman[73422]: 2025-10-11 04:25:02.463593191 +0000 UTC m=+0.152019712 container init 06262d2a2eab34db26a91a0449f84ede991c056f703ed7f83285adb996c7c46e (image=quay.io/ceph/ceph:v18, name=stupefied_mclaren, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 11 04:25:02 compute-0 podman[73422]: 2025-10-11 04:25:02.472045368 +0000 UTC m=+0.160471859 container start 06262d2a2eab34db26a91a0449f84ede991c056f703ed7f83285adb996c7c46e (image=quay.io/ceph/ceph:v18, name=stupefied_mclaren, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:25:02 compute-0 podman[73422]: 2025-10-11 04:25:02.475994809 +0000 UTC m=+0.164421360 container attach 06262d2a2eab34db26a91a0449f84ede991c056f703ed7f83285adb996c7c46e (image=quay.io/ceph/ceph:v18, name=stupefied_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:25:02 compute-0 stupefied_mclaren[73438]: AQAe3OloaXZoHhAAcO96Tq/+19kN/rRz0SBi8Q==
Oct 11 04:25:02 compute-0 systemd[1]: libpod-06262d2a2eab34db26a91a0449f84ede991c056f703ed7f83285adb996c7c46e.scope: Deactivated successfully.
Oct 11 04:25:02 compute-0 podman[73422]: 2025-10-11 04:25:02.516152045 +0000 UTC m=+0.204578526 container died 06262d2a2eab34db26a91a0449f84ede991c056f703ed7f83285adb996c7c46e (image=quay.io/ceph/ceph:v18, name=stupefied_mclaren, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True)
Oct 11 04:25:02 compute-0 podman[73422]: 2025-10-11 04:25:02.56630304 +0000 UTC m=+0.254729521 container remove 06262d2a2eab34db26a91a0449f84ede991c056f703ed7f83285adb996c7c46e (image=quay.io/ceph/ceph:v18, name=stupefied_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 11 04:25:02 compute-0 systemd[1]: libpod-conmon-06262d2a2eab34db26a91a0449f84ede991c056f703ed7f83285adb996c7c46e.scope: Deactivated successfully.
Oct 11 04:25:02 compute-0 podman[73458]: 2025-10-11 04:25:02.649264266 +0000 UTC m=+0.054948431 container create 8344c8484e65d0fbab71ba81c1b79c0ff35bf8135caaf714b21aa3a4765de827 (image=quay.io/ceph/ceph:v18, name=thirsty_thompson, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 11 04:25:02 compute-0 systemd[1]: Started libpod-conmon-8344c8484e65d0fbab71ba81c1b79c0ff35bf8135caaf714b21aa3a4765de827.scope.
Oct 11 04:25:02 compute-0 podman[73458]: 2025-10-11 04:25:02.623254577 +0000 UTC m=+0.028938782 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:25:02 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:25:02 compute-0 podman[73458]: 2025-10-11 04:25:02.735701989 +0000 UTC m=+0.141386284 container init 8344c8484e65d0fbab71ba81c1b79c0ff35bf8135caaf714b21aa3a4765de827 (image=quay.io/ceph/ceph:v18, name=thirsty_thompson, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 11 04:25:02 compute-0 podman[73458]: 2025-10-11 04:25:02.744860475 +0000 UTC m=+0.150544670 container start 8344c8484e65d0fbab71ba81c1b79c0ff35bf8135caaf714b21aa3a4765de827 (image=quay.io/ceph/ceph:v18, name=thirsty_thompson, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 11 04:25:02 compute-0 podman[73458]: 2025-10-11 04:25:02.749747432 +0000 UTC m=+0.155431667 container attach 8344c8484e65d0fbab71ba81c1b79c0ff35bf8135caaf714b21aa3a4765de827 (image=quay.io/ceph/ceph:v18, name=thirsty_thompson, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:25:02 compute-0 thirsty_thompson[73475]: AQAe3Olop1jeLRAAaqQqMOvzUXyijajgWoH9Vg==
Oct 11 04:25:02 compute-0 systemd[1]: libpod-8344c8484e65d0fbab71ba81c1b79c0ff35bf8135caaf714b21aa3a4765de827.scope: Deactivated successfully.
Oct 11 04:25:02 compute-0 podman[73458]: 2025-10-11 04:25:02.773310333 +0000 UTC m=+0.178994508 container died 8344c8484e65d0fbab71ba81c1b79c0ff35bf8135caaf714b21aa3a4765de827 (image=quay.io/ceph/ceph:v18, name=thirsty_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 11 04:25:02 compute-0 podman[73458]: 2025-10-11 04:25:02.811212605 +0000 UTC m=+0.216896760 container remove 8344c8484e65d0fbab71ba81c1b79c0ff35bf8135caaf714b21aa3a4765de827 (image=quay.io/ceph/ceph:v18, name=thirsty_thompson, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:25:02 compute-0 systemd[1]: libpod-conmon-8344c8484e65d0fbab71ba81c1b79c0ff35bf8135caaf714b21aa3a4765de827.scope: Deactivated successfully.
Oct 11 04:25:02 compute-0 podman[73494]: 2025-10-11 04:25:02.884004976 +0000 UTC m=+0.050076545 container create a2abb37d05fcaa12c6d1268fafca1196a94468db1a05e5b6be2f2580c3ded679 (image=quay.io/ceph/ceph:v18, name=angry_margulis, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:25:02 compute-0 systemd[1]: Started libpod-conmon-a2abb37d05fcaa12c6d1268fafca1196a94468db1a05e5b6be2f2580c3ded679.scope.
Oct 11 04:25:02 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:25:02 compute-0 podman[73494]: 2025-10-11 04:25:02.858581703 +0000 UTC m=+0.024653312 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:25:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-af328672fcbfef3f5db56236e191bffa0c0dbfe77ae3d71faec46edbfbd6d6f2-merged.mount: Deactivated successfully.
Oct 11 04:25:03 compute-0 podman[73494]: 2025-10-11 04:25:03.378843746 +0000 UTC m=+0.544915315 container init a2abb37d05fcaa12c6d1268fafca1196a94468db1a05e5b6be2f2580c3ded679 (image=quay.io/ceph/ceph:v18, name=angry_margulis, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 11 04:25:03 compute-0 podman[73494]: 2025-10-11 04:25:03.38827521 +0000 UTC m=+0.554346779 container start a2abb37d05fcaa12c6d1268fafca1196a94468db1a05e5b6be2f2580c3ded679 (image=quay.io/ceph/ceph:v18, name=angry_margulis, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:25:03 compute-0 podman[73494]: 2025-10-11 04:25:03.393892208 +0000 UTC m=+0.559963767 container attach a2abb37d05fcaa12c6d1268fafca1196a94468db1a05e5b6be2f2580c3ded679 (image=quay.io/ceph/ceph:v18, name=angry_margulis, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 11 04:25:03 compute-0 angry_margulis[73510]: AQAf3OlotrNDGRAAxWM3rNo6AttGrAQ2vcCHQA==
Oct 11 04:25:03 compute-0 systemd[1]: libpod-a2abb37d05fcaa12c6d1268fafca1196a94468db1a05e5b6be2f2580c3ded679.scope: Deactivated successfully.
Oct 11 04:25:03 compute-0 podman[73494]: 2025-10-11 04:25:03.429755433 +0000 UTC m=+0.595826992 container died a2abb37d05fcaa12c6d1268fafca1196a94468db1a05e5b6be2f2580c3ded679 (image=quay.io/ceph/ceph:v18, name=angry_margulis, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 04:25:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-cc306a82eac28f2f12e2091033c67f3f53ae5493029052c32395fb1184aab99f-merged.mount: Deactivated successfully.
Oct 11 04:25:03 compute-0 podman[73494]: 2025-10-11 04:25:03.477547303 +0000 UTC m=+0.643618842 container remove a2abb37d05fcaa12c6d1268fafca1196a94468db1a05e5b6be2f2580c3ded679 (image=quay.io/ceph/ceph:v18, name=angry_margulis, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef)
Oct 11 04:25:03 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 11 04:25:03 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 11 04:25:03 compute-0 systemd[1]: libpod-conmon-a2abb37d05fcaa12c6d1268fafca1196a94468db1a05e5b6be2f2580c3ded679.scope: Deactivated successfully.
Oct 11 04:25:03 compute-0 podman[73530]: 2025-10-11 04:25:03.57629117 +0000 UTC m=+0.068271494 container create db883d368a5605e8f253f525d8adf5d48c3bb5aab3ea2284306b9acd161e2037 (image=quay.io/ceph/ceph:v18, name=practical_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:25:03 compute-0 systemd[1]: Started libpod-conmon-db883d368a5605e8f253f525d8adf5d48c3bb5aab3ea2284306b9acd161e2037.scope.
Oct 11 04:25:03 compute-0 podman[73530]: 2025-10-11 04:25:03.546057863 +0000 UTC m=+0.038038267 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:25:03 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:25:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81fc4bc76a89a817bbbc2d2bb75376402b689ad6060b97d0f02ab361b598a8ce/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:03 compute-0 podman[73530]: 2025-10-11 04:25:03.672789735 +0000 UTC m=+0.164770119 container init db883d368a5605e8f253f525d8adf5d48c3bb5aab3ea2284306b9acd161e2037 (image=quay.io/ceph/ceph:v18, name=practical_heisenberg, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:25:03 compute-0 podman[73530]: 2025-10-11 04:25:03.682482897 +0000 UTC m=+0.174463221 container start db883d368a5605e8f253f525d8adf5d48c3bb5aab3ea2284306b9acd161e2037 (image=quay.io/ceph/ceph:v18, name=practical_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 11 04:25:03 compute-0 podman[73530]: 2025-10-11 04:25:03.68687806 +0000 UTC m=+0.178858434 container attach db883d368a5605e8f253f525d8adf5d48c3bb5aab3ea2284306b9acd161e2037 (image=quay.io/ceph/ceph:v18, name=practical_heisenberg, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:25:03 compute-0 practical_heisenberg[73546]: /usr/bin/monmaptool: monmap file /tmp/monmap
Oct 11 04:25:03 compute-0 practical_heisenberg[73546]: setting min_mon_release = pacific
Oct 11 04:25:03 compute-0 practical_heisenberg[73546]: /usr/bin/monmaptool: set fsid to 166d0489-2ae7-59eb-961c-c1b5cda4b45a
Oct 11 04:25:03 compute-0 practical_heisenberg[73546]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Oct 11 04:25:03 compute-0 systemd[1]: libpod-db883d368a5605e8f253f525d8adf5d48c3bb5aab3ea2284306b9acd161e2037.scope: Deactivated successfully.
Oct 11 04:25:03 compute-0 conmon[73546]: conmon db883d368a5605e8f253 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-db883d368a5605e8f253f525d8adf5d48c3bb5aab3ea2284306b9acd161e2037.scope/container/memory.events
Oct 11 04:25:03 compute-0 podman[73530]: 2025-10-11 04:25:03.722475918 +0000 UTC m=+0.214456242 container died db883d368a5605e8f253f525d8adf5d48c3bb5aab3ea2284306b9acd161e2037 (image=quay.io/ceph/ceph:v18, name=practical_heisenberg, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:25:03 compute-0 podman[73530]: 2025-10-11 04:25:03.788716965 +0000 UTC m=+0.280697269 container remove db883d368a5605e8f253f525d8adf5d48c3bb5aab3ea2284306b9acd161e2037 (image=quay.io/ceph/ceph:v18, name=practical_heisenberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 11 04:25:03 compute-0 systemd[1]: libpod-conmon-db883d368a5605e8f253f525d8adf5d48c3bb5aab3ea2284306b9acd161e2037.scope: Deactivated successfully.
Oct 11 04:25:03 compute-0 podman[73567]: 2025-10-11 04:25:03.866721961 +0000 UTC m=+0.047512323 container create 6ad6568607c511132fa0c311a48852ff54abe539a5ef4d6080f6a20b7d5bb47c (image=quay.io/ceph/ceph:v18, name=busy_napier, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True)
Oct 11 04:25:03 compute-0 systemd[1]: Started libpod-conmon-6ad6568607c511132fa0c311a48852ff54abe539a5ef4d6080f6a20b7d5bb47c.scope.
Oct 11 04:25:03 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:25:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26961126e6ae9c8ffdec7d80b97e9d63d2458ef3f5bfa32827a7bf03defb584a/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26961126e6ae9c8ffdec7d80b97e9d63d2458ef3f5bfa32827a7bf03defb584a/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26961126e6ae9c8ffdec7d80b97e9d63d2458ef3f5bfa32827a7bf03defb584a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26961126e6ae9c8ffdec7d80b97e9d63d2458ef3f5bfa32827a7bf03defb584a/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:03 compute-0 podman[73567]: 2025-10-11 04:25:03.846067242 +0000 UTC m=+0.026857594 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:25:03 compute-0 podman[73567]: 2025-10-11 04:25:03.957509215 +0000 UTC m=+0.138299627 container init 6ad6568607c511132fa0c311a48852ff54abe539a5ef4d6080f6a20b7d5bb47c (image=quay.io/ceph/ceph:v18, name=busy_napier, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:25:03 compute-0 podman[73567]: 2025-10-11 04:25:03.965049186 +0000 UTC m=+0.145839508 container start 6ad6568607c511132fa0c311a48852ff54abe539a5ef4d6080f6a20b7d5bb47c (image=quay.io/ceph/ceph:v18, name=busy_napier, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef)
Oct 11 04:25:03 compute-0 podman[73567]: 2025-10-11 04:25:03.968214955 +0000 UTC m=+0.149005277 container attach 6ad6568607c511132fa0c311a48852ff54abe539a5ef4d6080f6a20b7d5bb47c (image=quay.io/ceph/ceph:v18, name=busy_napier, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 11 04:25:04 compute-0 systemd[1]: libpod-6ad6568607c511132fa0c311a48852ff54abe539a5ef4d6080f6a20b7d5bb47c.scope: Deactivated successfully.
Oct 11 04:25:04 compute-0 conmon[73583]: conmon 6ad6568607c511132fa0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6ad6568607c511132fa0c311a48852ff54abe539a5ef4d6080f6a20b7d5bb47c.scope/container/memory.events
Oct 11 04:25:04 compute-0 podman[73567]: 2025-10-11 04:25:04.083553728 +0000 UTC m=+0.264344090 container died 6ad6568607c511132fa0c311a48852ff54abe539a5ef4d6080f6a20b7d5bb47c (image=quay.io/ceph/ceph:v18, name=busy_napier, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:25:04 compute-0 podman[73567]: 2025-10-11 04:25:04.13892359 +0000 UTC m=+0.319713952 container remove 6ad6568607c511132fa0c311a48852ff54abe539a5ef4d6080f6a20b7d5bb47c (image=quay.io/ceph/ceph:v18, name=busy_napier, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:25:04 compute-0 systemd[1]: libpod-conmon-6ad6568607c511132fa0c311a48852ff54abe539a5ef4d6080f6a20b7d5bb47c.scope: Deactivated successfully.
Oct 11 04:25:04 compute-0 systemd[1]: Reloading.
Oct 11 04:25:04 compute-0 systemd-rc-local-generator[73653]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:25:04 compute-0 systemd-sysv-generator[73656]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:25:04 compute-0 systemd[1]: Reloading.
Oct 11 04:25:04 compute-0 systemd-rc-local-generator[73687]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:25:04 compute-0 systemd-sysv-generator[73690]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:25:04 compute-0 systemd[1]: Reached target All Ceph clusters and services.
Oct 11 04:25:04 compute-0 systemd[1]: Reloading.
Oct 11 04:25:04 compute-0 systemd-rc-local-generator[73728]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:25:04 compute-0 systemd-sysv-generator[73732]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:25:04 compute-0 systemd[1]: Reached target Ceph cluster 166d0489-2ae7-59eb-961c-c1b5cda4b45a.
Oct 11 04:25:05 compute-0 systemd[1]: Reloading.
Oct 11 04:25:05 compute-0 systemd-rc-local-generator[73767]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:25:05 compute-0 systemd-sysv-generator[73770]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:25:05 compute-0 systemd[1]: Reloading.
Oct 11 04:25:05 compute-0 systemd-sysv-generator[73811]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:25:05 compute-0 systemd-rc-local-generator[73808]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:25:05 compute-0 systemd[1]: Created slice Slice /system/ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a.
Oct 11 04:25:05 compute-0 systemd[1]: Reached target System Time Set.
Oct 11 04:25:05 compute-0 systemd[1]: Reached target System Time Synchronized.
Oct 11 04:25:05 compute-0 systemd[1]: Starting Ceph mon.compute-0 for 166d0489-2ae7-59eb-961c-c1b5cda4b45a...
Oct 11 04:25:05 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 11 04:25:05 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 11 04:25:05 compute-0 podman[73869]: 2025-10-11 04:25:05.93628845 +0000 UTC m=+0.052608036 container create a7c209c26891d92556d040aeb6d95ee29992d283171a835e76fb374b117cd3dc (image=quay.io/ceph/ceph:v18, name=ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 11 04:25:06 compute-0 podman[73869]: 2025-10-11 04:25:05.908916073 +0000 UTC m=+0.025235729 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:25:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22165027e547db55d53bdaf546b37ae09b3a96c7936e8b8647bffe487d4b32b0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22165027e547db55d53bdaf546b37ae09b3a96c7936e8b8647bffe487d4b32b0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22165027e547db55d53bdaf546b37ae09b3a96c7936e8b8647bffe487d4b32b0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22165027e547db55d53bdaf546b37ae09b3a96c7936e8b8647bffe487d4b32b0/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:06 compute-0 podman[73869]: 2025-10-11 04:25:06.111783068 +0000 UTC m=+0.228102704 container init a7c209c26891d92556d040aeb6d95ee29992d283171a835e76fb374b117cd3dc (image=quay.io/ceph/ceph:v18, name=ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mon-compute-0, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:25:06 compute-0 podman[73869]: 2025-10-11 04:25:06.12289815 +0000 UTC m=+0.239217746 container start a7c209c26891d92556d040aeb6d95ee29992d283171a835e76fb374b117cd3dc (image=quay.io/ceph/ceph:v18, name=ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:25:06 compute-0 bash[73869]: a7c209c26891d92556d040aeb6d95ee29992d283171a835e76fb374b117cd3dc
Oct 11 04:25:06 compute-0 systemd[1]: Started Ceph mon.compute-0 for 166d0489-2ae7-59eb-961c-c1b5cda4b45a.
Oct 11 04:25:06 compute-0 ceph-mon[73889]: set uid:gid to 167:167 (ceph:ceph)
Oct 11 04:25:06 compute-0 ceph-mon[73889]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Oct 11 04:25:06 compute-0 ceph-mon[73889]: pidfile_write: ignore empty --pid-file
Oct 11 04:25:06 compute-0 ceph-mon[73889]: load: jerasure load: lrc 
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb: RocksDB version: 7.9.2
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb: Git sha 0
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb: Compile date 2025-05-06 23:30:25
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb: DB SUMMARY
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb: DB Session ID:  QW23VO4A35DZX3TQNV4E
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb: CURRENT file:  CURRENT
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb: IDENTITY file:  IDENTITY
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:                         Options.error_if_exists: 0
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:                       Options.create_if_missing: 0
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:                         Options.paranoid_checks: 1
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:                                     Options.env: 0x56266fc8bc40
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:                                      Options.fs: PosixFileSystem
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:                                Options.info_log: 0x562670746e80
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:                Options.max_file_opening_threads: 16
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:                              Options.statistics: (nil)
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:                               Options.use_fsync: 0
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:                       Options.max_log_file_size: 0
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:                         Options.allow_fallocate: 1
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:                        Options.use_direct_reads: 0
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:          Options.create_missing_column_families: 0
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:                              Options.db_log_dir: 
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:                                 Options.wal_dir: 
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:                   Options.advise_random_on_open: 1
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:                    Options.write_buffer_manager: 0x562670756b40
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:                            Options.rate_limiter: (nil)
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:                  Options.unordered_write: 0
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:                               Options.row_cache: None
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:                              Options.wal_filter: None
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:             Options.allow_ingest_behind: 0
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:             Options.two_write_queues: 0
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:             Options.manual_wal_flush: 0
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:             Options.wal_compression: 0
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:             Options.atomic_flush: 0
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:                 Options.log_readahead_size: 0
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:             Options.allow_data_in_errors: 0
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:             Options.db_host_id: __hostname__
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:             Options.max_background_jobs: 2
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:             Options.max_background_compactions: -1
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:             Options.max_subcompactions: 1
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:             Options.max_total_wal_size: 0
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:                          Options.max_open_files: -1
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:                          Options.bytes_per_sync: 0
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:       Options.compaction_readahead_size: 0
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:                  Options.max_background_flushes: -1
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb: Compression algorithms supported:
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:         kZSTD supported: 0
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:         kXpressCompression supported: 0
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:         kBZip2Compression supported: 0
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:         kZSTDNotFinalCompression supported: 0
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:         kLZ4Compression supported: 1
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:         kZlibCompression supported: 1
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:         kLZ4HCCompression supported: 1
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:         kSnappyCompression supported: 1
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:           Options.merge_operator: 
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:        Options.compaction_filter: None
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562670746a80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56267073f1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:        Options.write_buffer_size: 33554432
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:  Options.max_write_buffer_number: 2
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:          Options.compression: NoCompression
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:             Options.num_levels: 7
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 7a520806-b9ee-4391-a2e1-17ca2b78e946
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760156706190381, "job": 1, "event": "recovery_started", "wal_files": [4]}
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760156706225534, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760156706, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7a520806-b9ee-4391-a2e1-17ca2b78e946", "db_session_id": "QW23VO4A35DZX3TQNV4E", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760156706225766, "job": 1, "event": "recovery_finished"}
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x562670768e00
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb: DB pointer 0x5626707f2000
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 04:25:06 compute-0 ceph-mon[73889]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.4 total, 0.4 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.04              0.00         1    0.035       0      0       0.0       0.0
                                            Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.04              0.00         1    0.035       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.04              0.00         1    0.035       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.04              0.00         1    0.035       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.4 total, 0.4 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56267073f1f0#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 11 04:25:06 compute-0 ceph-mon[73889]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a
Oct 11 04:25:06 compute-0 ceph-mon[73889]: mon.compute-0@-1(???) e0 preinit fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a
Oct 11 04:25:06 compute-0 ceph-mon[73889]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Oct 11 04:25:06 compute-0 ceph-mon[73889]: mon.compute-0@0(probing) e0 win_standalone_election
Oct 11 04:25:06 compute-0 ceph-mon[73889]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Oct 11 04:25:06 compute-0 podman[73911]: 2025-10-11 04:25:06.581713491 +0000 UTC m=+0.136627641 container create 9cfce5bca3f36dfcfbbb9ec363e30497a2435b4da6756e516595904a6e29817c (image=quay.io/ceph/ceph:v18, name=gracious_curran, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:25:06 compute-0 podman[73911]: 2025-10-11 04:25:06.492574832 +0000 UTC m=+0.047489162 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:25:06 compute-0 ceph-mon[73889]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 11 04:25:06 compute-0 ceph-mon[73889]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct 11 04:25:06 compute-0 ceph-mon[73889]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Oct 11 04:25:06 compute-0 ceph-mon[73889]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Oct 11 04:25:06 compute-0 ceph-mon[73889]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Oct 11 04:25:06 compute-0 ceph-mon[73889]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Oct 11 04:25:06 compute-0 ceph-mon[73889]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 11 04:25:06 compute-0 ceph-mon[73889]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Oct 11 04:25:06 compute-0 systemd[1]: Started libpod-conmon-9cfce5bca3f36dfcfbbb9ec363e30497a2435b4da6756e516595904a6e29817c.scope.
Oct 11 04:25:06 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:25:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4b82765abb95b07a238ee72526660b5ec361c11d4342250b98a87fa35b79516/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4b82765abb95b07a238ee72526660b5ec361c11d4342250b98a87fa35b79516/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4b82765abb95b07a238ee72526660b5ec361c11d4342250b98a87fa35b79516/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:06 compute-0 ceph-mon[73889]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Oct 11 04:25:06 compute-0 ceph-mon[73889]: mon.compute-0@0(probing) e1 win_standalone_election
Oct 11 04:25:06 compute-0 ceph-mon[73889]: paxos.0).electionLogic(2) init, last seen epoch 2
Oct 11 04:25:06 compute-0 ceph-mon[73889]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 11 04:25:06 compute-0 ceph-mon[73889]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct 11 04:25:06 compute-0 ceph-mon[73889]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Oct 11 04:25:06 compute-0 ceph-mon[73889]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 11 04:25:06 compute-0 ceph-mon[73889]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,ceph_version_when_created=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v18,cpu=AMD EPYC-Rome Processor,created_at=2025-10-11T04:25:04.021570Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Tue Sep 30 07:37:35 UTC 2025,kernel_version=5.14.0-621.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864344,os=Linux}
Oct 11 04:25:06 compute-0 podman[73911]: 2025-10-11 04:25:06.857260014 +0000 UTC m=+0.412174254 container init 9cfce5bca3f36dfcfbbb9ec363e30497a2435b4da6756e516595904a6e29817c (image=quay.io/ceph/ceph:v18, name=gracious_curran, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 11 04:25:06 compute-0 ceph-mon[73889]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Oct 11 04:25:06 compute-0 ceph-mon[73889]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Oct 11 04:25:06 compute-0 ceph-mon[73889]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Oct 11 04:25:06 compute-0 ceph-mon[73889]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Oct 11 04:25:06 compute-0 ceph-mon[73889]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 11 04:25:06 compute-0 ceph-mon[73889]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout}
Oct 11 04:25:06 compute-0 ceph-mon[73889]: mon.compute-0@0(leader).mds e1 new map
Oct 11 04:25:06 compute-0 ceph-mon[73889]: mon.compute-0@0(leader).mds e1 print_map
                                           e1
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Oct 11 04:25:06 compute-0 ceph-mon[73889]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Oct 11 04:25:06 compute-0 ceph-mon[73889]: log_channel(cluster) log [DBG] : fsmap 
Oct 11 04:25:06 compute-0 ceph-mon[73889]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Oct 11 04:25:06 compute-0 ceph-mon[73889]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Oct 11 04:25:06 compute-0 ceph-mon[73889]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Oct 11 04:25:06 compute-0 podman[73911]: 2025-10-11 04:25:06.873484309 +0000 UTC m=+0.428398479 container start 9cfce5bca3f36dfcfbbb9ec363e30497a2435b4da6756e516595904a6e29817c (image=quay.io/ceph/ceph:v18, name=gracious_curran, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:25:06 compute-0 ceph-mon[73889]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Oct 11 04:25:06 compute-0 ceph-mon[73889]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct 11 04:25:06 compute-0 ceph-mon[73889]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct 11 04:25:06 compute-0 ceph-mon[73889]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct 11 04:25:06 compute-0 ceph-mon[73889]: mkfs 166d0489-2ae7-59eb-961c-c1b5cda4b45a
Oct 11 04:25:06 compute-0 podman[73911]: 2025-10-11 04:25:06.925595879 +0000 UTC m=+0.480510119 container attach 9cfce5bca3f36dfcfbbb9ec363e30497a2435b4da6756e516595904a6e29817c (image=quay.io/ceph/ceph:v18, name=gracious_curran, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Oct 11 04:25:06 compute-0 ceph-mon[73889]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Oct 11 04:25:06 compute-0 ceph-mon[73889]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Oct 11 04:25:06 compute-0 ceph-mon[73889]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Oct 11 04:25:06 compute-0 ceph-mon[73889]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct 11 04:25:07 compute-0 ceph-mon[73889]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Oct 11 04:25:07 compute-0 ceph-mon[73889]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/616161605' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 11 04:25:07 compute-0 gracious_curran[73945]:   cluster:
Oct 11 04:25:07 compute-0 gracious_curran[73945]:     id:     166d0489-2ae7-59eb-961c-c1b5cda4b45a
Oct 11 04:25:07 compute-0 gracious_curran[73945]:     health: HEALTH_OK
Oct 11 04:25:07 compute-0 gracious_curran[73945]:  
Oct 11 04:25:07 compute-0 gracious_curran[73945]:   services:
Oct 11 04:25:07 compute-0 gracious_curran[73945]:     mon: 1 daemons, quorum compute-0 (age 0.485773s)
Oct 11 04:25:07 compute-0 gracious_curran[73945]:     mgr: no daemons active
Oct 11 04:25:07 compute-0 gracious_curran[73945]:     osd: 0 osds: 0 up, 0 in
Oct 11 04:25:07 compute-0 gracious_curran[73945]:  
Oct 11 04:25:07 compute-0 gracious_curran[73945]:   data:
Oct 11 04:25:07 compute-0 gracious_curran[73945]:     pools:   0 pools, 0 pgs
Oct 11 04:25:07 compute-0 gracious_curran[73945]:     objects: 0 objects, 0 B
Oct 11 04:25:07 compute-0 gracious_curran[73945]:     usage:   0 B used, 0 B / 0 B avail
Oct 11 04:25:07 compute-0 gracious_curran[73945]:     pgs:     
Oct 11 04:25:07 compute-0 gracious_curran[73945]:  
Oct 11 04:25:07 compute-0 systemd[1]: libpod-9cfce5bca3f36dfcfbbb9ec363e30497a2435b4da6756e516595904a6e29817c.scope: Deactivated successfully.
Oct 11 04:25:07 compute-0 podman[73911]: 2025-10-11 04:25:07.303826491 +0000 UTC m=+0.858740671 container died 9cfce5bca3f36dfcfbbb9ec363e30497a2435b4da6756e516595904a6e29817c (image=quay.io/ceph/ceph:v18, name=gracious_curran, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0)
Oct 11 04:25:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-d4b82765abb95b07a238ee72526660b5ec361c11d4342250b98a87fa35b79516-merged.mount: Deactivated successfully.
Oct 11 04:25:07 compute-0 podman[73911]: 2025-10-11 04:25:07.369677917 +0000 UTC m=+0.924592097 container remove 9cfce5bca3f36dfcfbbb9ec363e30497a2435b4da6756e516595904a6e29817c (image=quay.io/ceph/ceph:v18, name=gracious_curran, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:25:07 compute-0 systemd[1]: libpod-conmon-9cfce5bca3f36dfcfbbb9ec363e30497a2435b4da6756e516595904a6e29817c.scope: Deactivated successfully.
Oct 11 04:25:07 compute-0 podman[73982]: 2025-10-11 04:25:07.48502063 +0000 UTC m=+0.077954266 container create b14887724552687377562f42f4c76af9ee8d0fdd3e183271a65c903a73b70fff (image=quay.io/ceph/ceph:v18, name=friendly_heyrovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:25:07 compute-0 systemd[1]: Started libpod-conmon-b14887724552687377562f42f4c76af9ee8d0fdd3e183271a65c903a73b70fff.scope.
Oct 11 04:25:07 compute-0 podman[73982]: 2025-10-11 04:25:07.45504338 +0000 UTC m=+0.047977046 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:25:07 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:25:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63f81d872dc2c3ab464d80db531e15b5a72b34ffc4b9cbe29004260f0f16cafe/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63f81d872dc2c3ab464d80db531e15b5a72b34ffc4b9cbe29004260f0f16cafe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63f81d872dc2c3ab464d80db531e15b5a72b34ffc4b9cbe29004260f0f16cafe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63f81d872dc2c3ab464d80db531e15b5a72b34ffc4b9cbe29004260f0f16cafe/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:07 compute-0 podman[73982]: 2025-10-11 04:25:07.605096415 +0000 UTC m=+0.198030071 container init b14887724552687377562f42f4c76af9ee8d0fdd3e183271a65c903a73b70fff (image=quay.io/ceph/ceph:v18, name=friendly_heyrovsky, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 11 04:25:07 compute-0 podman[73982]: 2025-10-11 04:25:07.621147585 +0000 UTC m=+0.214081191 container start b14887724552687377562f42f4c76af9ee8d0fdd3e183271a65c903a73b70fff (image=quay.io/ceph/ceph:v18, name=friendly_heyrovsky, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 11 04:25:07 compute-0 podman[73982]: 2025-10-11 04:25:07.624379755 +0000 UTC m=+0.217313381 container attach b14887724552687377562f42f4c76af9ee8d0fdd3e183271a65c903a73b70fff (image=quay.io/ceph/ceph:v18, name=friendly_heyrovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 11 04:25:07 compute-0 ceph-mon[73889]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct 11 04:25:07 compute-0 ceph-mon[73889]: monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Oct 11 04:25:07 compute-0 ceph-mon[73889]: fsmap 
Oct 11 04:25:07 compute-0 ceph-mon[73889]: osdmap e1: 0 total, 0 up, 0 in
Oct 11 04:25:07 compute-0 ceph-mon[73889]: mgrmap e1: no daemons active
Oct 11 04:25:07 compute-0 ceph-mon[73889]: from='client.? 192.168.122.100:0/616161605' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 11 04:25:08 compute-0 ceph-mon[73889]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Oct 11 04:25:08 compute-0 ceph-mon[73889]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2327184976' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct 11 04:25:08 compute-0 ceph-mon[73889]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2327184976' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Oct 11 04:25:08 compute-0 friendly_heyrovsky[73999]: 
Oct 11 04:25:08 compute-0 friendly_heyrovsky[73999]: [global]
Oct 11 04:25:08 compute-0 friendly_heyrovsky[73999]:         fsid = 166d0489-2ae7-59eb-961c-c1b5cda4b45a
Oct 11 04:25:08 compute-0 friendly_heyrovsky[73999]:         mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Oct 11 04:25:08 compute-0 friendly_heyrovsky[73999]:         osd_crush_chooseleaf_type = 0
Oct 11 04:25:08 compute-0 systemd[1]: libpod-b14887724552687377562f42f4c76af9ee8d0fdd3e183271a65c903a73b70fff.scope: Deactivated successfully.
Oct 11 04:25:08 compute-0 conmon[73999]: conmon b1488772455268737756 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b14887724552687377562f42f4c76af9ee8d0fdd3e183271a65c903a73b70fff.scope/container/memory.events
Oct 11 04:25:08 compute-0 podman[73982]: 2025-10-11 04:25:08.025365645 +0000 UTC m=+0.618299261 container died b14887724552687377562f42f4c76af9ee8d0fdd3e183271a65c903a73b70fff (image=quay.io/ceph/ceph:v18, name=friendly_heyrovsky, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 11 04:25:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-63f81d872dc2c3ab464d80db531e15b5a72b34ffc4b9cbe29004260f0f16cafe-merged.mount: Deactivated successfully.
Oct 11 04:25:08 compute-0 podman[73982]: 2025-10-11 04:25:08.073975267 +0000 UTC m=+0.666908843 container remove b14887724552687377562f42f4c76af9ee8d0fdd3e183271a65c903a73b70fff (image=quay.io/ceph/ceph:v18, name=friendly_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Oct 11 04:25:08 compute-0 systemd[1]: libpod-conmon-b14887724552687377562f42f4c76af9ee8d0fdd3e183271a65c903a73b70fff.scope: Deactivated successfully.
Oct 11 04:25:08 compute-0 podman[74036]: 2025-10-11 04:25:08.158103555 +0000 UTC m=+0.055942789 container create 248ef0ae4eaa7e7826bcc13d75b48714c8861cae67b5d9f6f8dd8f0d79d0ae1b (image=quay.io/ceph/ceph:v18, name=tender_maxwell, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 11 04:25:08 compute-0 systemd[1]: Started libpod-conmon-248ef0ae4eaa7e7826bcc13d75b48714c8861cae67b5d9f6f8dd8f0d79d0ae1b.scope.
Oct 11 04:25:08 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:25:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a53743825102b28fd610ce03ed509b2272b455a16bcf6a7ee6b5ea80a716968/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a53743825102b28fd610ce03ed509b2272b455a16bcf6a7ee6b5ea80a716968/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a53743825102b28fd610ce03ed509b2272b455a16bcf6a7ee6b5ea80a716968/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a53743825102b28fd610ce03ed509b2272b455a16bcf6a7ee6b5ea80a716968/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:08 compute-0 podman[74036]: 2025-10-11 04:25:08.140403099 +0000 UTC m=+0.038242363 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:25:08 compute-0 podman[74036]: 2025-10-11 04:25:08.2367433 +0000 UTC m=+0.134582544 container init 248ef0ae4eaa7e7826bcc13d75b48714c8861cae67b5d9f6f8dd8f0d79d0ae1b (image=quay.io/ceph/ceph:v18, name=tender_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 11 04:25:08 compute-0 podman[74036]: 2025-10-11 04:25:08.246700989 +0000 UTC m=+0.144540223 container start 248ef0ae4eaa7e7826bcc13d75b48714c8861cae67b5d9f6f8dd8f0d79d0ae1b (image=quay.io/ceph/ceph:v18, name=tender_maxwell, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:25:08 compute-0 podman[74036]: 2025-10-11 04:25:08.250837305 +0000 UTC m=+0.148676539 container attach 248ef0ae4eaa7e7826bcc13d75b48714c8861cae67b5d9f6f8dd8f0d79d0ae1b (image=quay.io/ceph/ceph:v18, name=tender_maxwell, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 11 04:25:08 compute-0 ceph-mon[73889]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:25:08 compute-0 ceph-mon[73889]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/362048721' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:25:08 compute-0 systemd[1]: libpod-248ef0ae4eaa7e7826bcc13d75b48714c8861cae67b5d9f6f8dd8f0d79d0ae1b.scope: Deactivated successfully.
Oct 11 04:25:08 compute-0 podman[74036]: 2025-10-11 04:25:08.658279715 +0000 UTC m=+0.556118989 container died 248ef0ae4eaa7e7826bcc13d75b48714c8861cae67b5d9f6f8dd8f0d79d0ae1b (image=quay.io/ceph/ceph:v18, name=tender_maxwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 11 04:25:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-2a53743825102b28fd610ce03ed509b2272b455a16bcf6a7ee6b5ea80a716968-merged.mount: Deactivated successfully.
Oct 11 04:25:08 compute-0 podman[74036]: 2025-10-11 04:25:08.71517806 +0000 UTC m=+0.613017344 container remove 248ef0ae4eaa7e7826bcc13d75b48714c8861cae67b5d9f6f8dd8f0d79d0ae1b (image=quay.io/ceph/ceph:v18, name=tender_maxwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3)
Oct 11 04:25:08 compute-0 systemd[1]: libpod-conmon-248ef0ae4eaa7e7826bcc13d75b48714c8861cae67b5d9f6f8dd8f0d79d0ae1b.scope: Deactivated successfully.
Oct 11 04:25:08 compute-0 systemd[1]: Stopping Ceph mon.compute-0 for 166d0489-2ae7-59eb-961c-c1b5cda4b45a...
Oct 11 04:25:08 compute-0 ceph-mon[73889]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Oct 11 04:25:08 compute-0 ceph-mon[73889]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Oct 11 04:25:08 compute-0 ceph-mon[73889]: mon.compute-0@0(leader) e1 shutdown
Oct 11 04:25:08 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mon-compute-0[73885]: 2025-10-11T04:25:08.965+0000 7f6ce1ebf640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Oct 11 04:25:08 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mon-compute-0[73885]: 2025-10-11T04:25:08.965+0000 7f6ce1ebf640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Oct 11 04:25:08 compute-0 ceph-mon[73889]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Oct 11 04:25:08 compute-0 ceph-mon[73889]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Oct 11 04:25:09 compute-0 podman[74122]: 2025-10-11 04:25:09.007505884 +0000 UTC m=+0.092469323 container died a7c209c26891d92556d040aeb6d95ee29992d283171a835e76fb374b117cd3dc (image=quay.io/ceph/ceph:v18, name=ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mon-compute-0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 11 04:25:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-22165027e547db55d53bdaf546b37ae09b3a96c7936e8b8647bffe487d4b32b0-merged.mount: Deactivated successfully.
Oct 11 04:25:09 compute-0 podman[74122]: 2025-10-11 04:25:09.059407669 +0000 UTC m=+0.144371088 container remove a7c209c26891d92556d040aeb6d95ee29992d283171a835e76fb374b117cd3dc (image=quay.io/ceph/ceph:v18, name=ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:25:09 compute-0 bash[74122]: ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mon-compute-0
Oct 11 04:25:09 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 11 04:25:09 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 11 04:25:09 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 11 04:25:09 compute-0 systemd[1]: ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a@mon.compute-0.service: Deactivated successfully.
Oct 11 04:25:09 compute-0 systemd[1]: Stopped Ceph mon.compute-0 for 166d0489-2ae7-59eb-961c-c1b5cda4b45a.
Oct 11 04:25:09 compute-0 systemd[1]: ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a@mon.compute-0.service: Consumed 1.305s CPU time.
Oct 11 04:25:09 compute-0 systemd[1]: Starting Ceph mon.compute-0 for 166d0489-2ae7-59eb-961c-c1b5cda4b45a...
Oct 11 04:25:09 compute-0 podman[74224]: 2025-10-11 04:25:09.578772617 +0000 UTC m=+0.060934049 container create 9e6c9a4e99dcfe74f2acde1b3251aae6bd833ab8c3a16ed08813556f7d4dbff5 (image=quay.io/ceph/ceph:v18, name=ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mon-compute-0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 11 04:25:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be1b573f22c53c8fb096d1bb39fecbf3912b97da4cbe05fa7c0f037349c87d3b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be1b573f22c53c8fb096d1bb39fecbf3912b97da4cbe05fa7c0f037349c87d3b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be1b573f22c53c8fb096d1bb39fecbf3912b97da4cbe05fa7c0f037349c87d3b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be1b573f22c53c8fb096d1bb39fecbf3912b97da4cbe05fa7c0f037349c87d3b/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:09 compute-0 podman[74224]: 2025-10-11 04:25:09.550245257 +0000 UTC m=+0.032406699 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:25:09 compute-0 podman[74224]: 2025-10-11 04:25:09.669239842 +0000 UTC m=+0.151401354 container init 9e6c9a4e99dcfe74f2acde1b3251aae6bd833ab8c3a16ed08813556f7d4dbff5 (image=quay.io/ceph/ceph:v18, name=ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:25:09 compute-0 podman[74224]: 2025-10-11 04:25:09.674933382 +0000 UTC m=+0.157094834 container start 9e6c9a4e99dcfe74f2acde1b3251aae6bd833ab8c3a16ed08813556f7d4dbff5 (image=quay.io/ceph/ceph:v18, name=ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 11 04:25:09 compute-0 bash[74224]: 9e6c9a4e99dcfe74f2acde1b3251aae6bd833ab8c3a16ed08813556f7d4dbff5
Oct 11 04:25:09 compute-0 systemd[1]: Started Ceph mon.compute-0 for 166d0489-2ae7-59eb-961c-c1b5cda4b45a.
Oct 11 04:25:09 compute-0 ceph-mon[74243]: set uid:gid to 167:167 (ceph:ceph)
Oct 11 04:25:09 compute-0 ceph-mon[74243]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Oct 11 04:25:09 compute-0 ceph-mon[74243]: pidfile_write: ignore empty --pid-file
Oct 11 04:25:09 compute-0 ceph-mon[74243]: load: jerasure load: lrc 
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb: RocksDB version: 7.9.2
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb: Git sha 0
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb: Compile date 2025-05-06 23:30:25
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb: DB SUMMARY
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb: DB Session ID:  LQX577T3ABHDCRGGY4EM
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb: CURRENT file:  CURRENT
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb: IDENTITY file:  IDENTITY
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 52074 ; 
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:                         Options.error_if_exists: 0
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:                       Options.create_if_missing: 0
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:                         Options.paranoid_checks: 1
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:                                     Options.env: 0x563d461c8c40
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:                                      Options.fs: PosixFileSystem
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:                                Options.info_log: 0x563d484ab040
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:                Options.max_file_opening_threads: 16
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:                              Options.statistics: (nil)
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:                               Options.use_fsync: 0
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:                       Options.max_log_file_size: 0
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:                         Options.allow_fallocate: 1
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:                        Options.use_direct_reads: 0
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:          Options.create_missing_column_families: 0
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:                              Options.db_log_dir: 
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:                                 Options.wal_dir: 
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:                   Options.advise_random_on_open: 1
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:                    Options.write_buffer_manager: 0x563d484bab40
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:                            Options.rate_limiter: (nil)
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:                  Options.unordered_write: 0
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:                               Options.row_cache: None
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:                              Options.wal_filter: None
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:             Options.allow_ingest_behind: 0
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:             Options.two_write_queues: 0
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:             Options.manual_wal_flush: 0
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:             Options.wal_compression: 0
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:             Options.atomic_flush: 0
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:                 Options.log_readahead_size: 0
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:             Options.allow_data_in_errors: 0
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:             Options.db_host_id: __hostname__
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:             Options.max_background_jobs: 2
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:             Options.max_background_compactions: -1
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:             Options.max_subcompactions: 1
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:             Options.max_total_wal_size: 0
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:                          Options.max_open_files: -1
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:                          Options.bytes_per_sync: 0
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:       Options.compaction_readahead_size: 0
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:                  Options.max_background_flushes: -1
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb: Compression algorithms supported:
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:         kZSTD supported: 0
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:         kXpressCompression supported: 0
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:         kBZip2Compression supported: 0
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:         kZSTDNotFinalCompression supported: 0
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:         kLZ4Compression supported: 1
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:         kZlibCompression supported: 1
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:         kLZ4HCCompression supported: 1
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:         kSnappyCompression supported: 1
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:           Options.merge_operator: 
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:        Options.compaction_filter: None
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563d484aac40)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563d484a31f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:        Options.write_buffer_size: 33554432
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:  Options.max_write_buffer_number: 2
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:          Options.compression: NoCompression
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:             Options.num_levels: 7
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 7a520806-b9ee-4391-a2e1-17ca2b78e946
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760156709727600, "job": 1, "event": "recovery_started", "wal_files": [9]}
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760156709732312, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 51790, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 129, "table_properties": {"data_size": 50347, "index_size": 149, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 261, "raw_key_size": 2940, "raw_average_key_size": 30, "raw_value_size": 48026, "raw_average_value_size": 500, "num_data_blocks": 7, "num_entries": 96, "num_filter_entries": 96, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760156709, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7a520806-b9ee-4391-a2e1-17ca2b78e946", "db_session_id": "LQX577T3ABHDCRGGY4EM", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760156709732456, "job": 1, "event": "recovery_finished"}
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x563d484cce00
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb: DB pointer 0x563d485d4000
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 04:25:09 compute-0 ceph-mon[74243]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0   52.47 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     11.5      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0   52.47 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     11.5      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     11.5      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     11.5      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 2.78 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 2.78 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563d484a31f0#2 capacity: 512.00 MB usage: 5.80 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(2,0.42 KB,8.04663e-05%) IndexBlock(2,0.34 KB,6.55651e-05%) Misc(3,5.03 KB,0.000959635%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 11 04:25:09 compute-0 ceph-mon[74243]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a
Oct 11 04:25:09 compute-0 ceph-mon[74243]: mon.compute-0@-1(???) e1 preinit fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a
Oct 11 04:25:09 compute-0 ceph-mon[74243]: mon.compute-0@-1(???).mds e1 new map
Oct 11 04:25:09 compute-0 ceph-mon[74243]: mon.compute-0@-1(???).mds e1 print_map
                                           e1
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Oct 11 04:25:09 compute-0 ceph-mon[74243]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Oct 11 04:25:09 compute-0 ceph-mon[74243]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct 11 04:25:09 compute-0 ceph-mon[74243]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct 11 04:25:09 compute-0 ceph-mon[74243]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct 11 04:25:09 compute-0 ceph-mon[74243]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Oct 11 04:25:09 compute-0 ceph-mon[74243]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Oct 11 04:25:09 compute-0 ceph-mon[74243]: mon.compute-0@0(probing) e1 win_standalone_election
Oct 11 04:25:09 compute-0 ceph-mon[74243]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Oct 11 04:25:09 compute-0 ceph-mon[74243]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 11 04:25:09 compute-0 ceph-mon[74243]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct 11 04:25:09 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Oct 11 04:25:09 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 11 04:25:09 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : fsmap 
Oct 11 04:25:09 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Oct 11 04:25:09 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Oct 11 04:25:09 compute-0 podman[74244]: 2025-10-11 04:25:09.786555641 +0000 UTC m=+0.061382082 container create 6d07c0ef6a7ceee9254883e6c4b00c768bbc3499dd6c6f54ceaeb153ff709cd6 (image=quay.io/ceph/ceph:v18, name=epic_panini, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 11 04:25:09 compute-0 ceph-mon[74243]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct 11 04:25:09 compute-0 ceph-mon[74243]: monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Oct 11 04:25:09 compute-0 ceph-mon[74243]: fsmap 
Oct 11 04:25:09 compute-0 ceph-mon[74243]: osdmap e1: 0 total, 0 up, 0 in
Oct 11 04:25:09 compute-0 ceph-mon[74243]: mgrmap e1: no daemons active
Oct 11 04:25:09 compute-0 systemd[1]: Started libpod-conmon-6d07c0ef6a7ceee9254883e6c4b00c768bbc3499dd6c6f54ceaeb153ff709cd6.scope.
Oct 11 04:25:09 compute-0 podman[74244]: 2025-10-11 04:25:09.759705358 +0000 UTC m=+0.034531839 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:25:09 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:25:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50d48f615c60433a7de20988e47b8e4a70c63dc0cc3723f088775a7e11e4e726/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50d48f615c60433a7de20988e47b8e4a70c63dc0cc3723f088775a7e11e4e726/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50d48f615c60433a7de20988e47b8e4a70c63dc0cc3723f088775a7e11e4e726/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:09 compute-0 podman[74244]: 2025-10-11 04:25:09.897000567 +0000 UTC m=+0.171827028 container init 6d07c0ef6a7ceee9254883e6c4b00c768bbc3499dd6c6f54ceaeb153ff709cd6 (image=quay.io/ceph/ceph:v18, name=epic_panini, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:25:09 compute-0 podman[74244]: 2025-10-11 04:25:09.908834888 +0000 UTC m=+0.183661329 container start 6d07c0ef6a7ceee9254883e6c4b00c768bbc3499dd6c6f54ceaeb153ff709cd6 (image=quay.io/ceph/ceph:v18, name=epic_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 11 04:25:09 compute-0 podman[74244]: 2025-10-11 04:25:09.912863401 +0000 UTC m=+0.187689862 container attach 6d07c0ef6a7ceee9254883e6c4b00c768bbc3499dd6c6f54ceaeb153ff709cd6 (image=quay.io/ceph/ceph:v18, name=epic_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:25:10 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0) v1
Oct 11 04:25:10 compute-0 systemd[1]: libpod-6d07c0ef6a7ceee9254883e6c4b00c768bbc3499dd6c6f54ceaeb153ff709cd6.scope: Deactivated successfully.
Oct 11 04:25:10 compute-0 podman[74244]: 2025-10-11 04:25:10.36197103 +0000 UTC m=+0.636797561 container died 6d07c0ef6a7ceee9254883e6c4b00c768bbc3499dd6c6f54ceaeb153ff709cd6 (image=quay.io/ceph/ceph:v18, name=epic_panini, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 11 04:25:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-50d48f615c60433a7de20988e47b8e4a70c63dc0cc3723f088775a7e11e4e726-merged.mount: Deactivated successfully.
Oct 11 04:25:10 compute-0 podman[74244]: 2025-10-11 04:25:10.418891495 +0000 UTC m=+0.693717976 container remove 6d07c0ef6a7ceee9254883e6c4b00c768bbc3499dd6c6f54ceaeb153ff709cd6 (image=quay.io/ceph/ceph:v18, name=epic_panini, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 11 04:25:10 compute-0 systemd[1]: libpod-conmon-6d07c0ef6a7ceee9254883e6c4b00c768bbc3499dd6c6f54ceaeb153ff709cd6.scope: Deactivated successfully.
Oct 11 04:25:10 compute-0 podman[74339]: 2025-10-11 04:25:10.501752128 +0000 UTC m=+0.053742808 container create 0c92faed03a69a21bdf18559644b532d5bc7ff4cc719970c403e1f2d5dbc0866 (image=quay.io/ceph/ceph:v18, name=lucid_lewin, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:25:10 compute-0 systemd[1]: Started libpod-conmon-0c92faed03a69a21bdf18559644b532d5bc7ff4cc719970c403e1f2d5dbc0866.scope.
Oct 11 04:25:10 compute-0 podman[74339]: 2025-10-11 04:25:10.478517507 +0000 UTC m=+0.030508187 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:25:10 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:25:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64217098259c3c20b379fbcebd2653a475193296004bf6039482e60a2ea5c2a5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64217098259c3c20b379fbcebd2653a475193296004bf6039482e60a2ea5c2a5/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64217098259c3c20b379fbcebd2653a475193296004bf6039482e60a2ea5c2a5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:10 compute-0 podman[74339]: 2025-10-11 04:25:10.608080758 +0000 UTC m=+0.160071488 container init 0c92faed03a69a21bdf18559644b532d5bc7ff4cc719970c403e1f2d5dbc0866 (image=quay.io/ceph/ceph:v18, name=lucid_lewin, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:25:10 compute-0 podman[74339]: 2025-10-11 04:25:10.617474902 +0000 UTC m=+0.169465592 container start 0c92faed03a69a21bdf18559644b532d5bc7ff4cc719970c403e1f2d5dbc0866 (image=quay.io/ceph/ceph:v18, name=lucid_lewin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 11 04:25:10 compute-0 podman[74339]: 2025-10-11 04:25:10.621394011 +0000 UTC m=+0.173384701 container attach 0c92faed03a69a21bdf18559644b532d5bc7ff4cc719970c403e1f2d5dbc0866 (image=quay.io/ceph/ceph:v18, name=lucid_lewin, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:25:11 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0) v1
Oct 11 04:25:11 compute-0 systemd[1]: libpod-0c92faed03a69a21bdf18559644b532d5bc7ff4cc719970c403e1f2d5dbc0866.scope: Deactivated successfully.
Oct 11 04:25:11 compute-0 podman[74381]: 2025-10-11 04:25:11.112316851 +0000 UTC m=+0.041035890 container died 0c92faed03a69a21bdf18559644b532d5bc7ff4cc719970c403e1f2d5dbc0866 (image=quay.io/ceph/ceph:v18, name=lucid_lewin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 11 04:25:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-64217098259c3c20b379fbcebd2653a475193296004bf6039482e60a2ea5c2a5-merged.mount: Deactivated successfully.
Oct 11 04:25:11 compute-0 podman[74381]: 2025-10-11 04:25:11.20469061 +0000 UTC m=+0.133409619 container remove 0c92faed03a69a21bdf18559644b532d5bc7ff4cc719970c403e1f2d5dbc0866 (image=quay.io/ceph/ceph:v18, name=lucid_lewin, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:25:11 compute-0 systemd[1]: libpod-conmon-0c92faed03a69a21bdf18559644b532d5bc7ff4cc719970c403e1f2d5dbc0866.scope: Deactivated successfully.
Oct 11 04:25:11 compute-0 systemd[1]: Reloading.
Oct 11 04:25:11 compute-0 systemd-rc-local-generator[74423]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:25:11 compute-0 systemd-sysv-generator[74427]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:25:11 compute-0 systemd[1]: Reloading.
Oct 11 04:25:11 compute-0 systemd-rc-local-generator[74463]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:25:11 compute-0 systemd-sysv-generator[74470]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:25:11 compute-0 systemd[1]: Starting Ceph mgr.compute-0.phooxi for 166d0489-2ae7-59eb-961c-c1b5cda4b45a...
Oct 11 04:25:12 compute-0 podman[74522]: 2025-10-11 04:25:12.306094912 +0000 UTC m=+0.071170366 container create bf5fc967f47389620ec0fa563ce7b837539cc52438512c371057583b8fa21068 (image=quay.io/ceph/ceph:v18, name=ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mgr-compute-0-phooxi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Oct 11 04:25:12 compute-0 podman[74522]: 2025-10-11 04:25:12.273934231 +0000 UTC m=+0.039009745 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:25:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0209da1ee4336053d2ec8f7e67d2e8b9870b1a575ac26d982d10a4b5323ae56e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0209da1ee4336053d2ec8f7e67d2e8b9870b1a575ac26d982d10a4b5323ae56e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0209da1ee4336053d2ec8f7e67d2e8b9870b1a575ac26d982d10a4b5323ae56e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0209da1ee4336053d2ec8f7e67d2e8b9870b1a575ac26d982d10a4b5323ae56e/merged/var/lib/ceph/mgr/ceph-compute-0.phooxi supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:12 compute-0 podman[74522]: 2025-10-11 04:25:12.405043236 +0000 UTC m=+0.170118740 container init bf5fc967f47389620ec0fa563ce7b837539cc52438512c371057583b8fa21068 (image=quay.io/ceph/ceph:v18, name=ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mgr-compute-0-phooxi, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 11 04:25:12 compute-0 podman[74522]: 2025-10-11 04:25:12.412299069 +0000 UTC m=+0.177374483 container start bf5fc967f47389620ec0fa563ce7b837539cc52438512c371057583b8fa21068 (image=quay.io/ceph/ceph:v18, name=ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mgr-compute-0-phooxi, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 11 04:25:12 compute-0 bash[74522]: bf5fc967f47389620ec0fa563ce7b837539cc52438512c371057583b8fa21068
Oct 11 04:25:12 compute-0 systemd[1]: Started Ceph mgr.compute-0.phooxi for 166d0489-2ae7-59eb-961c-c1b5cda4b45a.
Oct 11 04:25:12 compute-0 ceph-mgr[74542]: set uid:gid to 167:167 (ceph:ceph)
Oct 11 04:25:12 compute-0 ceph-mgr[74542]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Oct 11 04:25:12 compute-0 ceph-mgr[74542]: pidfile_write: ignore empty --pid-file
Oct 11 04:25:12 compute-0 podman[74543]: 2025-10-11 04:25:12.545232535 +0000 UTC m=+0.079766337 container create 22dff0b6fb148031ca58ed5e825150ba5c081ad1e07b8b8b39f34b198d88689d (image=quay.io/ceph/ceph:v18, name=festive_morse, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 11 04:25:12 compute-0 ceph-mgr[74542]: mgr[py] Loading python module 'alerts'
Oct 11 04:25:12 compute-0 systemd[1]: Started libpod-conmon-22dff0b6fb148031ca58ed5e825150ba5c081ad1e07b8b8b39f34b198d88689d.scope.
Oct 11 04:25:12 compute-0 podman[74543]: 2025-10-11 04:25:12.513168106 +0000 UTC m=+0.047701978 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:25:12 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:25:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/401dc02294ba3cebb2a5a73b325f7afac1b90b16bfe89e33207ccd756a5678f3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/401dc02294ba3cebb2a5a73b325f7afac1b90b16bfe89e33207ccd756a5678f3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/401dc02294ba3cebb2a5a73b325f7afac1b90b16bfe89e33207ccd756a5678f3/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:12 compute-0 podman[74543]: 2025-10-11 04:25:12.679034886 +0000 UTC m=+0.213568728 container init 22dff0b6fb148031ca58ed5e825150ba5c081ad1e07b8b8b39f34b198d88689d (image=quay.io/ceph/ceph:v18, name=festive_morse, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:25:12 compute-0 podman[74543]: 2025-10-11 04:25:12.691385002 +0000 UTC m=+0.225918794 container start 22dff0b6fb148031ca58ed5e825150ba5c081ad1e07b8b8b39f34b198d88689d (image=quay.io/ceph/ceph:v18, name=festive_morse, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Oct 11 04:25:12 compute-0 podman[74543]: 2025-10-11 04:25:12.695456346 +0000 UTC m=+0.229990198 container attach 22dff0b6fb148031ca58ed5e825150ba5c081ad1e07b8b8b39f34b198d88689d (image=quay.io/ceph/ceph:v18, name=festive_morse, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:25:12 compute-0 ceph-mgr[74542]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 11 04:25:12 compute-0 ceph-mgr[74542]: mgr[py] Loading python module 'balancer'
Oct 11 04:25:12 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mgr-compute-0-phooxi[74538]: 2025-10-11T04:25:12.864+0000 7fe3c18d6140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 11 04:25:13 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct 11 04:25:13 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3059838566' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 11 04:25:13 compute-0 ceph-mgr[74542]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 11 04:25:13 compute-0 ceph-mgr[74542]: mgr[py] Loading python module 'cephadm'
Oct 11 04:25:13 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mgr-compute-0-phooxi[74538]: 2025-10-11T04:25:13.106+0000 7fe3c18d6140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 11 04:25:13 compute-0 festive_morse[74583]: 
Oct 11 04:25:13 compute-0 festive_morse[74583]: {
Oct 11 04:25:13 compute-0 festive_morse[74583]:     "fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:25:13 compute-0 festive_morse[74583]:     "health": {
Oct 11 04:25:13 compute-0 festive_morse[74583]:         "status": "HEALTH_OK",
Oct 11 04:25:13 compute-0 festive_morse[74583]:         "checks": {},
Oct 11 04:25:13 compute-0 festive_morse[74583]:         "mutes": []
Oct 11 04:25:13 compute-0 festive_morse[74583]:     },
Oct 11 04:25:13 compute-0 festive_morse[74583]:     "election_epoch": 5,
Oct 11 04:25:13 compute-0 festive_morse[74583]:     "quorum": [
Oct 11 04:25:13 compute-0 festive_morse[74583]:         0
Oct 11 04:25:13 compute-0 festive_morse[74583]:     ],
Oct 11 04:25:13 compute-0 festive_morse[74583]:     "quorum_names": [
Oct 11 04:25:13 compute-0 festive_morse[74583]:         "compute-0"
Oct 11 04:25:13 compute-0 festive_morse[74583]:     ],
Oct 11 04:25:13 compute-0 festive_morse[74583]:     "quorum_age": 3,
Oct 11 04:25:13 compute-0 festive_morse[74583]:     "monmap": {
Oct 11 04:25:13 compute-0 festive_morse[74583]:         "epoch": 1,
Oct 11 04:25:13 compute-0 festive_morse[74583]:         "min_mon_release_name": "reef",
Oct 11 04:25:13 compute-0 festive_morse[74583]:         "num_mons": 1
Oct 11 04:25:13 compute-0 festive_morse[74583]:     },
Oct 11 04:25:13 compute-0 festive_morse[74583]:     "osdmap": {
Oct 11 04:25:13 compute-0 festive_morse[74583]:         "epoch": 1,
Oct 11 04:25:13 compute-0 festive_morse[74583]:         "num_osds": 0,
Oct 11 04:25:13 compute-0 festive_morse[74583]:         "num_up_osds": 0,
Oct 11 04:25:13 compute-0 festive_morse[74583]:         "osd_up_since": 0,
Oct 11 04:25:13 compute-0 festive_morse[74583]:         "num_in_osds": 0,
Oct 11 04:25:13 compute-0 festive_morse[74583]:         "osd_in_since": 0,
Oct 11 04:25:13 compute-0 festive_morse[74583]:         "num_remapped_pgs": 0
Oct 11 04:25:13 compute-0 festive_morse[74583]:     },
Oct 11 04:25:13 compute-0 festive_morse[74583]:     "pgmap": {
Oct 11 04:25:13 compute-0 festive_morse[74583]:         "pgs_by_state": [],
Oct 11 04:25:13 compute-0 festive_morse[74583]:         "num_pgs": 0,
Oct 11 04:25:13 compute-0 festive_morse[74583]:         "num_pools": 0,
Oct 11 04:25:13 compute-0 festive_morse[74583]:         "num_objects": 0,
Oct 11 04:25:13 compute-0 festive_morse[74583]:         "data_bytes": 0,
Oct 11 04:25:13 compute-0 festive_morse[74583]:         "bytes_used": 0,
Oct 11 04:25:13 compute-0 festive_morse[74583]:         "bytes_avail": 0,
Oct 11 04:25:13 compute-0 festive_morse[74583]:         "bytes_total": 0
Oct 11 04:25:13 compute-0 festive_morse[74583]:     },
Oct 11 04:25:13 compute-0 festive_morse[74583]:     "fsmap": {
Oct 11 04:25:13 compute-0 festive_morse[74583]:         "epoch": 1,
Oct 11 04:25:13 compute-0 festive_morse[74583]:         "by_rank": [],
Oct 11 04:25:13 compute-0 festive_morse[74583]:         "up:standby": 0
Oct 11 04:25:13 compute-0 festive_morse[74583]:     },
Oct 11 04:25:13 compute-0 festive_morse[74583]:     "mgrmap": {
Oct 11 04:25:13 compute-0 festive_morse[74583]:         "available": false,
Oct 11 04:25:13 compute-0 festive_morse[74583]:         "num_standbys": 0,
Oct 11 04:25:13 compute-0 festive_morse[74583]:         "modules": [
Oct 11 04:25:13 compute-0 festive_morse[74583]:             "iostat",
Oct 11 04:25:13 compute-0 festive_morse[74583]:             "nfs",
Oct 11 04:25:13 compute-0 festive_morse[74583]:             "restful"
Oct 11 04:25:13 compute-0 festive_morse[74583]:         ],
Oct 11 04:25:13 compute-0 festive_morse[74583]:         "services": {}
Oct 11 04:25:13 compute-0 festive_morse[74583]:     },
Oct 11 04:25:13 compute-0 festive_morse[74583]:     "servicemap": {
Oct 11 04:25:13 compute-0 festive_morse[74583]:         "epoch": 1,
Oct 11 04:25:13 compute-0 festive_morse[74583]:         "modified": "2025-10-11T04:25:06.859773+0000",
Oct 11 04:25:13 compute-0 festive_morse[74583]:         "services": {}
Oct 11 04:25:13 compute-0 festive_morse[74583]:     },
Oct 11 04:25:13 compute-0 festive_morse[74583]:     "progress_events": {}
Oct 11 04:25:13 compute-0 festive_morse[74583]: }
Oct 11 04:25:13 compute-0 systemd[1]: libpod-22dff0b6fb148031ca58ed5e825150ba5c081ad1e07b8b8b39f34b198d88689d.scope: Deactivated successfully.
Oct 11 04:25:13 compute-0 podman[74543]: 2025-10-11 04:25:13.140898482 +0000 UTC m=+0.675432274 container died 22dff0b6fb148031ca58ed5e825150ba5c081ad1e07b8b8b39f34b198d88689d (image=quay.io/ceph/ceph:v18, name=festive_morse, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 11 04:25:13 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/3059838566' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 11 04:25:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-401dc02294ba3cebb2a5a73b325f7afac1b90b16bfe89e33207ccd756a5678f3-merged.mount: Deactivated successfully.
Oct 11 04:25:13 compute-0 podman[74543]: 2025-10-11 04:25:13.193812965 +0000 UTC m=+0.728346727 container remove 22dff0b6fb148031ca58ed5e825150ba5c081ad1e07b8b8b39f34b198d88689d (image=quay.io/ceph/ceph:v18, name=festive_morse, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:25:13 compute-0 systemd[1]: libpod-conmon-22dff0b6fb148031ca58ed5e825150ba5c081ad1e07b8b8b39f34b198d88689d.scope: Deactivated successfully.
Oct 11 04:25:15 compute-0 ceph-mgr[74542]: mgr[py] Loading python module 'crash'
Oct 11 04:25:15 compute-0 ceph-mgr[74542]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 11 04:25:15 compute-0 ceph-mgr[74542]: mgr[py] Loading python module 'dashboard'
Oct 11 04:25:15 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mgr-compute-0-phooxi[74538]: 2025-10-11T04:25:15.290+0000 7fe3c18d6140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 11 04:25:15 compute-0 podman[74631]: 2025-10-11 04:25:15.297442078 +0000 UTC m=+0.065534588 container create 8ec3c51d0a623716f5f8ca9cf6fd1d5bada4db7610488657575959e5dd7e1aed (image=quay.io/ceph/ceph:v18, name=angry_pare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:25:15 compute-0 systemd[1]: Started libpod-conmon-8ec3c51d0a623716f5f8ca9cf6fd1d5bada4db7610488657575959e5dd7e1aed.scope.
Oct 11 04:25:15 compute-0 podman[74631]: 2025-10-11 04:25:15.26645243 +0000 UTC m=+0.034544960 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:25:15 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:25:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10372469a7248338466d6b091d29ae681bd15dd95a284b3f87537c597244cbf6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10372469a7248338466d6b091d29ae681bd15dd95a284b3f87537c597244cbf6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10372469a7248338466d6b091d29ae681bd15dd95a284b3f87537c597244cbf6/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:15 compute-0 podman[74631]: 2025-10-11 04:25:15.390394584 +0000 UTC m=+0.158487094 container init 8ec3c51d0a623716f5f8ca9cf6fd1d5bada4db7610488657575959e5dd7e1aed (image=quay.io/ceph/ceph:v18, name=angry_pare, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 11 04:25:15 compute-0 podman[74631]: 2025-10-11 04:25:15.399835748 +0000 UTC m=+0.167928278 container start 8ec3c51d0a623716f5f8ca9cf6fd1d5bada4db7610488657575959e5dd7e1aed (image=quay.io/ceph/ceph:v18, name=angry_pare, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 11 04:25:15 compute-0 podman[74631]: 2025-10-11 04:25:15.404554291 +0000 UTC m=+0.172646821 container attach 8ec3c51d0a623716f5f8ca9cf6fd1d5bada4db7610488657575959e5dd7e1aed (image=quay.io/ceph/ceph:v18, name=angry_pare, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 11 04:25:15 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct 11 04:25:15 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1431407598' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 11 04:25:15 compute-0 angry_pare[74647]: 
Oct 11 04:25:15 compute-0 angry_pare[74647]: {
Oct 11 04:25:15 compute-0 angry_pare[74647]:     "fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:25:15 compute-0 angry_pare[74647]:     "health": {
Oct 11 04:25:15 compute-0 angry_pare[74647]:         "status": "HEALTH_OK",
Oct 11 04:25:15 compute-0 angry_pare[74647]:         "checks": {},
Oct 11 04:25:15 compute-0 angry_pare[74647]:         "mutes": []
Oct 11 04:25:15 compute-0 angry_pare[74647]:     },
Oct 11 04:25:15 compute-0 angry_pare[74647]:     "election_epoch": 5,
Oct 11 04:25:15 compute-0 angry_pare[74647]:     "quorum": [
Oct 11 04:25:15 compute-0 angry_pare[74647]:         0
Oct 11 04:25:15 compute-0 angry_pare[74647]:     ],
Oct 11 04:25:15 compute-0 angry_pare[74647]:     "quorum_names": [
Oct 11 04:25:15 compute-0 angry_pare[74647]:         "compute-0"
Oct 11 04:25:15 compute-0 angry_pare[74647]:     ],
Oct 11 04:25:15 compute-0 angry_pare[74647]:     "quorum_age": 6,
Oct 11 04:25:15 compute-0 angry_pare[74647]:     "monmap": {
Oct 11 04:25:15 compute-0 angry_pare[74647]:         "epoch": 1,
Oct 11 04:25:15 compute-0 angry_pare[74647]:         "min_mon_release_name": "reef",
Oct 11 04:25:15 compute-0 angry_pare[74647]:         "num_mons": 1
Oct 11 04:25:15 compute-0 angry_pare[74647]:     },
Oct 11 04:25:15 compute-0 angry_pare[74647]:     "osdmap": {
Oct 11 04:25:15 compute-0 angry_pare[74647]:         "epoch": 1,
Oct 11 04:25:15 compute-0 angry_pare[74647]:         "num_osds": 0,
Oct 11 04:25:15 compute-0 angry_pare[74647]:         "num_up_osds": 0,
Oct 11 04:25:15 compute-0 angry_pare[74647]:         "osd_up_since": 0,
Oct 11 04:25:15 compute-0 angry_pare[74647]:         "num_in_osds": 0,
Oct 11 04:25:15 compute-0 angry_pare[74647]:         "osd_in_since": 0,
Oct 11 04:25:15 compute-0 angry_pare[74647]:         "num_remapped_pgs": 0
Oct 11 04:25:15 compute-0 angry_pare[74647]:     },
Oct 11 04:25:15 compute-0 angry_pare[74647]:     "pgmap": {
Oct 11 04:25:15 compute-0 angry_pare[74647]:         "pgs_by_state": [],
Oct 11 04:25:15 compute-0 angry_pare[74647]:         "num_pgs": 0,
Oct 11 04:25:15 compute-0 angry_pare[74647]:         "num_pools": 0,
Oct 11 04:25:15 compute-0 angry_pare[74647]:         "num_objects": 0,
Oct 11 04:25:15 compute-0 angry_pare[74647]:         "data_bytes": 0,
Oct 11 04:25:15 compute-0 angry_pare[74647]:         "bytes_used": 0,
Oct 11 04:25:15 compute-0 angry_pare[74647]:         "bytes_avail": 0,
Oct 11 04:25:15 compute-0 angry_pare[74647]:         "bytes_total": 0
Oct 11 04:25:15 compute-0 angry_pare[74647]:     },
Oct 11 04:25:15 compute-0 angry_pare[74647]:     "fsmap": {
Oct 11 04:25:15 compute-0 angry_pare[74647]:         "epoch": 1,
Oct 11 04:25:15 compute-0 angry_pare[74647]:         "by_rank": [],
Oct 11 04:25:15 compute-0 angry_pare[74647]:         "up:standby": 0
Oct 11 04:25:15 compute-0 angry_pare[74647]:     },
Oct 11 04:25:15 compute-0 angry_pare[74647]:     "mgrmap": {
Oct 11 04:25:15 compute-0 angry_pare[74647]:         "available": false,
Oct 11 04:25:15 compute-0 angry_pare[74647]:         "num_standbys": 0,
Oct 11 04:25:15 compute-0 angry_pare[74647]:         "modules": [
Oct 11 04:25:15 compute-0 angry_pare[74647]:             "iostat",
Oct 11 04:25:15 compute-0 angry_pare[74647]:             "nfs",
Oct 11 04:25:15 compute-0 angry_pare[74647]:             "restful"
Oct 11 04:25:15 compute-0 angry_pare[74647]:         ],
Oct 11 04:25:15 compute-0 angry_pare[74647]:         "services": {}
Oct 11 04:25:15 compute-0 angry_pare[74647]:     },
Oct 11 04:25:15 compute-0 angry_pare[74647]:     "servicemap": {
Oct 11 04:25:15 compute-0 angry_pare[74647]:         "epoch": 1,
Oct 11 04:25:15 compute-0 angry_pare[74647]:         "modified": "2025-10-11T04:25:06.859773+0000",
Oct 11 04:25:15 compute-0 angry_pare[74647]:         "services": {}
Oct 11 04:25:15 compute-0 angry_pare[74647]:     },
Oct 11 04:25:15 compute-0 angry_pare[74647]:     "progress_events": {}
Oct 11 04:25:15 compute-0 angry_pare[74647]: }
Oct 11 04:25:15 compute-0 systemd[1]: libpod-8ec3c51d0a623716f5f8ca9cf6fd1d5bada4db7610488657575959e5dd7e1aed.scope: Deactivated successfully.
Oct 11 04:25:15 compute-0 podman[74631]: 2025-10-11 04:25:15.845237113 +0000 UTC m=+0.613329643 container died 8ec3c51d0a623716f5f8ca9cf6fd1d5bada4db7610488657575959e5dd7e1aed (image=quay.io/ceph/ceph:v18, name=angry_pare, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:25:15 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/1431407598' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 11 04:25:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-10372469a7248338466d6b091d29ae681bd15dd95a284b3f87537c597244cbf6-merged.mount: Deactivated successfully.
Oct 11 04:25:15 compute-0 podman[74631]: 2025-10-11 04:25:15.902260131 +0000 UTC m=+0.670352621 container remove 8ec3c51d0a623716f5f8ca9cf6fd1d5bada4db7610488657575959e5dd7e1aed (image=quay.io/ceph/ceph:v18, name=angry_pare, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:25:15 compute-0 systemd[1]: libpod-conmon-8ec3c51d0a623716f5f8ca9cf6fd1d5bada4db7610488657575959e5dd7e1aed.scope: Deactivated successfully.
Oct 11 04:25:16 compute-0 ceph-mgr[74542]: mgr[py] Loading python module 'devicehealth'
Oct 11 04:25:16 compute-0 ceph-mgr[74542]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 11 04:25:16 compute-0 ceph-mgr[74542]: mgr[py] Loading python module 'diskprediction_local'
Oct 11 04:25:16 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mgr-compute-0-phooxi[74538]: 2025-10-11T04:25:16.896+0000 7fe3c18d6140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 11 04:25:17 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mgr-compute-0-phooxi[74538]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct 11 04:25:17 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mgr-compute-0-phooxi[74538]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct 11 04:25:17 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mgr-compute-0-phooxi[74538]:   from numpy import show_config as show_numpy_config
Oct 11 04:25:17 compute-0 ceph-mgr[74542]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 11 04:25:17 compute-0 ceph-mgr[74542]: mgr[py] Loading python module 'influx'
Oct 11 04:25:17 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mgr-compute-0-phooxi[74538]: 2025-10-11T04:25:17.431+0000 7fe3c18d6140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 11 04:25:17 compute-0 ceph-mgr[74542]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 11 04:25:17 compute-0 ceph-mgr[74542]: mgr[py] Loading python module 'insights'
Oct 11 04:25:17 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mgr-compute-0-phooxi[74538]: 2025-10-11T04:25:17.664+0000 7fe3c18d6140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 11 04:25:17 compute-0 ceph-mgr[74542]: mgr[py] Loading python module 'iostat'
Oct 11 04:25:17 compute-0 podman[74687]: 2025-10-11 04:25:17.98889435 +0000 UTC m=+0.060430105 container create e73a31a2ba0e6fae11e49cbc9859b7fc22f5c9b04f91925081c512f12f70208d (image=quay.io/ceph/ceph:v18, name=hungry_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 11 04:25:18 compute-0 systemd[1]: Started libpod-conmon-e73a31a2ba0e6fae11e49cbc9859b7fc22f5c9b04f91925081c512f12f70208d.scope.
Oct 11 04:25:18 compute-0 podman[74687]: 2025-10-11 04:25:17.961297036 +0000 UTC m=+0.032832891 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:25:18 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:25:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4db2731a45eaaea21eb1535926d7970425dea646d49570fb062c2920d4611529/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4db2731a45eaaea21eb1535926d7970425dea646d49570fb062c2920d4611529/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4db2731a45eaaea21eb1535926d7970425dea646d49570fb062c2920d4611529/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:18 compute-0 podman[74687]: 2025-10-11 04:25:18.089686775 +0000 UTC m=+0.161222580 container init e73a31a2ba0e6fae11e49cbc9859b7fc22f5c9b04f91925081c512f12f70208d (image=quay.io/ceph/ceph:v18, name=hungry_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 11 04:25:18 compute-0 podman[74687]: 2025-10-11 04:25:18.099649574 +0000 UTC m=+0.171185359 container start e73a31a2ba0e6fae11e49cbc9859b7fc22f5c9b04f91925081c512f12f70208d (image=quay.io/ceph/ceph:v18, name=hungry_heisenberg, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 11 04:25:18 compute-0 podman[74687]: 2025-10-11 04:25:18.111400973 +0000 UTC m=+0.182936818 container attach e73a31a2ba0e6fae11e49cbc9859b7fc22f5c9b04f91925081c512f12f70208d (image=quay.io/ceph/ceph:v18, name=hungry_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 11 04:25:18 compute-0 ceph-mgr[74542]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 11 04:25:18 compute-0 ceph-mgr[74542]: mgr[py] Loading python module 'k8sevents'
Oct 11 04:25:18 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mgr-compute-0-phooxi[74538]: 2025-10-11T04:25:18.123+0000 7fe3c18d6140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 11 04:25:18 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct 11 04:25:18 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/849609075' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 11 04:25:18 compute-0 hungry_heisenberg[74703]: 
Oct 11 04:25:18 compute-0 hungry_heisenberg[74703]: {
Oct 11 04:25:18 compute-0 hungry_heisenberg[74703]:     "fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:25:18 compute-0 hungry_heisenberg[74703]:     "health": {
Oct 11 04:25:18 compute-0 hungry_heisenberg[74703]:         "status": "HEALTH_OK",
Oct 11 04:25:18 compute-0 hungry_heisenberg[74703]:         "checks": {},
Oct 11 04:25:18 compute-0 hungry_heisenberg[74703]:         "mutes": []
Oct 11 04:25:18 compute-0 hungry_heisenberg[74703]:     },
Oct 11 04:25:18 compute-0 hungry_heisenberg[74703]:     "election_epoch": 5,
Oct 11 04:25:18 compute-0 hungry_heisenberg[74703]:     "quorum": [
Oct 11 04:25:18 compute-0 hungry_heisenberg[74703]:         0
Oct 11 04:25:18 compute-0 hungry_heisenberg[74703]:     ],
Oct 11 04:25:18 compute-0 hungry_heisenberg[74703]:     "quorum_names": [
Oct 11 04:25:18 compute-0 hungry_heisenberg[74703]:         "compute-0"
Oct 11 04:25:18 compute-0 hungry_heisenberg[74703]:     ],
Oct 11 04:25:18 compute-0 hungry_heisenberg[74703]:     "quorum_age": 8,
Oct 11 04:25:18 compute-0 hungry_heisenberg[74703]:     "monmap": {
Oct 11 04:25:18 compute-0 hungry_heisenberg[74703]:         "epoch": 1,
Oct 11 04:25:18 compute-0 hungry_heisenberg[74703]:         "min_mon_release_name": "reef",
Oct 11 04:25:18 compute-0 hungry_heisenberg[74703]:         "num_mons": 1
Oct 11 04:25:18 compute-0 hungry_heisenberg[74703]:     },
Oct 11 04:25:18 compute-0 hungry_heisenberg[74703]:     "osdmap": {
Oct 11 04:25:18 compute-0 hungry_heisenberg[74703]:         "epoch": 1,
Oct 11 04:25:18 compute-0 hungry_heisenberg[74703]:         "num_osds": 0,
Oct 11 04:25:18 compute-0 hungry_heisenberg[74703]:         "num_up_osds": 0,
Oct 11 04:25:18 compute-0 hungry_heisenberg[74703]:         "osd_up_since": 0,
Oct 11 04:25:18 compute-0 hungry_heisenberg[74703]:         "num_in_osds": 0,
Oct 11 04:25:18 compute-0 hungry_heisenberg[74703]:         "osd_in_since": 0,
Oct 11 04:25:18 compute-0 hungry_heisenberg[74703]:         "num_remapped_pgs": 0
Oct 11 04:25:18 compute-0 hungry_heisenberg[74703]:     },
Oct 11 04:25:18 compute-0 hungry_heisenberg[74703]:     "pgmap": {
Oct 11 04:25:18 compute-0 hungry_heisenberg[74703]:         "pgs_by_state": [],
Oct 11 04:25:18 compute-0 hungry_heisenberg[74703]:         "num_pgs": 0,
Oct 11 04:25:18 compute-0 hungry_heisenberg[74703]:         "num_pools": 0,
Oct 11 04:25:18 compute-0 hungry_heisenberg[74703]:         "num_objects": 0,
Oct 11 04:25:18 compute-0 hungry_heisenberg[74703]:         "data_bytes": 0,
Oct 11 04:25:18 compute-0 hungry_heisenberg[74703]:         "bytes_used": 0,
Oct 11 04:25:18 compute-0 hungry_heisenberg[74703]:         "bytes_avail": 0,
Oct 11 04:25:18 compute-0 hungry_heisenberg[74703]:         "bytes_total": 0
Oct 11 04:25:18 compute-0 hungry_heisenberg[74703]:     },
Oct 11 04:25:18 compute-0 hungry_heisenberg[74703]:     "fsmap": {
Oct 11 04:25:18 compute-0 hungry_heisenberg[74703]:         "epoch": 1,
Oct 11 04:25:18 compute-0 hungry_heisenberg[74703]:         "by_rank": [],
Oct 11 04:25:18 compute-0 hungry_heisenberg[74703]:         "up:standby": 0
Oct 11 04:25:18 compute-0 hungry_heisenberg[74703]:     },
Oct 11 04:25:18 compute-0 hungry_heisenberg[74703]:     "mgrmap": {
Oct 11 04:25:18 compute-0 hungry_heisenberg[74703]:         "available": false,
Oct 11 04:25:18 compute-0 hungry_heisenberg[74703]:         "num_standbys": 0,
Oct 11 04:25:18 compute-0 hungry_heisenberg[74703]:         "modules": [
Oct 11 04:25:18 compute-0 hungry_heisenberg[74703]:             "iostat",
Oct 11 04:25:18 compute-0 hungry_heisenberg[74703]:             "nfs",
Oct 11 04:25:18 compute-0 hungry_heisenberg[74703]:             "restful"
Oct 11 04:25:18 compute-0 hungry_heisenberg[74703]:         ],
Oct 11 04:25:18 compute-0 hungry_heisenberg[74703]:         "services": {}
Oct 11 04:25:18 compute-0 hungry_heisenberg[74703]:     },
Oct 11 04:25:18 compute-0 hungry_heisenberg[74703]:     "servicemap": {
Oct 11 04:25:18 compute-0 hungry_heisenberg[74703]:         "epoch": 1,
Oct 11 04:25:18 compute-0 hungry_heisenberg[74703]:         "modified": "2025-10-11T04:25:06.859773+0000",
Oct 11 04:25:18 compute-0 hungry_heisenberg[74703]:         "services": {}
Oct 11 04:25:18 compute-0 hungry_heisenberg[74703]:     },
Oct 11 04:25:18 compute-0 hungry_heisenberg[74703]:     "progress_events": {}
Oct 11 04:25:18 compute-0 hungry_heisenberg[74703]: }
Oct 11 04:25:18 compute-0 systemd[1]: libpod-e73a31a2ba0e6fae11e49cbc9859b7fc22f5c9b04f91925081c512f12f70208d.scope: Deactivated successfully.
Oct 11 04:25:18 compute-0 podman[74687]: 2025-10-11 04:25:18.542801015 +0000 UTC m=+0.614336770 container died e73a31a2ba0e6fae11e49cbc9859b7fc22f5c9b04f91925081c512f12f70208d (image=quay.io/ceph/ceph:v18, name=hungry_heisenberg, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:25:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-4db2731a45eaaea21eb1535926d7970425dea646d49570fb062c2920d4611529-merged.mount: Deactivated successfully.
Oct 11 04:25:18 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/849609075' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 11 04:25:18 compute-0 podman[74687]: 2025-10-11 04:25:18.604579346 +0000 UTC m=+0.676115131 container remove e73a31a2ba0e6fae11e49cbc9859b7fc22f5c9b04f91925081c512f12f70208d (image=quay.io/ceph/ceph:v18, name=hungry_heisenberg, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 11 04:25:18 compute-0 systemd[1]: libpod-conmon-e73a31a2ba0e6fae11e49cbc9859b7fc22f5c9b04f91925081c512f12f70208d.scope: Deactivated successfully.
Oct 11 04:25:19 compute-0 ceph-mgr[74542]: mgr[py] Loading python module 'localpool'
Oct 11 04:25:19 compute-0 ceph-mgr[74542]: mgr[py] Loading python module 'mds_autoscaler'
Oct 11 04:25:20 compute-0 ceph-mgr[74542]: mgr[py] Loading python module 'mirroring'
Oct 11 04:25:20 compute-0 podman[74744]: 2025-10-11 04:25:20.713470408 +0000 UTC m=+0.071938507 container create c0e5ce50af525246bef2f43c348b232c65fc6deb0e3ab58bbb2f149a47545550 (image=quay.io/ceph/ceph:v18, name=elegant_albattani, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:25:20 compute-0 systemd[1]: Started libpod-conmon-c0e5ce50af525246bef2f43c348b232c65fc6deb0e3ab58bbb2f149a47545550.scope.
Oct 11 04:25:20 compute-0 podman[74744]: 2025-10-11 04:25:20.68176781 +0000 UTC m=+0.040235969 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:25:20 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:25:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e4f8fa1d955a5c2ddeb98eec104ecb637fd410039fcba224afd05a0ed8fae11/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e4f8fa1d955a5c2ddeb98eec104ecb637fd410039fcba224afd05a0ed8fae11/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e4f8fa1d955a5c2ddeb98eec104ecb637fd410039fcba224afd05a0ed8fae11/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:20 compute-0 podman[74744]: 2025-10-11 04:25:20.808112271 +0000 UTC m=+0.166580420 container init c0e5ce50af525246bef2f43c348b232c65fc6deb0e3ab58bbb2f149a47545550 (image=quay.io/ceph/ceph:v18, name=elegant_albattani, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:25:20 compute-0 podman[74744]: 2025-10-11 04:25:20.818148602 +0000 UTC m=+0.176616711 container start c0e5ce50af525246bef2f43c348b232c65fc6deb0e3ab58bbb2f149a47545550 (image=quay.io/ceph/ceph:v18, name=elegant_albattani, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:25:20 compute-0 podman[74744]: 2025-10-11 04:25:20.822307319 +0000 UTC m=+0.180775428 container attach c0e5ce50af525246bef2f43c348b232c65fc6deb0e3ab58bbb2f149a47545550 (image=quay.io/ceph/ceph:v18, name=elegant_albattani, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Oct 11 04:25:20 compute-0 ceph-mgr[74542]: mgr[py] Loading python module 'nfs'
Oct 11 04:25:21 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct 11 04:25:21 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2311836924' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 11 04:25:21 compute-0 elegant_albattani[74761]: 
Oct 11 04:25:21 compute-0 elegant_albattani[74761]: {
Oct 11 04:25:21 compute-0 elegant_albattani[74761]:     "fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:25:21 compute-0 elegant_albattani[74761]:     "health": {
Oct 11 04:25:21 compute-0 elegant_albattani[74761]:         "status": "HEALTH_OK",
Oct 11 04:25:21 compute-0 elegant_albattani[74761]:         "checks": {},
Oct 11 04:25:21 compute-0 elegant_albattani[74761]:         "mutes": []
Oct 11 04:25:21 compute-0 elegant_albattani[74761]:     },
Oct 11 04:25:21 compute-0 elegant_albattani[74761]:     "election_epoch": 5,
Oct 11 04:25:21 compute-0 elegant_albattani[74761]:     "quorum": [
Oct 11 04:25:21 compute-0 elegant_albattani[74761]:         0
Oct 11 04:25:21 compute-0 elegant_albattani[74761]:     ],
Oct 11 04:25:21 compute-0 elegant_albattani[74761]:     "quorum_names": [
Oct 11 04:25:21 compute-0 elegant_albattani[74761]:         "compute-0"
Oct 11 04:25:21 compute-0 elegant_albattani[74761]:     ],
Oct 11 04:25:21 compute-0 elegant_albattani[74761]:     "quorum_age": 11,
Oct 11 04:25:21 compute-0 elegant_albattani[74761]:     "monmap": {
Oct 11 04:25:21 compute-0 elegant_albattani[74761]:         "epoch": 1,
Oct 11 04:25:21 compute-0 elegant_albattani[74761]:         "min_mon_release_name": "reef",
Oct 11 04:25:21 compute-0 elegant_albattani[74761]:         "num_mons": 1
Oct 11 04:25:21 compute-0 elegant_albattani[74761]:     },
Oct 11 04:25:21 compute-0 elegant_albattani[74761]:     "osdmap": {
Oct 11 04:25:21 compute-0 elegant_albattani[74761]:         "epoch": 1,
Oct 11 04:25:21 compute-0 elegant_albattani[74761]:         "num_osds": 0,
Oct 11 04:25:21 compute-0 elegant_albattani[74761]:         "num_up_osds": 0,
Oct 11 04:25:21 compute-0 elegant_albattani[74761]:         "osd_up_since": 0,
Oct 11 04:25:21 compute-0 elegant_albattani[74761]:         "num_in_osds": 0,
Oct 11 04:25:21 compute-0 elegant_albattani[74761]:         "osd_in_since": 0,
Oct 11 04:25:21 compute-0 elegant_albattani[74761]:         "num_remapped_pgs": 0
Oct 11 04:25:21 compute-0 elegant_albattani[74761]:     },
Oct 11 04:25:21 compute-0 elegant_albattani[74761]:     "pgmap": {
Oct 11 04:25:21 compute-0 elegant_albattani[74761]:         "pgs_by_state": [],
Oct 11 04:25:21 compute-0 elegant_albattani[74761]:         "num_pgs": 0,
Oct 11 04:25:21 compute-0 elegant_albattani[74761]:         "num_pools": 0,
Oct 11 04:25:21 compute-0 elegant_albattani[74761]:         "num_objects": 0,
Oct 11 04:25:21 compute-0 elegant_albattani[74761]:         "data_bytes": 0,
Oct 11 04:25:21 compute-0 elegant_albattani[74761]:         "bytes_used": 0,
Oct 11 04:25:21 compute-0 elegant_albattani[74761]:         "bytes_avail": 0,
Oct 11 04:25:21 compute-0 elegant_albattani[74761]:         "bytes_total": 0
Oct 11 04:25:21 compute-0 elegant_albattani[74761]:     },
Oct 11 04:25:21 compute-0 elegant_albattani[74761]:     "fsmap": {
Oct 11 04:25:21 compute-0 elegant_albattani[74761]:         "epoch": 1,
Oct 11 04:25:21 compute-0 elegant_albattani[74761]:         "by_rank": [],
Oct 11 04:25:21 compute-0 elegant_albattani[74761]:         "up:standby": 0
Oct 11 04:25:21 compute-0 elegant_albattani[74761]:     },
Oct 11 04:25:21 compute-0 elegant_albattani[74761]:     "mgrmap": {
Oct 11 04:25:21 compute-0 elegant_albattani[74761]:         "available": false,
Oct 11 04:25:21 compute-0 elegant_albattani[74761]:         "num_standbys": 0,
Oct 11 04:25:21 compute-0 elegant_albattani[74761]:         "modules": [
Oct 11 04:25:21 compute-0 elegant_albattani[74761]:             "iostat",
Oct 11 04:25:21 compute-0 elegant_albattani[74761]:             "nfs",
Oct 11 04:25:21 compute-0 elegant_albattani[74761]:             "restful"
Oct 11 04:25:21 compute-0 elegant_albattani[74761]:         ],
Oct 11 04:25:21 compute-0 elegant_albattani[74761]:         "services": {}
Oct 11 04:25:21 compute-0 elegant_albattani[74761]:     },
Oct 11 04:25:21 compute-0 elegant_albattani[74761]:     "servicemap": {
Oct 11 04:25:21 compute-0 elegant_albattani[74761]:         "epoch": 1,
Oct 11 04:25:21 compute-0 elegant_albattani[74761]:         "modified": "2025-10-11T04:25:06.859773+0000",
Oct 11 04:25:21 compute-0 elegant_albattani[74761]:         "services": {}
Oct 11 04:25:21 compute-0 elegant_albattani[74761]:     },
Oct 11 04:25:21 compute-0 elegant_albattani[74761]:     "progress_events": {}
Oct 11 04:25:21 compute-0 elegant_albattani[74761]: }
Oct 11 04:25:21 compute-0 systemd[1]: libpod-c0e5ce50af525246bef2f43c348b232c65fc6deb0e3ab58bbb2f149a47545550.scope: Deactivated successfully.
Oct 11 04:25:21 compute-0 podman[74744]: 2025-10-11 04:25:21.21905158 +0000 UTC m=+0.577519659 container died c0e5ce50af525246bef2f43c348b232c65fc6deb0e3ab58bbb2f149a47545550 (image=quay.io/ceph/ceph:v18, name=elegant_albattani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:25:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-8e4f8fa1d955a5c2ddeb98eec104ecb637fd410039fcba224afd05a0ed8fae11-merged.mount: Deactivated successfully.
Oct 11 04:25:21 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/2311836924' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 11 04:25:21 compute-0 podman[74744]: 2025-10-11 04:25:21.270175453 +0000 UTC m=+0.628643562 container remove c0e5ce50af525246bef2f43c348b232c65fc6deb0e3ab58bbb2f149a47545550 (image=quay.io/ceph/ceph:v18, name=elegant_albattani, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 04:25:21 compute-0 systemd[1]: libpod-conmon-c0e5ce50af525246bef2f43c348b232c65fc6deb0e3ab58bbb2f149a47545550.scope: Deactivated successfully.
Oct 11 04:25:21 compute-0 ceph-mgr[74542]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 11 04:25:21 compute-0 ceph-mgr[74542]: mgr[py] Loading python module 'orchestrator'
Oct 11 04:25:21 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mgr-compute-0-phooxi[74538]: 2025-10-11T04:25:21.540+0000 7fe3c18d6140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 11 04:25:22 compute-0 ceph-mgr[74542]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 11 04:25:22 compute-0 ceph-mgr[74542]: mgr[py] Loading python module 'osd_perf_query'
Oct 11 04:25:22 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mgr-compute-0-phooxi[74538]: 2025-10-11T04:25:22.172+0000 7fe3c18d6140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 11 04:25:22 compute-0 ceph-mgr[74542]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 11 04:25:22 compute-0 ceph-mgr[74542]: mgr[py] Loading python module 'osd_support'
Oct 11 04:25:22 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mgr-compute-0-phooxi[74538]: 2025-10-11T04:25:22.419+0000 7fe3c18d6140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 11 04:25:22 compute-0 ceph-mgr[74542]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 11 04:25:22 compute-0 ceph-mgr[74542]: mgr[py] Loading python module 'pg_autoscaler'
Oct 11 04:25:22 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mgr-compute-0-phooxi[74538]: 2025-10-11T04:25:22.640+0000 7fe3c18d6140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 11 04:25:22 compute-0 ceph-mgr[74542]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 11 04:25:22 compute-0 ceph-mgr[74542]: mgr[py] Loading python module 'progress'
Oct 11 04:25:22 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mgr-compute-0-phooxi[74538]: 2025-10-11T04:25:22.891+0000 7fe3c18d6140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 11 04:25:23 compute-0 ceph-mgr[74542]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 11 04:25:23 compute-0 ceph-mgr[74542]: mgr[py] Loading python module 'prometheus'
Oct 11 04:25:23 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mgr-compute-0-phooxi[74538]: 2025-10-11T04:25:23.112+0000 7fe3c18d6140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 11 04:25:23 compute-0 podman[74800]: 2025-10-11 04:25:23.378381474 +0000 UTC m=+0.077704459 container create 4c39a52270b65879d1f1681a16aea3c513c5aa6a66ee167125bb82d31ec53cee (image=quay.io/ceph/ceph:v18, name=suspicious_hypatia, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 11 04:25:23 compute-0 podman[74800]: 2025-10-11 04:25:23.341527691 +0000 UTC m=+0.040850736 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:25:23 compute-0 systemd[1]: Started libpod-conmon-4c39a52270b65879d1f1681a16aea3c513c5aa6a66ee167125bb82d31ec53cee.scope.
Oct 11 04:25:23 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:25:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31125fc1392a8352d0b076edd003da19920684b3f3fa0845ef5251038e6114b2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31125fc1392a8352d0b076edd003da19920684b3f3fa0845ef5251038e6114b2/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31125fc1392a8352d0b076edd003da19920684b3f3fa0845ef5251038e6114b2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:23 compute-0 podman[74800]: 2025-10-11 04:25:23.48345321 +0000 UTC m=+0.182776195 container init 4c39a52270b65879d1f1681a16aea3c513c5aa6a66ee167125bb82d31ec53cee (image=quay.io/ceph/ceph:v18, name=suspicious_hypatia, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef)
Oct 11 04:25:23 compute-0 podman[74800]: 2025-10-11 04:25:23.496627739 +0000 UTC m=+0.195950694 container start 4c39a52270b65879d1f1681a16aea3c513c5aa6a66ee167125bb82d31ec53cee (image=quay.io/ceph/ceph:v18, name=suspicious_hypatia, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 11 04:25:23 compute-0 podman[74800]: 2025-10-11 04:25:23.499954562 +0000 UTC m=+0.199277547 container attach 4c39a52270b65879d1f1681a16aea3c513c5aa6a66ee167125bb82d31ec53cee (image=quay.io/ceph/ceph:v18, name=suspicious_hypatia, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 11 04:25:23 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct 11 04:25:23 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2154881261' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 11 04:25:23 compute-0 suspicious_hypatia[74817]: 
Oct 11 04:25:23 compute-0 suspicious_hypatia[74817]: {
Oct 11 04:25:23 compute-0 suspicious_hypatia[74817]:     "fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:25:23 compute-0 suspicious_hypatia[74817]:     "health": {
Oct 11 04:25:23 compute-0 suspicious_hypatia[74817]:         "status": "HEALTH_OK",
Oct 11 04:25:23 compute-0 suspicious_hypatia[74817]:         "checks": {},
Oct 11 04:25:23 compute-0 suspicious_hypatia[74817]:         "mutes": []
Oct 11 04:25:23 compute-0 suspicious_hypatia[74817]:     },
Oct 11 04:25:23 compute-0 suspicious_hypatia[74817]:     "election_epoch": 5,
Oct 11 04:25:23 compute-0 suspicious_hypatia[74817]:     "quorum": [
Oct 11 04:25:23 compute-0 suspicious_hypatia[74817]:         0
Oct 11 04:25:23 compute-0 suspicious_hypatia[74817]:     ],
Oct 11 04:25:23 compute-0 suspicious_hypatia[74817]:     "quorum_names": [
Oct 11 04:25:23 compute-0 suspicious_hypatia[74817]:         "compute-0"
Oct 11 04:25:23 compute-0 suspicious_hypatia[74817]:     ],
Oct 11 04:25:23 compute-0 suspicious_hypatia[74817]:     "quorum_age": 14,
Oct 11 04:25:23 compute-0 suspicious_hypatia[74817]:     "monmap": {
Oct 11 04:25:23 compute-0 suspicious_hypatia[74817]:         "epoch": 1,
Oct 11 04:25:23 compute-0 suspicious_hypatia[74817]:         "min_mon_release_name": "reef",
Oct 11 04:25:23 compute-0 suspicious_hypatia[74817]:         "num_mons": 1
Oct 11 04:25:23 compute-0 suspicious_hypatia[74817]:     },
Oct 11 04:25:23 compute-0 suspicious_hypatia[74817]:     "osdmap": {
Oct 11 04:25:23 compute-0 suspicious_hypatia[74817]:         "epoch": 1,
Oct 11 04:25:23 compute-0 suspicious_hypatia[74817]:         "num_osds": 0,
Oct 11 04:25:23 compute-0 suspicious_hypatia[74817]:         "num_up_osds": 0,
Oct 11 04:25:23 compute-0 suspicious_hypatia[74817]:         "osd_up_since": 0,
Oct 11 04:25:23 compute-0 suspicious_hypatia[74817]:         "num_in_osds": 0,
Oct 11 04:25:23 compute-0 suspicious_hypatia[74817]:         "osd_in_since": 0,
Oct 11 04:25:23 compute-0 suspicious_hypatia[74817]:         "num_remapped_pgs": 0
Oct 11 04:25:23 compute-0 suspicious_hypatia[74817]:     },
Oct 11 04:25:23 compute-0 suspicious_hypatia[74817]:     "pgmap": {
Oct 11 04:25:23 compute-0 suspicious_hypatia[74817]:         "pgs_by_state": [],
Oct 11 04:25:23 compute-0 suspicious_hypatia[74817]:         "num_pgs": 0,
Oct 11 04:25:23 compute-0 suspicious_hypatia[74817]:         "num_pools": 0,
Oct 11 04:25:23 compute-0 suspicious_hypatia[74817]:         "num_objects": 0,
Oct 11 04:25:23 compute-0 suspicious_hypatia[74817]:         "data_bytes": 0,
Oct 11 04:25:23 compute-0 suspicious_hypatia[74817]:         "bytes_used": 0,
Oct 11 04:25:23 compute-0 suspicious_hypatia[74817]:         "bytes_avail": 0,
Oct 11 04:25:23 compute-0 suspicious_hypatia[74817]:         "bytes_total": 0
Oct 11 04:25:23 compute-0 suspicious_hypatia[74817]:     },
Oct 11 04:25:23 compute-0 suspicious_hypatia[74817]:     "fsmap": {
Oct 11 04:25:23 compute-0 suspicious_hypatia[74817]:         "epoch": 1,
Oct 11 04:25:23 compute-0 suspicious_hypatia[74817]:         "by_rank": [],
Oct 11 04:25:23 compute-0 suspicious_hypatia[74817]:         "up:standby": 0
Oct 11 04:25:23 compute-0 suspicious_hypatia[74817]:     },
Oct 11 04:25:23 compute-0 suspicious_hypatia[74817]:     "mgrmap": {
Oct 11 04:25:23 compute-0 suspicious_hypatia[74817]:         "available": false,
Oct 11 04:25:23 compute-0 suspicious_hypatia[74817]:         "num_standbys": 0,
Oct 11 04:25:23 compute-0 suspicious_hypatia[74817]:         "modules": [
Oct 11 04:25:23 compute-0 suspicious_hypatia[74817]:             "iostat",
Oct 11 04:25:23 compute-0 suspicious_hypatia[74817]:             "nfs",
Oct 11 04:25:23 compute-0 suspicious_hypatia[74817]:             "restful"
Oct 11 04:25:23 compute-0 suspicious_hypatia[74817]:         ],
Oct 11 04:25:23 compute-0 suspicious_hypatia[74817]:         "services": {}
Oct 11 04:25:23 compute-0 suspicious_hypatia[74817]:     },
Oct 11 04:25:23 compute-0 suspicious_hypatia[74817]:     "servicemap": {
Oct 11 04:25:23 compute-0 suspicious_hypatia[74817]:         "epoch": 1,
Oct 11 04:25:23 compute-0 suspicious_hypatia[74817]:         "modified": "2025-10-11T04:25:06.859773+0000",
Oct 11 04:25:23 compute-0 suspicious_hypatia[74817]:         "services": {}
Oct 11 04:25:23 compute-0 suspicious_hypatia[74817]:     },
Oct 11 04:25:23 compute-0 suspicious_hypatia[74817]:     "progress_events": {}
Oct 11 04:25:23 compute-0 suspicious_hypatia[74817]: }
Oct 11 04:25:23 compute-0 systemd[1]: libpod-4c39a52270b65879d1f1681a16aea3c513c5aa6a66ee167125bb82d31ec53cee.scope: Deactivated successfully.
Oct 11 04:25:23 compute-0 podman[74800]: 2025-10-11 04:25:23.86483429 +0000 UTC m=+0.564157265 container died 4c39a52270b65879d1f1681a16aea3c513c5aa6a66ee167125bb82d31ec53cee (image=quay.io/ceph/ceph:v18, name=suspicious_hypatia, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 11 04:25:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-31125fc1392a8352d0b076edd003da19920684b3f3fa0845ef5251038e6114b2-merged.mount: Deactivated successfully.
Oct 11 04:25:23 compute-0 podman[74800]: 2025-10-11 04:25:23.900051217 +0000 UTC m=+0.599374172 container remove 4c39a52270b65879d1f1681a16aea3c513c5aa6a66ee167125bb82d31ec53cee (image=quay.io/ceph/ceph:v18, name=suspicious_hypatia, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:25:23 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/2154881261' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 11 04:25:23 compute-0 systemd[1]: libpod-conmon-4c39a52270b65879d1f1681a16aea3c513c5aa6a66ee167125bb82d31ec53cee.scope: Deactivated successfully.
Oct 11 04:25:24 compute-0 ceph-mgr[74542]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 11 04:25:24 compute-0 ceph-mgr[74542]: mgr[py] Loading python module 'rbd_support'
Oct 11 04:25:24 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mgr-compute-0-phooxi[74538]: 2025-10-11T04:25:24.071+0000 7fe3c18d6140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 11 04:25:24 compute-0 ceph-mgr[74542]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 11 04:25:24 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mgr-compute-0-phooxi[74538]: 2025-10-11T04:25:24.361+0000 7fe3c18d6140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 11 04:25:24 compute-0 ceph-mgr[74542]: mgr[py] Loading python module 'restful'
Oct 11 04:25:25 compute-0 ceph-mgr[74542]: mgr[py] Loading python module 'rgw'
Oct 11 04:25:25 compute-0 ceph-mgr[74542]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 11 04:25:25 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mgr-compute-0-phooxi[74538]: 2025-10-11T04:25:25.724+0000 7fe3c18d6140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 11 04:25:25 compute-0 ceph-mgr[74542]: mgr[py] Loading python module 'rook'
Oct 11 04:25:26 compute-0 podman[74854]: 2025-10-11 04:25:26.001559561 +0000 UTC m=+0.074304174 container create c828610acd29a42bd3a7707912280e5f64d2eddcc7f97c16b260c60b068f7e5e (image=quay.io/ceph/ceph:v18, name=recursing_dewdney, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:25:26 compute-0 systemd[1]: Started libpod-conmon-c828610acd29a42bd3a7707912280e5f64d2eddcc7f97c16b260c60b068f7e5e.scope.
Oct 11 04:25:26 compute-0 podman[74854]: 2025-10-11 04:25:25.964260155 +0000 UTC m=+0.037004818 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:25:26 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:25:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21b2115c17d09eff3be1816a55ab24d8781cdec0209c2c110831f8e2e8a84d42/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21b2115c17d09eff3be1816a55ab24d8781cdec0209c2c110831f8e2e8a84d42/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21b2115c17d09eff3be1816a55ab24d8781cdec0209c2c110831f8e2e8a84d42/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:26 compute-0 podman[74854]: 2025-10-11 04:25:26.101881553 +0000 UTC m=+0.174626226 container init c828610acd29a42bd3a7707912280e5f64d2eddcc7f97c16b260c60b068f7e5e (image=quay.io/ceph/ceph:v18, name=recursing_dewdney, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 11 04:25:26 compute-0 podman[74854]: 2025-10-11 04:25:26.111922764 +0000 UTC m=+0.184667377 container start c828610acd29a42bd3a7707912280e5f64d2eddcc7f97c16b260c60b068f7e5e (image=quay.io/ceph/ceph:v18, name=recursing_dewdney, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 11 04:25:26 compute-0 podman[74854]: 2025-10-11 04:25:26.11606546 +0000 UTC m=+0.188810083 container attach c828610acd29a42bd3a7707912280e5f64d2eddcc7f97c16b260c60b068f7e5e (image=quay.io/ceph/ceph:v18, name=recursing_dewdney, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True)
Oct 11 04:25:26 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct 11 04:25:26 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1016864442' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 11 04:25:26 compute-0 recursing_dewdney[74870]: 
Oct 11 04:25:26 compute-0 recursing_dewdney[74870]: {
Oct 11 04:25:26 compute-0 recursing_dewdney[74870]:     "fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:25:26 compute-0 recursing_dewdney[74870]:     "health": {
Oct 11 04:25:26 compute-0 recursing_dewdney[74870]:         "status": "HEALTH_OK",
Oct 11 04:25:26 compute-0 recursing_dewdney[74870]:         "checks": {},
Oct 11 04:25:26 compute-0 recursing_dewdney[74870]:         "mutes": []
Oct 11 04:25:26 compute-0 recursing_dewdney[74870]:     },
Oct 11 04:25:26 compute-0 recursing_dewdney[74870]:     "election_epoch": 5,
Oct 11 04:25:26 compute-0 recursing_dewdney[74870]:     "quorum": [
Oct 11 04:25:26 compute-0 recursing_dewdney[74870]:         0
Oct 11 04:25:26 compute-0 recursing_dewdney[74870]:     ],
Oct 11 04:25:26 compute-0 recursing_dewdney[74870]:     "quorum_names": [
Oct 11 04:25:26 compute-0 recursing_dewdney[74870]:         "compute-0"
Oct 11 04:25:26 compute-0 recursing_dewdney[74870]:     ],
Oct 11 04:25:26 compute-0 recursing_dewdney[74870]:     "quorum_age": 16,
Oct 11 04:25:26 compute-0 recursing_dewdney[74870]:     "monmap": {
Oct 11 04:25:26 compute-0 recursing_dewdney[74870]:         "epoch": 1,
Oct 11 04:25:26 compute-0 recursing_dewdney[74870]:         "min_mon_release_name": "reef",
Oct 11 04:25:26 compute-0 recursing_dewdney[74870]:         "num_mons": 1
Oct 11 04:25:26 compute-0 recursing_dewdney[74870]:     },
Oct 11 04:25:26 compute-0 recursing_dewdney[74870]:     "osdmap": {
Oct 11 04:25:26 compute-0 recursing_dewdney[74870]:         "epoch": 1,
Oct 11 04:25:26 compute-0 recursing_dewdney[74870]:         "num_osds": 0,
Oct 11 04:25:26 compute-0 recursing_dewdney[74870]:         "num_up_osds": 0,
Oct 11 04:25:26 compute-0 recursing_dewdney[74870]:         "osd_up_since": 0,
Oct 11 04:25:26 compute-0 recursing_dewdney[74870]:         "num_in_osds": 0,
Oct 11 04:25:26 compute-0 recursing_dewdney[74870]:         "osd_in_since": 0,
Oct 11 04:25:26 compute-0 recursing_dewdney[74870]:         "num_remapped_pgs": 0
Oct 11 04:25:26 compute-0 recursing_dewdney[74870]:     },
Oct 11 04:25:26 compute-0 recursing_dewdney[74870]:     "pgmap": {
Oct 11 04:25:26 compute-0 recursing_dewdney[74870]:         "pgs_by_state": [],
Oct 11 04:25:26 compute-0 recursing_dewdney[74870]:         "num_pgs": 0,
Oct 11 04:25:26 compute-0 recursing_dewdney[74870]:         "num_pools": 0,
Oct 11 04:25:26 compute-0 recursing_dewdney[74870]:         "num_objects": 0,
Oct 11 04:25:26 compute-0 recursing_dewdney[74870]:         "data_bytes": 0,
Oct 11 04:25:26 compute-0 recursing_dewdney[74870]:         "bytes_used": 0,
Oct 11 04:25:26 compute-0 recursing_dewdney[74870]:         "bytes_avail": 0,
Oct 11 04:25:26 compute-0 recursing_dewdney[74870]:         "bytes_total": 0
Oct 11 04:25:26 compute-0 recursing_dewdney[74870]:     },
Oct 11 04:25:26 compute-0 recursing_dewdney[74870]:     "fsmap": {
Oct 11 04:25:26 compute-0 recursing_dewdney[74870]:         "epoch": 1,
Oct 11 04:25:26 compute-0 recursing_dewdney[74870]:         "by_rank": [],
Oct 11 04:25:26 compute-0 recursing_dewdney[74870]:         "up:standby": 0
Oct 11 04:25:26 compute-0 recursing_dewdney[74870]:     },
Oct 11 04:25:26 compute-0 recursing_dewdney[74870]:     "mgrmap": {
Oct 11 04:25:26 compute-0 recursing_dewdney[74870]:         "available": false,
Oct 11 04:25:26 compute-0 recursing_dewdney[74870]:         "num_standbys": 0,
Oct 11 04:25:26 compute-0 recursing_dewdney[74870]:         "modules": [
Oct 11 04:25:26 compute-0 recursing_dewdney[74870]:             "iostat",
Oct 11 04:25:26 compute-0 recursing_dewdney[74870]:             "nfs",
Oct 11 04:25:26 compute-0 recursing_dewdney[74870]:             "restful"
Oct 11 04:25:26 compute-0 recursing_dewdney[74870]:         ],
Oct 11 04:25:26 compute-0 recursing_dewdney[74870]:         "services": {}
Oct 11 04:25:26 compute-0 recursing_dewdney[74870]:     },
Oct 11 04:25:26 compute-0 recursing_dewdney[74870]:     "servicemap": {
Oct 11 04:25:26 compute-0 recursing_dewdney[74870]:         "epoch": 1,
Oct 11 04:25:26 compute-0 recursing_dewdney[74870]:         "modified": "2025-10-11T04:25:06.859773+0000",
Oct 11 04:25:26 compute-0 recursing_dewdney[74870]:         "services": {}
Oct 11 04:25:26 compute-0 recursing_dewdney[74870]:     },
Oct 11 04:25:26 compute-0 recursing_dewdney[74870]:     "progress_events": {}
Oct 11 04:25:26 compute-0 recursing_dewdney[74870]: }
Oct 11 04:25:26 compute-0 systemd[1]: libpod-c828610acd29a42bd3a7707912280e5f64d2eddcc7f97c16b260c60b068f7e5e.scope: Deactivated successfully.
Oct 11 04:25:26 compute-0 podman[74854]: 2025-10-11 04:25:26.552086602 +0000 UTC m=+0.624831245 container died c828610acd29a42bd3a7707912280e5f64d2eddcc7f97c16b260c60b068f7e5e (image=quay.io/ceph/ceph:v18, name=recursing_dewdney, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 11 04:25:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-21b2115c17d09eff3be1816a55ab24d8781cdec0209c2c110831f8e2e8a84d42-merged.mount: Deactivated successfully.
Oct 11 04:25:26 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/1016864442' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 11 04:25:26 compute-0 podman[74854]: 2025-10-11 04:25:26.606152118 +0000 UTC m=+0.678896741 container remove c828610acd29a42bd3a7707912280e5f64d2eddcc7f97c16b260c60b068f7e5e (image=quay.io/ceph/ceph:v18, name=recursing_dewdney, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:25:26 compute-0 systemd[1]: libpod-conmon-c828610acd29a42bd3a7707912280e5f64d2eddcc7f97c16b260c60b068f7e5e.scope: Deactivated successfully.
Oct 11 04:25:27 compute-0 ceph-mgr[74542]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 11 04:25:27 compute-0 ceph-mgr[74542]: mgr[py] Loading python module 'selftest'
Oct 11 04:25:27 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mgr-compute-0-phooxi[74538]: 2025-10-11T04:25:27.765+0000 7fe3c18d6140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 11 04:25:28 compute-0 ceph-mgr[74542]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 11 04:25:28 compute-0 ceph-mgr[74542]: mgr[py] Loading python module 'snap_schedule'
Oct 11 04:25:28 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mgr-compute-0-phooxi[74538]: 2025-10-11T04:25:27.999+0000 7fe3c18d6140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 11 04:25:28 compute-0 ceph-mgr[74542]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 11 04:25:28 compute-0 ceph-mgr[74542]: mgr[py] Loading python module 'stats'
Oct 11 04:25:28 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mgr-compute-0-phooxi[74538]: 2025-10-11T04:25:28.241+0000 7fe3c18d6140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 11 04:25:28 compute-0 ceph-mgr[74542]: mgr[py] Loading python module 'status'
Oct 11 04:25:28 compute-0 podman[74911]: 2025-10-11 04:25:28.703791664 +0000 UTC m=+0.072233225 container create 4f3323628dd12842bfab0522fdc34b9f9dada4598d941584d31e5894f03d35e8 (image=quay.io/ceph/ceph:v18, name=vigilant_lumiere, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:25:28 compute-0 ceph-mgr[74542]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct 11 04:25:28 compute-0 ceph-mgr[74542]: mgr[py] Loading python module 'telegraf'
Oct 11 04:25:28 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mgr-compute-0-phooxi[74538]: 2025-10-11T04:25:28.717+0000 7fe3c18d6140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct 11 04:25:28 compute-0 systemd[1]: Started libpod-conmon-4f3323628dd12842bfab0522fdc34b9f9dada4598d941584d31e5894f03d35e8.scope.
Oct 11 04:25:28 compute-0 podman[74911]: 2025-10-11 04:25:28.672429945 +0000 UTC m=+0.040871546 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:25:28 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:25:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2b500884dd6bbe7e755ce366e7394bfcefbb88be159efc7642c62e771677673/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2b500884dd6bbe7e755ce366e7394bfcefbb88be159efc7642c62e771677673/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2b500884dd6bbe7e755ce366e7394bfcefbb88be159efc7642c62e771677673/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:28 compute-0 podman[74911]: 2025-10-11 04:25:28.811821952 +0000 UTC m=+0.180263543 container init 4f3323628dd12842bfab0522fdc34b9f9dada4598d941584d31e5894f03d35e8 (image=quay.io/ceph/ceph:v18, name=vigilant_lumiere, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 11 04:25:28 compute-0 podman[74911]: 2025-10-11 04:25:28.821598656 +0000 UTC m=+0.190040217 container start 4f3323628dd12842bfab0522fdc34b9f9dada4598d941584d31e5894f03d35e8 (image=quay.io/ceph/ceph:v18, name=vigilant_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 11 04:25:28 compute-0 podman[74911]: 2025-10-11 04:25:28.825959539 +0000 UTC m=+0.194401130 container attach 4f3323628dd12842bfab0522fdc34b9f9dada4598d941584d31e5894f03d35e8 (image=quay.io/ceph/ceph:v18, name=vigilant_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True)
Oct 11 04:25:28 compute-0 ceph-mgr[74542]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 11 04:25:28 compute-0 ceph-mgr[74542]: mgr[py] Loading python module 'telemetry'
Oct 11 04:25:28 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mgr-compute-0-phooxi[74538]: 2025-10-11T04:25:28.971+0000 7fe3c18d6140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 11 04:25:29 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct 11 04:25:29 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/988975819' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 11 04:25:29 compute-0 vigilant_lumiere[74928]: 
Oct 11 04:25:29 compute-0 vigilant_lumiere[74928]: {
Oct 11 04:25:29 compute-0 vigilant_lumiere[74928]:     "fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:25:29 compute-0 vigilant_lumiere[74928]:     "health": {
Oct 11 04:25:29 compute-0 vigilant_lumiere[74928]:         "status": "HEALTH_OK",
Oct 11 04:25:29 compute-0 vigilant_lumiere[74928]:         "checks": {},
Oct 11 04:25:29 compute-0 vigilant_lumiere[74928]:         "mutes": []
Oct 11 04:25:29 compute-0 vigilant_lumiere[74928]:     },
Oct 11 04:25:29 compute-0 vigilant_lumiere[74928]:     "election_epoch": 5,
Oct 11 04:25:29 compute-0 vigilant_lumiere[74928]:     "quorum": [
Oct 11 04:25:29 compute-0 vigilant_lumiere[74928]:         0
Oct 11 04:25:29 compute-0 vigilant_lumiere[74928]:     ],
Oct 11 04:25:29 compute-0 vigilant_lumiere[74928]:     "quorum_names": [
Oct 11 04:25:29 compute-0 vigilant_lumiere[74928]:         "compute-0"
Oct 11 04:25:29 compute-0 vigilant_lumiere[74928]:     ],
Oct 11 04:25:29 compute-0 vigilant_lumiere[74928]:     "quorum_age": 19,
Oct 11 04:25:29 compute-0 vigilant_lumiere[74928]:     "monmap": {
Oct 11 04:25:29 compute-0 vigilant_lumiere[74928]:         "epoch": 1,
Oct 11 04:25:29 compute-0 vigilant_lumiere[74928]:         "min_mon_release_name": "reef",
Oct 11 04:25:29 compute-0 vigilant_lumiere[74928]:         "num_mons": 1
Oct 11 04:25:29 compute-0 vigilant_lumiere[74928]:     },
Oct 11 04:25:29 compute-0 vigilant_lumiere[74928]:     "osdmap": {
Oct 11 04:25:29 compute-0 vigilant_lumiere[74928]:         "epoch": 1,
Oct 11 04:25:29 compute-0 vigilant_lumiere[74928]:         "num_osds": 0,
Oct 11 04:25:29 compute-0 vigilant_lumiere[74928]:         "num_up_osds": 0,
Oct 11 04:25:29 compute-0 vigilant_lumiere[74928]:         "osd_up_since": 0,
Oct 11 04:25:29 compute-0 vigilant_lumiere[74928]:         "num_in_osds": 0,
Oct 11 04:25:29 compute-0 vigilant_lumiere[74928]:         "osd_in_since": 0,
Oct 11 04:25:29 compute-0 vigilant_lumiere[74928]:         "num_remapped_pgs": 0
Oct 11 04:25:29 compute-0 vigilant_lumiere[74928]:     },
Oct 11 04:25:29 compute-0 vigilant_lumiere[74928]:     "pgmap": {
Oct 11 04:25:29 compute-0 vigilant_lumiere[74928]:         "pgs_by_state": [],
Oct 11 04:25:29 compute-0 vigilant_lumiere[74928]:         "num_pgs": 0,
Oct 11 04:25:29 compute-0 vigilant_lumiere[74928]:         "num_pools": 0,
Oct 11 04:25:29 compute-0 vigilant_lumiere[74928]:         "num_objects": 0,
Oct 11 04:25:29 compute-0 vigilant_lumiere[74928]:         "data_bytes": 0,
Oct 11 04:25:29 compute-0 vigilant_lumiere[74928]:         "bytes_used": 0,
Oct 11 04:25:29 compute-0 vigilant_lumiere[74928]:         "bytes_avail": 0,
Oct 11 04:25:29 compute-0 vigilant_lumiere[74928]:         "bytes_total": 0
Oct 11 04:25:29 compute-0 vigilant_lumiere[74928]:     },
Oct 11 04:25:29 compute-0 vigilant_lumiere[74928]:     "fsmap": {
Oct 11 04:25:29 compute-0 vigilant_lumiere[74928]:         "epoch": 1,
Oct 11 04:25:29 compute-0 vigilant_lumiere[74928]:         "by_rank": [],
Oct 11 04:25:29 compute-0 vigilant_lumiere[74928]:         "up:standby": 0
Oct 11 04:25:29 compute-0 vigilant_lumiere[74928]:     },
Oct 11 04:25:29 compute-0 vigilant_lumiere[74928]:     "mgrmap": {
Oct 11 04:25:29 compute-0 vigilant_lumiere[74928]:         "available": false,
Oct 11 04:25:29 compute-0 vigilant_lumiere[74928]:         "num_standbys": 0,
Oct 11 04:25:29 compute-0 vigilant_lumiere[74928]:         "modules": [
Oct 11 04:25:29 compute-0 vigilant_lumiere[74928]:             "iostat",
Oct 11 04:25:29 compute-0 vigilant_lumiere[74928]:             "nfs",
Oct 11 04:25:29 compute-0 vigilant_lumiere[74928]:             "restful"
Oct 11 04:25:29 compute-0 vigilant_lumiere[74928]:         ],
Oct 11 04:25:29 compute-0 vigilant_lumiere[74928]:         "services": {}
Oct 11 04:25:29 compute-0 vigilant_lumiere[74928]:     },
Oct 11 04:25:29 compute-0 vigilant_lumiere[74928]:     "servicemap": {
Oct 11 04:25:29 compute-0 vigilant_lumiere[74928]:         "epoch": 1,
Oct 11 04:25:29 compute-0 vigilant_lumiere[74928]:         "modified": "2025-10-11T04:25:06.859773+0000",
Oct 11 04:25:29 compute-0 vigilant_lumiere[74928]:         "services": {}
Oct 11 04:25:29 compute-0 vigilant_lumiere[74928]:     },
Oct 11 04:25:29 compute-0 vigilant_lumiere[74928]:     "progress_events": {}
Oct 11 04:25:29 compute-0 vigilant_lumiere[74928]: }
Oct 11 04:25:29 compute-0 systemd[1]: libpod-4f3323628dd12842bfab0522fdc34b9f9dada4598d941584d31e5894f03d35e8.scope: Deactivated successfully.
Oct 11 04:25:29 compute-0 podman[74911]: 2025-10-11 04:25:29.25914129 +0000 UTC m=+0.627582821 container died 4f3323628dd12842bfab0522fdc34b9f9dada4598d941584d31e5894f03d35e8 (image=quay.io/ceph/ceph:v18, name=vigilant_lumiere, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:25:29 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/988975819' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 11 04:25:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-e2b500884dd6bbe7e755ce366e7394bfcefbb88be159efc7642c62e771677673-merged.mount: Deactivated successfully.
Oct 11 04:25:29 compute-0 podman[74911]: 2025-10-11 04:25:29.480121814 +0000 UTC m=+0.848563365 container remove 4f3323628dd12842bfab0522fdc34b9f9dada4598d941584d31e5894f03d35e8 (image=quay.io/ceph/ceph:v18, name=vigilant_lumiere, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 11 04:25:29 compute-0 systemd[1]: libpod-conmon-4f3323628dd12842bfab0522fdc34b9f9dada4598d941584d31e5894f03d35e8.scope: Deactivated successfully.
Oct 11 04:25:29 compute-0 ceph-mgr[74542]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 11 04:25:29 compute-0 ceph-mgr[74542]: mgr[py] Loading python module 'test_orchestrator'
Oct 11 04:25:29 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mgr-compute-0-phooxi[74538]: 2025-10-11T04:25:29.583+0000 7fe3c18d6140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 11 04:25:30 compute-0 ceph-mgr[74542]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 11 04:25:30 compute-0 ceph-mgr[74542]: mgr[py] Loading python module 'volumes'
Oct 11 04:25:30 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mgr-compute-0-phooxi[74538]: 2025-10-11T04:25:30.206+0000 7fe3c18d6140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 11 04:25:30 compute-0 ceph-mgr[74542]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 11 04:25:30 compute-0 ceph-mgr[74542]: mgr[py] Loading python module 'zabbix'
Oct 11 04:25:30 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mgr-compute-0-phooxi[74538]: 2025-10-11T04:25:30.869+0000 7fe3c18d6140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 11 04:25:31 compute-0 ceph-mgr[74542]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 11 04:25:31 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mgr-compute-0-phooxi[74538]: 2025-10-11T04:25:31.096+0000 7fe3c18d6140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 11 04:25:31 compute-0 ceph-mgr[74542]: ms_deliver_dispatch: unhandled message 0x560bd15991e0 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Oct 11 04:25:31 compute-0 ceph-mon[74243]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.phooxi
Oct 11 04:25:31 compute-0 ceph-mgr[74542]: mgr handle_mgr_map Activating!
Oct 11 04:25:31 compute-0 ceph-mgr[74542]: mgr handle_mgr_map I am now activating
Oct 11 04:25:31 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.phooxi(active, starting, since 0.0125904s)
Oct 11 04:25:31 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Oct 11 04:25:31 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1225876701' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "mds metadata"}]: dispatch
Oct 11 04:25:31 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).mds e1 all = 1
Oct 11 04:25:31 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Oct 11 04:25:31 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1225876701' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 11 04:25:31 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Oct 11 04:25:31 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1225876701' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "mon metadata"}]: dispatch
Oct 11 04:25:31 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Oct 11 04:25:31 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1225876701' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 11 04:25:31 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.phooxi", "id": "compute-0.phooxi"} v 0) v1
Oct 11 04:25:31 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1225876701' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "mgr metadata", "who": "compute-0.phooxi", "id": "compute-0.phooxi"}]: dispatch
Oct 11 04:25:31 compute-0 ceph-mon[74243]: log_channel(cluster) log [INF] : Manager daemon compute-0.phooxi is now available
Oct 11 04:25:31 compute-0 ceph-mgr[74542]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 11 04:25:31 compute-0 ceph-mgr[74542]: mgr load Constructed class from module: balancer
Oct 11 04:25:31 compute-0 ceph-mgr[74542]: [balancer INFO root] Starting
Oct 11 04:25:31 compute-0 ceph-mgr[74542]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 11 04:25:31 compute-0 ceph-mgr[74542]: mgr load Constructed class from module: crash
Oct 11 04:25:31 compute-0 ceph-mgr[74542]: [balancer INFO root] Optimize plan auto_2025-10-11_04:25:31
Oct 11 04:25:31 compute-0 ceph-mgr[74542]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:25:31 compute-0 ceph-mgr[74542]: [balancer INFO root] do_upmap
Oct 11 04:25:31 compute-0 ceph-mgr[74542]: [balancer INFO root] No pools available
Oct 11 04:25:31 compute-0 ceph-mgr[74542]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 11 04:25:31 compute-0 ceph-mgr[74542]: mgr load Constructed class from module: devicehealth
Oct 11 04:25:31 compute-0 ceph-mgr[74542]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 11 04:25:31 compute-0 ceph-mgr[74542]: mgr load Constructed class from module: iostat
Oct 11 04:25:31 compute-0 ceph-mgr[74542]: [devicehealth INFO root] Starting
Oct 11 04:25:31 compute-0 ceph-mgr[74542]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 11 04:25:31 compute-0 ceph-mgr[74542]: mgr load Constructed class from module: nfs
Oct 11 04:25:31 compute-0 ceph-mgr[74542]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 11 04:25:31 compute-0 ceph-mgr[74542]: mgr load Constructed class from module: orchestrator
Oct 11 04:25:31 compute-0 ceph-mgr[74542]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 11 04:25:31 compute-0 ceph-mgr[74542]: mgr load Constructed class from module: pg_autoscaler
Oct 11 04:25:31 compute-0 ceph-mgr[74542]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 11 04:25:31 compute-0 ceph-mgr[74542]: mgr load Constructed class from module: progress
Oct 11 04:25:31 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:25:31 compute-0 ceph-mgr[74542]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 11 04:25:31 compute-0 ceph-mgr[74542]: [progress INFO root] Loading...
Oct 11 04:25:31 compute-0 ceph-mgr[74542]: [progress INFO root] No stored events to load
Oct 11 04:25:31 compute-0 ceph-mgr[74542]: [progress INFO root] Loaded [] historic events
Oct 11 04:25:31 compute-0 ceph-mgr[74542]: [progress INFO root] Loaded OSDMap, ready.
Oct 11 04:25:31 compute-0 ceph-mgr[74542]: [rbd_support INFO root] recovery thread starting
Oct 11 04:25:31 compute-0 ceph-mgr[74542]: [rbd_support INFO root] starting setup
Oct 11 04:25:31 compute-0 ceph-mgr[74542]: mgr load Constructed class from module: rbd_support
Oct 11 04:25:31 compute-0 ceph-mgr[74542]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 11 04:25:31 compute-0 ceph-mgr[74542]: mgr load Constructed class from module: restful
Oct 11 04:25:31 compute-0 ceph-mgr[74542]: [restful INFO root] server_addr: :: server_port: 8003
Oct 11 04:25:31 compute-0 ceph-mgr[74542]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 11 04:25:31 compute-0 ceph-mgr[74542]: mgr load Constructed class from module: status
Oct 11 04:25:31 compute-0 ceph-mgr[74542]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 11 04:25:31 compute-0 ceph-mgr[74542]: [restful WARNING root] server not running: no certificate configured
Oct 11 04:25:31 compute-0 ceph-mgr[74542]: mgr load Constructed class from module: telemetry
Oct 11 04:25:31 compute-0 ceph-mgr[74542]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 11 04:25:31 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0) v1
Oct 11 04:25:31 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.phooxi/mirror_snapshot_schedule"} v 0) v1
Oct 11 04:25:31 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1225876701' entity='mgr.compute-0.phooxi' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.phooxi/mirror_snapshot_schedule"}]: dispatch
Oct 11 04:25:31 compute-0 ceph-mon[74243]: Activating manager daemon compute-0.phooxi
Oct 11 04:25:31 compute-0 ceph-mon[74243]: mgrmap e2: compute-0.phooxi(active, starting, since 0.0125904s)
Oct 11 04:25:31 compute-0 ceph-mon[74243]: from='mgr.14102 192.168.122.100:0/1225876701' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "mds metadata"}]: dispatch
Oct 11 04:25:31 compute-0 ceph-mon[74243]: from='mgr.14102 192.168.122.100:0/1225876701' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 11 04:25:31 compute-0 ceph-mon[74243]: from='mgr.14102 192.168.122.100:0/1225876701' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "mon metadata"}]: dispatch
Oct 11 04:25:31 compute-0 ceph-mon[74243]: from='mgr.14102 192.168.122.100:0/1225876701' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 11 04:25:31 compute-0 ceph-mon[74243]: from='mgr.14102 192.168.122.100:0/1225876701' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "mgr metadata", "who": "compute-0.phooxi", "id": "compute-0.phooxi"}]: dispatch
Oct 11 04:25:31 compute-0 ceph-mon[74243]: Manager daemon compute-0.phooxi is now available
Oct 11 04:25:31 compute-0 ceph-mgr[74542]: mgr load Constructed class from module: volumes
Oct 11 04:25:31 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1225876701' entity='mgr.compute-0.phooxi' 
Oct 11 04:25:31 compute-0 ceph-mgr[74542]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:25:31 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0) v1
Oct 11 04:25:31 compute-0 ceph-mgr[74542]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Oct 11 04:25:31 compute-0 ceph-mgr[74542]: [rbd_support INFO root] PerfHandler: starting
Oct 11 04:25:31 compute-0 ceph-mgr[74542]: [rbd_support INFO root] TaskHandler: starting
Oct 11 04:25:31 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.phooxi/trash_purge_schedule"} v 0) v1
Oct 11 04:25:31 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1225876701' entity='mgr.compute-0.phooxi' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.phooxi/trash_purge_schedule"}]: dispatch
Oct 11 04:25:31 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1225876701' entity='mgr.compute-0.phooxi' 
Oct 11 04:25:31 compute-0 ceph-mgr[74542]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:25:31 compute-0 ceph-mgr[74542]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Oct 11 04:25:31 compute-0 ceph-mgr[74542]: [rbd_support INFO root] setup complete
Oct 11 04:25:31 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0) v1
Oct 11 04:25:31 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1225876701' entity='mgr.compute-0.phooxi' 
Oct 11 04:25:31 compute-0 podman[75045]: 2025-10-11 04:25:31.539858648 +0000 UTC m=+0.034586190 container create 735e8a962028cd7421ecbeeebd65762117f5b3d947305fe30772d1d8edbf392c (image=quay.io/ceph/ceph:v18, name=funny_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 11 04:25:31 compute-0 systemd[1]: Started libpod-conmon-735e8a962028cd7421ecbeeebd65762117f5b3d947305fe30772d1d8edbf392c.scope.
Oct 11 04:25:31 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:25:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77f9add3d11858873b89882e78987c356634bc474ffe8db75dd49e149846c4b2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77f9add3d11858873b89882e78987c356634bc474ffe8db75dd49e149846c4b2/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77f9add3d11858873b89882e78987c356634bc474ffe8db75dd49e149846c4b2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:31 compute-0 podman[75045]: 2025-10-11 04:25:31.523826989 +0000 UTC m=+0.018554531 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:25:31 compute-0 podman[75045]: 2025-10-11 04:25:31.622558576 +0000 UTC m=+0.117286138 container init 735e8a962028cd7421ecbeeebd65762117f5b3d947305fe30772d1d8edbf392c (image=quay.io/ceph/ceph:v18, name=funny_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 11 04:25:31 compute-0 podman[75045]: 2025-10-11 04:25:31.637150055 +0000 UTC m=+0.131877557 container start 735e8a962028cd7421ecbeeebd65762117f5b3d947305fe30772d1d8edbf392c (image=quay.io/ceph/ceph:v18, name=funny_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef)
Oct 11 04:25:31 compute-0 podman[75045]: 2025-10-11 04:25:31.640457698 +0000 UTC m=+0.135185300 container attach 735e8a962028cd7421ecbeeebd65762117f5b3d947305fe30772d1d8edbf392c (image=quay.io/ceph/ceph:v18, name=funny_hellman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:25:31 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct 11 04:25:31 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3351693859' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 11 04:25:31 compute-0 funny_hellman[75061]: 
Oct 11 04:25:31 compute-0 funny_hellman[75061]: {
Oct 11 04:25:31 compute-0 funny_hellman[75061]:     "fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:25:31 compute-0 funny_hellman[75061]:     "health": {
Oct 11 04:25:31 compute-0 funny_hellman[75061]:         "status": "HEALTH_OK",
Oct 11 04:25:31 compute-0 funny_hellman[75061]:         "checks": {},
Oct 11 04:25:31 compute-0 funny_hellman[75061]:         "mutes": []
Oct 11 04:25:31 compute-0 funny_hellman[75061]:     },
Oct 11 04:25:31 compute-0 funny_hellman[75061]:     "election_epoch": 5,
Oct 11 04:25:31 compute-0 funny_hellman[75061]:     "quorum": [
Oct 11 04:25:31 compute-0 funny_hellman[75061]:         0
Oct 11 04:25:31 compute-0 funny_hellman[75061]:     ],
Oct 11 04:25:31 compute-0 funny_hellman[75061]:     "quorum_names": [
Oct 11 04:25:31 compute-0 funny_hellman[75061]:         "compute-0"
Oct 11 04:25:31 compute-0 funny_hellman[75061]:     ],
Oct 11 04:25:31 compute-0 funny_hellman[75061]:     "quorum_age": 22,
Oct 11 04:25:31 compute-0 funny_hellman[75061]:     "monmap": {
Oct 11 04:25:31 compute-0 funny_hellman[75061]:         "epoch": 1,
Oct 11 04:25:31 compute-0 funny_hellman[75061]:         "min_mon_release_name": "reef",
Oct 11 04:25:31 compute-0 funny_hellman[75061]:         "num_mons": 1
Oct 11 04:25:31 compute-0 funny_hellman[75061]:     },
Oct 11 04:25:31 compute-0 funny_hellman[75061]:     "osdmap": {
Oct 11 04:25:31 compute-0 funny_hellman[75061]:         "epoch": 1,
Oct 11 04:25:31 compute-0 funny_hellman[75061]:         "num_osds": 0,
Oct 11 04:25:31 compute-0 funny_hellman[75061]:         "num_up_osds": 0,
Oct 11 04:25:31 compute-0 funny_hellman[75061]:         "osd_up_since": 0,
Oct 11 04:25:31 compute-0 funny_hellman[75061]:         "num_in_osds": 0,
Oct 11 04:25:31 compute-0 funny_hellman[75061]:         "osd_in_since": 0,
Oct 11 04:25:31 compute-0 funny_hellman[75061]:         "num_remapped_pgs": 0
Oct 11 04:25:31 compute-0 funny_hellman[75061]:     },
Oct 11 04:25:31 compute-0 funny_hellman[75061]:     "pgmap": {
Oct 11 04:25:31 compute-0 funny_hellman[75061]:         "pgs_by_state": [],
Oct 11 04:25:31 compute-0 funny_hellman[75061]:         "num_pgs": 0,
Oct 11 04:25:31 compute-0 funny_hellman[75061]:         "num_pools": 0,
Oct 11 04:25:31 compute-0 funny_hellman[75061]:         "num_objects": 0,
Oct 11 04:25:31 compute-0 funny_hellman[75061]:         "data_bytes": 0,
Oct 11 04:25:31 compute-0 funny_hellman[75061]:         "bytes_used": 0,
Oct 11 04:25:31 compute-0 funny_hellman[75061]:         "bytes_avail": 0,
Oct 11 04:25:31 compute-0 funny_hellman[75061]:         "bytes_total": 0
Oct 11 04:25:31 compute-0 funny_hellman[75061]:     },
Oct 11 04:25:31 compute-0 funny_hellman[75061]:     "fsmap": {
Oct 11 04:25:31 compute-0 funny_hellman[75061]:         "epoch": 1,
Oct 11 04:25:31 compute-0 funny_hellman[75061]:         "by_rank": [],
Oct 11 04:25:31 compute-0 funny_hellman[75061]:         "up:standby": 0
Oct 11 04:25:31 compute-0 funny_hellman[75061]:     },
Oct 11 04:25:31 compute-0 funny_hellman[75061]:     "mgrmap": {
Oct 11 04:25:31 compute-0 funny_hellman[75061]:         "available": false,
Oct 11 04:25:31 compute-0 funny_hellman[75061]:         "num_standbys": 0,
Oct 11 04:25:31 compute-0 funny_hellman[75061]:         "modules": [
Oct 11 04:25:31 compute-0 funny_hellman[75061]:             "iostat",
Oct 11 04:25:31 compute-0 funny_hellman[75061]:             "nfs",
Oct 11 04:25:31 compute-0 funny_hellman[75061]:             "restful"
Oct 11 04:25:31 compute-0 funny_hellman[75061]:         ],
Oct 11 04:25:31 compute-0 funny_hellman[75061]:         "services": {}
Oct 11 04:25:31 compute-0 funny_hellman[75061]:     },
Oct 11 04:25:31 compute-0 funny_hellman[75061]:     "servicemap": {
Oct 11 04:25:31 compute-0 funny_hellman[75061]:         "epoch": 1,
Oct 11 04:25:31 compute-0 funny_hellman[75061]:         "modified": "2025-10-11T04:25:06.859773+0000",
Oct 11 04:25:31 compute-0 funny_hellman[75061]:         "services": {}
Oct 11 04:25:31 compute-0 funny_hellman[75061]:     },
Oct 11 04:25:31 compute-0 funny_hellman[75061]:     "progress_events": {}
Oct 11 04:25:31 compute-0 funny_hellman[75061]: }
Oct 11 04:25:31 compute-0 systemd[1]: libpod-735e8a962028cd7421ecbeeebd65762117f5b3d947305fe30772d1d8edbf392c.scope: Deactivated successfully.
Oct 11 04:25:32 compute-0 podman[75087]: 2025-10-11 04:25:32.056686275 +0000 UTC m=+0.036895505 container died 735e8a962028cd7421ecbeeebd65762117f5b3d947305fe30772d1d8edbf392c (image=quay.io/ceph/ceph:v18, name=funny_hellman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:25:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-77f9add3d11858873b89882e78987c356634bc474ffe8db75dd49e149846c4b2-merged.mount: Deactivated successfully.
Oct 11 04:25:32 compute-0 podman[75087]: 2025-10-11 04:25:32.096292695 +0000 UTC m=+0.076501905 container remove 735e8a962028cd7421ecbeeebd65762117f5b3d947305fe30772d1d8edbf392c (image=quay.io/ceph/ceph:v18, name=funny_hellman, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 11 04:25:32 compute-0 systemd[1]: libpod-conmon-735e8a962028cd7421ecbeeebd65762117f5b3d947305fe30772d1d8edbf392c.scope: Deactivated successfully.
Oct 11 04:25:32 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.phooxi(active, since 1.02292s)
Oct 11 04:25:32 compute-0 ceph-mon[74243]: from='mgr.14102 192.168.122.100:0/1225876701' entity='mgr.compute-0.phooxi' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.phooxi/mirror_snapshot_schedule"}]: dispatch
Oct 11 04:25:32 compute-0 ceph-mon[74243]: from='mgr.14102 192.168.122.100:0/1225876701' entity='mgr.compute-0.phooxi' 
Oct 11 04:25:32 compute-0 ceph-mon[74243]: from='mgr.14102 192.168.122.100:0/1225876701' entity='mgr.compute-0.phooxi' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.phooxi/trash_purge_schedule"}]: dispatch
Oct 11 04:25:32 compute-0 ceph-mon[74243]: from='mgr.14102 192.168.122.100:0/1225876701' entity='mgr.compute-0.phooxi' 
Oct 11 04:25:32 compute-0 ceph-mon[74243]: from='mgr.14102 192.168.122.100:0/1225876701' entity='mgr.compute-0.phooxi' 
Oct 11 04:25:32 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/3351693859' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 11 04:25:32 compute-0 ceph-mon[74243]: mgrmap e3: compute-0.phooxi(active, since 1.02292s)
Oct 11 04:25:33 compute-0 ceph-mgr[74542]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 11 04:25:33 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.phooxi(active, since 2s)
Oct 11 04:25:34 compute-0 podman[75102]: 2025-10-11 04:25:34.158970531 +0000 UTC m=+0.035163657 container create 2d7f61102c9ea930abf19a7e2e8a62eeb71d88c1f675dd7d714050cae197f42e (image=quay.io/ceph/ceph:v18, name=gracious_poincare, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:25:34 compute-0 ceph-mon[74243]: mgrmap e4: compute-0.phooxi(active, since 2s)
Oct 11 04:25:34 compute-0 systemd[1]: Started libpod-conmon-2d7f61102c9ea930abf19a7e2e8a62eeb71d88c1f675dd7d714050cae197f42e.scope.
Oct 11 04:25:34 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:25:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb92607ec444391645fc8c5178338e525d93d413e22fa9692e785d624a548923/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb92607ec444391645fc8c5178338e525d93d413e22fa9692e785d624a548923/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb92607ec444391645fc8c5178338e525d93d413e22fa9692e785d624a548923/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:34 compute-0 podman[75102]: 2025-10-11 04:25:34.214443496 +0000 UTC m=+0.090636632 container init 2d7f61102c9ea930abf19a7e2e8a62eeb71d88c1f675dd7d714050cae197f42e (image=quay.io/ceph/ceph:v18, name=gracious_poincare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 11 04:25:34 compute-0 podman[75102]: 2025-10-11 04:25:34.220061623 +0000 UTC m=+0.096254739 container start 2d7f61102c9ea930abf19a7e2e8a62eeb71d88c1f675dd7d714050cae197f42e (image=quay.io/ceph/ceph:v18, name=gracious_poincare, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Oct 11 04:25:34 compute-0 podman[75102]: 2025-10-11 04:25:34.222772599 +0000 UTC m=+0.098965765 container attach 2d7f61102c9ea930abf19a7e2e8a62eeb71d88c1f675dd7d714050cae197f42e (image=quay.io/ceph/ceph:v18, name=gracious_poincare, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 11 04:25:34 compute-0 podman[75102]: 2025-10-11 04:25:34.143801266 +0000 UTC m=+0.019994402 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:25:34 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct 11 04:25:34 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4118881473' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 11 04:25:34 compute-0 gracious_poincare[75119]: 
Oct 11 04:25:34 compute-0 gracious_poincare[75119]: {
Oct 11 04:25:34 compute-0 gracious_poincare[75119]:     "fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:25:34 compute-0 gracious_poincare[75119]:     "health": {
Oct 11 04:25:34 compute-0 gracious_poincare[75119]:         "status": "HEALTH_OK",
Oct 11 04:25:34 compute-0 gracious_poincare[75119]:         "checks": {},
Oct 11 04:25:34 compute-0 gracious_poincare[75119]:         "mutes": []
Oct 11 04:25:34 compute-0 gracious_poincare[75119]:     },
Oct 11 04:25:34 compute-0 gracious_poincare[75119]:     "election_epoch": 5,
Oct 11 04:25:34 compute-0 gracious_poincare[75119]:     "quorum": [
Oct 11 04:25:34 compute-0 gracious_poincare[75119]:         0
Oct 11 04:25:34 compute-0 gracious_poincare[75119]:     ],
Oct 11 04:25:34 compute-0 gracious_poincare[75119]:     "quorum_names": [
Oct 11 04:25:34 compute-0 gracious_poincare[75119]:         "compute-0"
Oct 11 04:25:34 compute-0 gracious_poincare[75119]:     ],
Oct 11 04:25:34 compute-0 gracious_poincare[75119]:     "quorum_age": 25,
Oct 11 04:25:34 compute-0 gracious_poincare[75119]:     "monmap": {
Oct 11 04:25:34 compute-0 gracious_poincare[75119]:         "epoch": 1,
Oct 11 04:25:34 compute-0 gracious_poincare[75119]:         "min_mon_release_name": "reef",
Oct 11 04:25:34 compute-0 gracious_poincare[75119]:         "num_mons": 1
Oct 11 04:25:34 compute-0 gracious_poincare[75119]:     },
Oct 11 04:25:34 compute-0 gracious_poincare[75119]:     "osdmap": {
Oct 11 04:25:34 compute-0 gracious_poincare[75119]:         "epoch": 1,
Oct 11 04:25:34 compute-0 gracious_poincare[75119]:         "num_osds": 0,
Oct 11 04:25:34 compute-0 gracious_poincare[75119]:         "num_up_osds": 0,
Oct 11 04:25:34 compute-0 gracious_poincare[75119]:         "osd_up_since": 0,
Oct 11 04:25:34 compute-0 gracious_poincare[75119]:         "num_in_osds": 0,
Oct 11 04:25:34 compute-0 gracious_poincare[75119]:         "osd_in_since": 0,
Oct 11 04:25:34 compute-0 gracious_poincare[75119]:         "num_remapped_pgs": 0
Oct 11 04:25:34 compute-0 gracious_poincare[75119]:     },
Oct 11 04:25:34 compute-0 gracious_poincare[75119]:     "pgmap": {
Oct 11 04:25:34 compute-0 gracious_poincare[75119]:         "pgs_by_state": [],
Oct 11 04:25:34 compute-0 gracious_poincare[75119]:         "num_pgs": 0,
Oct 11 04:25:34 compute-0 gracious_poincare[75119]:         "num_pools": 0,
Oct 11 04:25:34 compute-0 gracious_poincare[75119]:         "num_objects": 0,
Oct 11 04:25:34 compute-0 gracious_poincare[75119]:         "data_bytes": 0,
Oct 11 04:25:34 compute-0 gracious_poincare[75119]:         "bytes_used": 0,
Oct 11 04:25:34 compute-0 gracious_poincare[75119]:         "bytes_avail": 0,
Oct 11 04:25:34 compute-0 gracious_poincare[75119]:         "bytes_total": 0
Oct 11 04:25:34 compute-0 gracious_poincare[75119]:     },
Oct 11 04:25:34 compute-0 gracious_poincare[75119]:     "fsmap": {
Oct 11 04:25:34 compute-0 gracious_poincare[75119]:         "epoch": 1,
Oct 11 04:25:34 compute-0 gracious_poincare[75119]:         "by_rank": [],
Oct 11 04:25:34 compute-0 gracious_poincare[75119]:         "up:standby": 0
Oct 11 04:25:34 compute-0 gracious_poincare[75119]:     },
Oct 11 04:25:34 compute-0 gracious_poincare[75119]:     "mgrmap": {
Oct 11 04:25:34 compute-0 gracious_poincare[75119]:         "available": true,
Oct 11 04:25:34 compute-0 gracious_poincare[75119]:         "num_standbys": 0,
Oct 11 04:25:34 compute-0 gracious_poincare[75119]:         "modules": [
Oct 11 04:25:34 compute-0 gracious_poincare[75119]:             "iostat",
Oct 11 04:25:34 compute-0 gracious_poincare[75119]:             "nfs",
Oct 11 04:25:34 compute-0 gracious_poincare[75119]:             "restful"
Oct 11 04:25:34 compute-0 gracious_poincare[75119]:         ],
Oct 11 04:25:34 compute-0 gracious_poincare[75119]:         "services": {}
Oct 11 04:25:34 compute-0 gracious_poincare[75119]:     },
Oct 11 04:25:34 compute-0 gracious_poincare[75119]:     "servicemap": {
Oct 11 04:25:34 compute-0 gracious_poincare[75119]:         "epoch": 1,
Oct 11 04:25:34 compute-0 gracious_poincare[75119]:         "modified": "2025-10-11T04:25:06.859773+0000",
Oct 11 04:25:34 compute-0 gracious_poincare[75119]:         "services": {}
Oct 11 04:25:34 compute-0 gracious_poincare[75119]:     },
Oct 11 04:25:34 compute-0 gracious_poincare[75119]:     "progress_events": {}
Oct 11 04:25:34 compute-0 gracious_poincare[75119]: }
Oct 11 04:25:34 compute-0 systemd[1]: libpod-2d7f61102c9ea930abf19a7e2e8a62eeb71d88c1f675dd7d714050cae197f42e.scope: Deactivated successfully.
Oct 11 04:25:34 compute-0 podman[75102]: 2025-10-11 04:25:34.79541714 +0000 UTC m=+0.671610296 container died 2d7f61102c9ea930abf19a7e2e8a62eeb71d88c1f675dd7d714050cae197f42e (image=quay.io/ceph/ceph:v18, name=gracious_poincare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 11 04:25:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-bb92607ec444391645fc8c5178338e525d93d413e22fa9692e785d624a548923-merged.mount: Deactivated successfully.
Oct 11 04:25:35 compute-0 podman[75102]: 2025-10-11 04:25:35.007370521 +0000 UTC m=+0.883563647 container remove 2d7f61102c9ea930abf19a7e2e8a62eeb71d88c1f675dd7d714050cae197f42e (image=quay.io/ceph/ceph:v18, name=gracious_poincare, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 11 04:25:35 compute-0 systemd[1]: libpod-conmon-2d7f61102c9ea930abf19a7e2e8a62eeb71d88c1f675dd7d714050cae197f42e.scope: Deactivated successfully.
Oct 11 04:25:35 compute-0 podman[75158]: 2025-10-11 04:25:35.071256182 +0000 UTC m=+0.045054994 container create c5724a7247818d300b5cdada75d9d508ee31b72d08b81344a8410308a3508ea6 (image=quay.io/ceph/ceph:v18, name=elegant_kapitsa, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 11 04:25:35 compute-0 systemd[1]: Started libpod-conmon-c5724a7247818d300b5cdada75d9d508ee31b72d08b81344a8410308a3508ea6.scope.
Oct 11 04:25:35 compute-0 ceph-mgr[74542]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 11 04:25:35 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:25:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48871e3fa45dd14b6bacfdf6e00dd26f4d77ce3ae1a530695537fa9b98af3a97/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48871e3fa45dd14b6bacfdf6e00dd26f4d77ce3ae1a530695537fa9b98af3a97/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48871e3fa45dd14b6bacfdf6e00dd26f4d77ce3ae1a530695537fa9b98af3a97/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48871e3fa45dd14b6bacfdf6e00dd26f4d77ce3ae1a530695537fa9b98af3a97/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:35 compute-0 podman[75158]: 2025-10-11 04:25:35.047959289 +0000 UTC m=+0.021758101 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:25:35 compute-0 podman[75158]: 2025-10-11 04:25:35.146491821 +0000 UTC m=+0.120290713 container init c5724a7247818d300b5cdada75d9d508ee31b72d08b81344a8410308a3508ea6 (image=quay.io/ceph/ceph:v18, name=elegant_kapitsa, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:25:35 compute-0 podman[75158]: 2025-10-11 04:25:35.155933996 +0000 UTC m=+0.129732788 container start c5724a7247818d300b5cdada75d9d508ee31b72d08b81344a8410308a3508ea6 (image=quay.io/ceph/ceph:v18, name=elegant_kapitsa, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:25:35 compute-0 podman[75158]: 2025-10-11 04:25:35.16285388 +0000 UTC m=+0.136652782 container attach c5724a7247818d300b5cdada75d9d508ee31b72d08b81344a8410308a3508ea6 (image=quay.io/ceph/ceph:v18, name=elegant_kapitsa, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 11 04:25:35 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/4118881473' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 11 04:25:35 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Oct 11 04:25:35 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/965606818' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct 11 04:25:35 compute-0 systemd[1]: libpod-c5724a7247818d300b5cdada75d9d508ee31b72d08b81344a8410308a3508ea6.scope: Deactivated successfully.
Oct 11 04:25:35 compute-0 podman[75158]: 2025-10-11 04:25:35.664282525 +0000 UTC m=+0.638081327 container died c5724a7247818d300b5cdada75d9d508ee31b72d08b81344a8410308a3508ea6 (image=quay.io/ceph/ceph:v18, name=elegant_kapitsa, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:25:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-48871e3fa45dd14b6bacfdf6e00dd26f4d77ce3ae1a530695537fa9b98af3a97-merged.mount: Deactivated successfully.
Oct 11 04:25:35 compute-0 podman[75158]: 2025-10-11 04:25:35.721309313 +0000 UTC m=+0.695108115 container remove c5724a7247818d300b5cdada75d9d508ee31b72d08b81344a8410308a3508ea6 (image=quay.io/ceph/ceph:v18, name=elegant_kapitsa, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 11 04:25:35 compute-0 systemd[1]: libpod-conmon-c5724a7247818d300b5cdada75d9d508ee31b72d08b81344a8410308a3508ea6.scope: Deactivated successfully.
Oct 11 04:25:35 compute-0 podman[75213]: 2025-10-11 04:25:35.794791363 +0000 UTC m=+0.052805051 container create 902857f52e00da0ebfc92b44d9e286495f9de8ce1f52cc867dce674add30a515 (image=quay.io/ceph/ceph:v18, name=infallible_mendel, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 11 04:25:35 compute-0 systemd[1]: Started libpod-conmon-902857f52e00da0ebfc92b44d9e286495f9de8ce1f52cc867dce674add30a515.scope.
Oct 11 04:25:35 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:25:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/214a327b52fba034bc6851b1d8e24a651028406a57be91240b0c1c01bdaabd40/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/214a327b52fba034bc6851b1d8e24a651028406a57be91240b0c1c01bdaabd40/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/214a327b52fba034bc6851b1d8e24a651028406a57be91240b0c1c01bdaabd40/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:35 compute-0 podman[75213]: 2025-10-11 04:25:35.860732161 +0000 UTC m=+0.118745869 container init 902857f52e00da0ebfc92b44d9e286495f9de8ce1f52cc867dce674add30a515 (image=quay.io/ceph/ceph:v18, name=infallible_mendel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:25:35 compute-0 podman[75213]: 2025-10-11 04:25:35.766887751 +0000 UTC m=+0.024901479 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:25:35 compute-0 podman[75213]: 2025-10-11 04:25:35.865162615 +0000 UTC m=+0.123176313 container start 902857f52e00da0ebfc92b44d9e286495f9de8ce1f52cc867dce674add30a515 (image=quay.io/ceph/ceph:v18, name=infallible_mendel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 11 04:25:35 compute-0 podman[75213]: 2025-10-11 04:25:35.869855887 +0000 UTC m=+0.127869595 container attach 902857f52e00da0ebfc92b44d9e286495f9de8ce1f52cc867dce674add30a515 (image=quay.io/ceph/ceph:v18, name=infallible_mendel, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:25:36 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/965606818' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct 11 04:25:36 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0) v1
Oct 11 04:25:36 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4121513595' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Oct 11 04:25:37 compute-0 ceph-mgr[74542]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 11 04:25:37 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/4121513595' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Oct 11 04:25:37 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4121513595' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Oct 11 04:25:37 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.phooxi(active, since 6s)
Oct 11 04:25:37 compute-0 ceph-mgr[74542]: mgr handle_mgr_map respawning because set of enabled modules changed!
Oct 11 04:25:37 compute-0 ceph-mgr[74542]: mgr respawn  e: '/usr/bin/ceph-mgr'
Oct 11 04:25:37 compute-0 ceph-mgr[74542]: mgr respawn  0: '/usr/bin/ceph-mgr'
Oct 11 04:25:37 compute-0 ceph-mgr[74542]: mgr respawn  1: '-n'
Oct 11 04:25:37 compute-0 ceph-mgr[74542]: mgr respawn  2: 'mgr.compute-0.phooxi'
Oct 11 04:25:37 compute-0 ceph-mgr[74542]: mgr respawn  3: '-f'
Oct 11 04:25:37 compute-0 ceph-mgr[74542]: mgr respawn  4: '--setuser'
Oct 11 04:25:37 compute-0 systemd[1]: libpod-902857f52e00da0ebfc92b44d9e286495f9de8ce1f52cc867dce674add30a515.scope: Deactivated successfully.
Oct 11 04:25:37 compute-0 podman[75256]: 2025-10-11 04:25:37.285964884 +0000 UTC m=+0.025146375 container died 902857f52e00da0ebfc92b44d9e286495f9de8ce1f52cc867dce674add30a515 (image=quay.io/ceph/ceph:v18, name=infallible_mendel, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:25:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-214a327b52fba034bc6851b1d8e24a651028406a57be91240b0c1c01bdaabd40-merged.mount: Deactivated successfully.
Oct 11 04:25:37 compute-0 podman[75256]: 2025-10-11 04:25:37.332653949 +0000 UTC m=+0.071835410 container remove 902857f52e00da0ebfc92b44d9e286495f9de8ce1f52cc867dce674add30a515 (image=quay.io/ceph/ceph:v18, name=infallible_mendel, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:25:37 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mgr-compute-0-phooxi[74538]: ignoring --setuser ceph since I am not root
Oct 11 04:25:37 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mgr-compute-0-phooxi[74538]: ignoring --setgroup ceph since I am not root
Oct 11 04:25:37 compute-0 systemd[1]: libpod-conmon-902857f52e00da0ebfc92b44d9e286495f9de8ce1f52cc867dce674add30a515.scope: Deactivated successfully.
Oct 11 04:25:37 compute-0 ceph-mgr[74542]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Oct 11 04:25:37 compute-0 ceph-mgr[74542]: pidfile_write: ignore empty --pid-file
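Enabling the cephadm module (the 'mgr module enable' command dispatched and finished a few lines above) changes the set of enabled mgr modules, so the active mgr respawns itself and reloads every python module, which is why the 'Loading python module ...' messages start again below. A rough way to reproduce that step by hand is sketched here, assuming the admin keyring is on the host; it is an illustration, not what cephadm bootstrap executes verbatim, and the JSON layout of 'mgr module ls' may differ between releases.

    import json
    import subprocess

    # Enable the orchestrator backend module; the active mgr respawns on
    # its own once the enabled-module set changes (as in the log above).
    subprocess.run(["ceph", "mgr", "module", "enable", "cephadm"], check=True)

    # Confirm it is listed as enabled ("enabled_modules" is the key name
    # as reported by Reef; adjust for other releases if needed).
    mods = json.loads(
        subprocess.check_output(["ceph", "mgr", "module", "ls", "--format", "json"])
    )
    print("cephadm enabled:", "cephadm" in mods.get("enabled_modules", []))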
Oct 11 04:25:37 compute-0 podman[75294]: 2025-10-11 04:25:37.415259708 +0000 UTC m=+0.052621706 container create 45f604a8544c4c5a1450b58bb77328c57b7530e12ae6461997662efede71c9d7 (image=quay.io/ceph/ceph:v18, name=trusting_kapitsa, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 11 04:25:37 compute-0 systemd[1]: Started libpod-conmon-45f604a8544c4c5a1450b58bb77328c57b7530e12ae6461997662efede71c9d7.scope.
Oct 11 04:25:37 compute-0 ceph-mgr[74542]: mgr[py] Loading python module 'alerts'
Oct 11 04:25:37 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:25:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd4bc2b9bd2cdf32d12b34ab514d125862a9dae164790acb25199bbd69fa39ce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd4bc2b9bd2cdf32d12b34ab514d125862a9dae164790acb25199bbd69fa39ce/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd4bc2b9bd2cdf32d12b34ab514d125862a9dae164790acb25199bbd69fa39ce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:37 compute-0 podman[75294]: 2025-10-11 04:25:37.389522372 +0000 UTC m=+0.026884380 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:25:37 compute-0 podman[75294]: 2025-10-11 04:25:37.495712927 +0000 UTC m=+0.133075005 container init 45f604a8544c4c5a1450b58bb77328c57b7530e12ae6461997662efede71c9d7 (image=quay.io/ceph/ceph:v18, name=trusting_kapitsa, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Oct 11 04:25:37 compute-0 podman[75294]: 2025-10-11 04:25:37.505636276 +0000 UTC m=+0.142998264 container start 45f604a8544c4c5a1450b58bb77328c57b7530e12ae6461997662efede71c9d7 (image=quay.io/ceph/ceph:v18, name=trusting_kapitsa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 11 04:25:37 compute-0 podman[75294]: 2025-10-11 04:25:37.513062042 +0000 UTC m=+0.150424110 container attach 45f604a8544c4c5a1450b58bb77328c57b7530e12ae6461997662efede71c9d7 (image=quay.io/ceph/ceph:v18, name=trusting_kapitsa, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 04:25:37 compute-0 ceph-mgr[74542]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 11 04:25:37 compute-0 ceph-mgr[74542]: mgr[py] Loading python module 'balancer'
Oct 11 04:25:37 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mgr-compute-0-phooxi[74538]: 2025-10-11T04:25:37.792+0000 7fe65306d140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 11 04:25:38 compute-0 ceph-mgr[74542]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 11 04:25:38 compute-0 ceph-mgr[74542]: mgr[py] Loading python module 'cephadm'
Oct 11 04:25:38 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mgr-compute-0-phooxi[74538]: 2025-10-11T04:25:38.054+0000 7fe65306d140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 11 04:25:38 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Oct 11 04:25:38 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2914383139' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 11 04:25:38 compute-0 trusting_kapitsa[75313]: {
Oct 11 04:25:38 compute-0 trusting_kapitsa[75313]:     "epoch": 5,
Oct 11 04:25:38 compute-0 trusting_kapitsa[75313]:     "available": true,
Oct 11 04:25:38 compute-0 trusting_kapitsa[75313]:     "active_name": "compute-0.phooxi",
Oct 11 04:25:38 compute-0 trusting_kapitsa[75313]:     "num_standby": 0
Oct 11 04:25:38 compute-0 trusting_kapitsa[75313]: }
Oct 11 04:25:38 compute-0 systemd[1]: libpod-45f604a8544c4c5a1450b58bb77328c57b7530e12ae6461997662efede71c9d7.scope: Deactivated successfully.
Oct 11 04:25:38 compute-0 podman[75294]: 2025-10-11 04:25:38.079941033 +0000 UTC m=+0.717302991 container died 45f604a8544c4c5a1450b58bb77328c57b7530e12ae6461997662efede71c9d7 (image=quay.io/ceph/ceph:v18, name=trusting_kapitsa, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:25:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-fd4bc2b9bd2cdf32d12b34ab514d125862a9dae164790acb25199bbd69fa39ce-merged.mount: Deactivated successfully.
Oct 11 04:25:38 compute-0 podman[75294]: 2025-10-11 04:25:38.121531699 +0000 UTC m=+0.758893657 container remove 45f604a8544c4c5a1450b58bb77328c57b7530e12ae6461997662efede71c9d7 (image=quay.io/ceph/ceph:v18, name=trusting_kapitsa, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:25:38 compute-0 systemd[1]: libpod-conmon-45f604a8544c4c5a1450b58bb77328c57b7530e12ae6461997662efede71c9d7.scope: Deactivated successfully.
Oct 11 04:25:38 compute-0 podman[75351]: 2025-10-11 04:25:38.194817691 +0000 UTC m=+0.045520932 container create 08023db5bef41cbdf309c8421d18d4d25b7cf53cab9edd73e88bef7c655e32ca (image=quay.io/ceph/ceph:v18, name=nostalgic_galileo, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 11 04:25:38 compute-0 systemd[1]: Started libpod-conmon-08023db5bef41cbdf309c8421d18d4d25b7cf53cab9edd73e88bef7c655e32ca.scope.
Oct 11 04:25:38 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/4121513595' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Oct 11 04:25:38 compute-0 ceph-mon[74243]: mgrmap e5: compute-0.phooxi(active, since 6s)
Oct 11 04:25:38 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/2914383139' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 11 04:25:38 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:25:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a07b1c74f4add87e4191bab4e3c01da7928f50b6560104f165b480c36a989eae/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a07b1c74f4add87e4191bab4e3c01da7928f50b6560104f165b480c36a989eae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a07b1c74f4add87e4191bab4e3c01da7928f50b6560104f165b480c36a989eae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:38 compute-0 podman[75351]: 2025-10-11 04:25:38.267870676 +0000 UTC m=+0.118573937 container init 08023db5bef41cbdf309c8421d18d4d25b7cf53cab9edd73e88bef7c655e32ca (image=quay.io/ceph/ceph:v18, name=nostalgic_galileo, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 11 04:25:38 compute-0 podman[75351]: 2025-10-11 04:25:38.179072406 +0000 UTC m=+0.029775667 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:25:38 compute-0 podman[75351]: 2025-10-11 04:25:38.277758733 +0000 UTC m=+0.128462014 container start 08023db5bef41cbdf309c8421d18d4d25b7cf53cab9edd73e88bef7c655e32ca (image=quay.io/ceph/ceph:v18, name=nostalgic_galileo, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 11 04:25:38 compute-0 podman[75351]: 2025-10-11 04:25:38.281863535 +0000 UTC m=+0.132566816 container attach 08023db5bef41cbdf309c8421d18d4d25b7cf53cab9edd73e88bef7c655e32ca (image=quay.io/ceph/ceph:v18, name=nostalgic_galileo, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 11 04:25:39 compute-0 ceph-mgr[74542]: mgr[py] Loading python module 'crash'
Oct 11 04:25:40 compute-0 ceph-mgr[74542]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 11 04:25:40 compute-0 ceph-mgr[74542]: mgr[py] Loading python module 'dashboard'
Oct 11 04:25:40 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mgr-compute-0-phooxi[74538]: 2025-10-11T04:25:40.134+0000 7fe65306d140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 11 04:25:41 compute-0 ceph-mgr[74542]: mgr[py] Loading python module 'devicehealth'
Oct 11 04:25:41 compute-0 ceph-mgr[74542]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 11 04:25:41 compute-0 ceph-mgr[74542]: mgr[py] Loading python module 'diskprediction_local'
Oct 11 04:25:41 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mgr-compute-0-phooxi[74538]: 2025-10-11T04:25:41.795+0000 7fe65306d140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 11 04:25:42 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mgr-compute-0-phooxi[74538]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct 11 04:25:42 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mgr-compute-0-phooxi[74538]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct 11 04:25:42 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mgr-compute-0-phooxi[74538]:   from numpy import show_config as show_numpy_config
Oct 11 04:25:42 compute-0 ceph-mgr[74542]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 11 04:25:42 compute-0 ceph-mgr[74542]: mgr[py] Loading python module 'influx'
Oct 11 04:25:42 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mgr-compute-0-phooxi[74538]: 2025-10-11T04:25:42.297+0000 7fe65306d140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 11 04:25:42 compute-0 ceph-mgr[74542]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 11 04:25:42 compute-0 ceph-mgr[74542]: mgr[py] Loading python module 'insights'
Oct 11 04:25:42 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mgr-compute-0-phooxi[74538]: 2025-10-11T04:25:42.522+0000 7fe65306d140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 11 04:25:42 compute-0 ceph-mgr[74542]: mgr[py] Loading python module 'iostat'
Oct 11 04:25:42 compute-0 ceph-mgr[74542]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 11 04:25:42 compute-0 ceph-mgr[74542]: mgr[py] Loading python module 'k8sevents'
Oct 11 04:25:42 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mgr-compute-0-phooxi[74538]: 2025-10-11T04:25:42.962+0000 7fe65306d140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 11 04:25:44 compute-0 ceph-mgr[74542]: mgr[py] Loading python module 'localpool'
Oct 11 04:25:44 compute-0 ceph-mgr[74542]: mgr[py] Loading python module 'mds_autoscaler'
Oct 11 04:25:45 compute-0 ceph-mgr[74542]: mgr[py] Loading python module 'mirroring'
Oct 11 04:25:45 compute-0 ceph-mgr[74542]: mgr[py] Loading python module 'nfs'
Oct 11 04:25:46 compute-0 ceph-mgr[74542]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 11 04:25:46 compute-0 ceph-mgr[74542]: mgr[py] Loading python module 'orchestrator'
Oct 11 04:25:46 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mgr-compute-0-phooxi[74538]: 2025-10-11T04:25:46.383+0000 7fe65306d140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 11 04:25:47 compute-0 ceph-mgr[74542]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 11 04:25:47 compute-0 ceph-mgr[74542]: mgr[py] Loading python module 'osd_perf_query'
Oct 11 04:25:47 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mgr-compute-0-phooxi[74538]: 2025-10-11T04:25:47.028+0000 7fe65306d140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 11 04:25:47 compute-0 ceph-mgr[74542]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 11 04:25:47 compute-0 ceph-mgr[74542]: mgr[py] Loading python module 'osd_support'
Oct 11 04:25:47 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mgr-compute-0-phooxi[74538]: 2025-10-11T04:25:47.285+0000 7fe65306d140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 11 04:25:47 compute-0 ceph-mgr[74542]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 11 04:25:47 compute-0 ceph-mgr[74542]: mgr[py] Loading python module 'pg_autoscaler'
Oct 11 04:25:47 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mgr-compute-0-phooxi[74538]: 2025-10-11T04:25:47.528+0000 7fe65306d140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 11 04:25:47 compute-0 ceph-mgr[74542]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 11 04:25:47 compute-0 ceph-mgr[74542]: mgr[py] Loading python module 'progress'
Oct 11 04:25:47 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mgr-compute-0-phooxi[74538]: 2025-10-11T04:25:47.795+0000 7fe65306d140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 11 04:25:48 compute-0 ceph-mgr[74542]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 11 04:25:48 compute-0 ceph-mgr[74542]: mgr[py] Loading python module 'prometheus'
Oct 11 04:25:48 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mgr-compute-0-phooxi[74538]: 2025-10-11T04:25:48.025+0000 7fe65306d140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 11 04:25:48 compute-0 ceph-mgr[74542]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 11 04:25:48 compute-0 ceph-mgr[74542]: mgr[py] Loading python module 'rbd_support'
Oct 11 04:25:48 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mgr-compute-0-phooxi[74538]: 2025-10-11T04:25:48.960+0000 7fe65306d140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 11 04:25:49 compute-0 ceph-mgr[74542]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 11 04:25:49 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mgr-compute-0-phooxi[74538]: 2025-10-11T04:25:49.236+0000 7fe65306d140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 11 04:25:49 compute-0 ceph-mgr[74542]: mgr[py] Loading python module 'restful'
Oct 11 04:25:49 compute-0 ceph-mgr[74542]: mgr[py] Loading python module 'rgw'
Oct 11 04:25:50 compute-0 ceph-mgr[74542]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 11 04:25:50 compute-0 ceph-mgr[74542]: mgr[py] Loading python module 'rook'
Oct 11 04:25:50 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mgr-compute-0-phooxi[74538]: 2025-10-11T04:25:50.580+0000 7fe65306d140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 11 04:25:52 compute-0 ceph-mgr[74542]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 11 04:25:52 compute-0 ceph-mgr[74542]: mgr[py] Loading python module 'selftest'
Oct 11 04:25:52 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mgr-compute-0-phooxi[74538]: 2025-10-11T04:25:52.586+0000 7fe65306d140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 11 04:25:52 compute-0 ceph-mgr[74542]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 11 04:25:52 compute-0 ceph-mgr[74542]: mgr[py] Loading python module 'snap_schedule'
Oct 11 04:25:52 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mgr-compute-0-phooxi[74538]: 2025-10-11T04:25:52.827+0000 7fe65306d140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 11 04:25:53 compute-0 ceph-mgr[74542]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 11 04:25:53 compute-0 ceph-mgr[74542]: mgr[py] Loading python module 'stats'
Oct 11 04:25:53 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mgr-compute-0-phooxi[74538]: 2025-10-11T04:25:53.075+0000 7fe65306d140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 11 04:25:53 compute-0 ceph-mgr[74542]: mgr[py] Loading python module 'status'
Oct 11 04:25:53 compute-0 ceph-mgr[74542]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct 11 04:25:53 compute-0 ceph-mgr[74542]: mgr[py] Loading python module 'telegraf'
Oct 11 04:25:53 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mgr-compute-0-phooxi[74538]: 2025-10-11T04:25:53.574+0000 7fe65306d140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct 11 04:25:53 compute-0 ceph-mgr[74542]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 11 04:25:53 compute-0 ceph-mgr[74542]: mgr[py] Loading python module 'telemetry'
Oct 11 04:25:53 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mgr-compute-0-phooxi[74538]: 2025-10-11T04:25:53.807+0000 7fe65306d140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 11 04:25:54 compute-0 ceph-mgr[74542]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 11 04:25:54 compute-0 ceph-mgr[74542]: mgr[py] Loading python module 'test_orchestrator'
Oct 11 04:25:54 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mgr-compute-0-phooxi[74538]: 2025-10-11T04:25:54.413+0000 7fe65306d140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 11 04:25:55 compute-0 ceph-mgr[74542]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 11 04:25:55 compute-0 ceph-mgr[74542]: mgr[py] Loading python module 'volumes'
Oct 11 04:25:55 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mgr-compute-0-phooxi[74538]: 2025-10-11T04:25:55.077+0000 7fe65306d140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 11 04:25:55 compute-0 ceph-mgr[74542]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 11 04:25:55 compute-0 ceph-mgr[74542]: mgr[py] Loading python module 'zabbix'
Oct 11 04:25:55 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mgr-compute-0-phooxi[74538]: 2025-10-11T04:25:55.767+0000 7fe65306d140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 11 04:25:55 compute-0 ceph-mgr[74542]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 11 04:25:55 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mgr-compute-0-phooxi[74538]: 2025-10-11T04:25:55.995+0000 7fe65306d140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 11 04:25:55 compute-0 ceph-mon[74243]: log_channel(cluster) log [INF] : Active manager daemon compute-0.phooxi restarted
Oct 11 04:25:55 compute-0 ceph-mgr[74542]: ms_deliver_dispatch: unhandled message 0x56131e10f1e0 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Oct 11 04:25:55 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Oct 11 04:25:55 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 11 04:25:55 compute-0 ceph-mon[74243]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.phooxi
Oct 11 04:25:56 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Oct 11 04:25:56 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Oct 11 04:25:56 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Oct 11 04:25:56 compute-0 ceph-mgr[74542]: mgr handle_mgr_map Activating!
Oct 11 04:25:56 compute-0 ceph-mgr[74542]: mgr handle_mgr_map I am now activating
Oct 11 04:25:56 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Oct 11 04:25:56 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.phooxi(active, starting, since 0.0171941s)
Oct 11 04:25:56 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Oct 11 04:25:56 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 11 04:25:56 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.phooxi", "id": "compute-0.phooxi"} v 0) v1
Oct 11 04:25:56 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "mgr metadata", "who": "compute-0.phooxi", "id": "compute-0.phooxi"}]: dispatch
Oct 11 04:25:56 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Oct 11 04:25:56 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "mds metadata"}]: dispatch
Oct 11 04:25:56 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).mds e1 all = 1
Oct 11 04:25:56 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Oct 11 04:25:56 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 11 04:25:56 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Oct 11 04:25:56 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "mon metadata"}]: dispatch
Oct 11 04:25:56 compute-0 ceph-mgr[74542]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 11 04:25:56 compute-0 ceph-mgr[74542]: mgr load Constructed class from module: balancer
Oct 11 04:25:56 compute-0 ceph-mon[74243]: log_channel(cluster) log [INF] : Manager daemon compute-0.phooxi is now available
Oct 11 04:25:56 compute-0 ceph-mgr[74542]: [balancer INFO root] Starting
Oct 11 04:25:56 compute-0 ceph-mgr[74542]: [balancer INFO root] Optimize plan auto_2025-10-11_04:25:56
Oct 11 04:25:56 compute-0 ceph-mgr[74542]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:25:56 compute-0 ceph-mgr[74542]: [balancer INFO root] do_upmap
Oct 11 04:25:56 compute-0 ceph-mgr[74542]: [balancer INFO root] No pools available
Oct 11 04:25:56 compute-0 ceph-mgr[74542]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 11 04:25:56 compute-0 ceph-mon[74243]: Active manager daemon compute-0.phooxi restarted
Oct 11 04:25:56 compute-0 ceph-mon[74243]: Activating manager daemon compute-0.phooxi
Oct 11 04:25:56 compute-0 ceph-mon[74243]: osdmap e2: 0 total, 0 up, 0 in
Oct 11 04:25:56 compute-0 ceph-mon[74243]: mgrmap e6: compute-0.phooxi(active, starting, since 0.0171941s)
Oct 11 04:25:56 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 11 04:25:56 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "mgr metadata", "who": "compute-0.phooxi", "id": "compute-0.phooxi"}]: dispatch
Oct 11 04:25:56 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "mds metadata"}]: dispatch
Oct 11 04:25:56 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 11 04:25:56 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "mon metadata"}]: dispatch
Oct 11 04:25:56 compute-0 ceph-mon[74243]: Manager daemon compute-0.phooxi is now available
Oct 11 04:25:56 compute-0 ceph-mgr[74542]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Oct 11 04:25:56 compute-0 ceph-mgr[74542]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Oct 11 04:25:56 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0) v1
Oct 11 04:25:56 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:25:56 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0) v1
Oct 11 04:25:56 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:25:56 compute-0 ceph-mgr[74542]: mgr load Constructed class from module: cephadm
Oct 11 04:25:56 compute-0 ceph-mgr[74542]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 11 04:25:56 compute-0 ceph-mgr[74542]: mgr load Constructed class from module: crash
Oct 11 04:25:56 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Oct 11 04:25:56 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 11 04:25:56 compute-0 ceph-mgr[74542]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 11 04:25:56 compute-0 ceph-mgr[74542]: mgr load Constructed class from module: devicehealth
Oct 11 04:25:56 compute-0 ceph-mgr[74542]: [devicehealth INFO root] Starting
Oct 11 04:25:56 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Oct 11 04:25:56 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 11 04:25:56 compute-0 ceph-mgr[74542]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 11 04:25:56 compute-0 ceph-mgr[74542]: mgr load Constructed class from module: iostat
Oct 11 04:25:56 compute-0 ceph-mgr[74542]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 11 04:25:56 compute-0 ceph-mgr[74542]: mgr load Constructed class from module: nfs
Oct 11 04:25:56 compute-0 ceph-mgr[74542]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 11 04:25:56 compute-0 ceph-mgr[74542]: mgr load Constructed class from module: orchestrator
Oct 11 04:25:56 compute-0 ceph-mgr[74542]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 11 04:25:56 compute-0 ceph-mgr[74542]: mgr load Constructed class from module: pg_autoscaler
Oct 11 04:25:56 compute-0 ceph-mgr[74542]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 11 04:25:56 compute-0 ceph-mgr[74542]: mgr load Constructed class from module: progress
Oct 11 04:25:56 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:25:56 compute-0 ceph-mgr[74542]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 11 04:25:56 compute-0 ceph-mgr[74542]: [progress INFO root] Loading...
Oct 11 04:25:56 compute-0 ceph-mgr[74542]: [progress INFO root] No stored events to load
Oct 11 04:25:56 compute-0 ceph-mgr[74542]: [progress INFO root] Loaded [] historic events
Oct 11 04:25:56 compute-0 ceph-mgr[74542]: [progress INFO root] Loaded OSDMap, ready.
Oct 11 04:25:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] recovery thread starting
Oct 11 04:25:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] starting setup
Oct 11 04:25:56 compute-0 ceph-mgr[74542]: mgr load Constructed class from module: rbd_support
Oct 11 04:25:56 compute-0 ceph-mgr[74542]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 11 04:25:56 compute-0 ceph-mgr[74542]: mgr load Constructed class from module: restful
Oct 11 04:25:56 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.phooxi/mirror_snapshot_schedule"} v 0) v1
Oct 11 04:25:56 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.phooxi/mirror_snapshot_schedule"}]: dispatch
Oct 11 04:25:56 compute-0 ceph-mgr[74542]: [restful INFO root] server_addr: :: server_port: 8003
Oct 11 04:25:56 compute-0 ceph-mgr[74542]: [restful WARNING root] server not running: no certificate configured
Oct 11 04:25:56 compute-0 ceph-mgr[74542]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 11 04:25:56 compute-0 ceph-mgr[74542]: mgr load Constructed class from module: status
Oct 11 04:25:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:25:56 compute-0 ceph-mgr[74542]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 11 04:25:56 compute-0 ceph-mgr[74542]: mgr load Constructed class from module: telemetry
Oct 11 04:25:56 compute-0 ceph-mgr[74542]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 11 04:25:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Oct 11 04:25:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] PerfHandler: starting
Oct 11 04:25:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] TaskHandler: starting
Oct 11 04:25:56 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.phooxi/trash_purge_schedule"} v 0) v1
Oct 11 04:25:56 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.phooxi/trash_purge_schedule"}]: dispatch
Oct 11 04:25:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:25:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Oct 11 04:25:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] setup complete
Oct 11 04:25:56 compute-0 ceph-mgr[74542]: mgr load Constructed class from module: volumes
Oct 11 04:25:56 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/cert}] v 0) v1
Oct 11 04:25:56 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:25:56 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/key}] v 0) v1
Oct 11 04:25:56 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:25:57 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Oct 11 04:25:57 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.phooxi(active, since 1.02785s)
Oct 11 04:25:57 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Oct 11 04:25:57 compute-0 nostalgic_galileo[75368]: {
Oct 11 04:25:57 compute-0 nostalgic_galileo[75368]:     "mgrmap_epoch": 7,
Oct 11 04:25:57 compute-0 nostalgic_galileo[75368]:     "initialized": true
Oct 11 04:25:57 compute-0 nostalgic_galileo[75368]: }
Oct 11 04:25:57 compute-0 systemd[1]: libpod-08023db5bef41cbdf309c8421d18d4d25b7cf53cab9edd73e88bef7c655e32ca.scope: Deactivated successfully.
Oct 11 04:25:57 compute-0 podman[75351]: 2025-10-11 04:25:57.058638743 +0000 UTC m=+18.909342024 container died 08023db5bef41cbdf309c8421d18d4d25b7cf53cab9edd73e88bef7c655e32ca (image=quay.io/ceph/ceph:v18, name=nostalgic_galileo, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 11 04:25:57 compute-0 ceph-mon[74243]: Found migration_current of "None". Setting to last migration.
Oct 11 04:25:57 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:25:57 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:25:57 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 11 04:25:57 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 11 04:25:57 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.phooxi/mirror_snapshot_schedule"}]: dispatch
Oct 11 04:25:57 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.phooxi/trash_purge_schedule"}]: dispatch
Oct 11 04:25:57 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:25:57 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:25:57 compute-0 ceph-mon[74243]: mgrmap e7: compute-0.phooxi(active, since 1.02785s)
Oct 11 04:25:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-a07b1c74f4add87e4191bab4e3c01da7928f50b6560104f165b480c36a989eae-merged.mount: Deactivated successfully.
Oct 11 04:25:57 compute-0 podman[75351]: 2025-10-11 04:25:57.122654162 +0000 UTC m=+18.973357423 container remove 08023db5bef41cbdf309c8421d18d4d25b7cf53cab9edd73e88bef7c655e32ca (image=quay.io/ceph/ceph:v18, name=nostalgic_galileo, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:25:57 compute-0 systemd[1]: libpod-conmon-08023db5bef41cbdf309c8421d18d4d25b7cf53cab9edd73e88bef7c655e32ca.scope: Deactivated successfully.
Oct 11 04:25:57 compute-0 podman[75527]: 2025-10-11 04:25:57.214318677 +0000 UTC m=+0.064588950 container create 95ecac8418609d73f89771f6b627c7d7ac41143e09bf0d49719796ad1ca8f60d (image=quay.io/ceph/ceph:v18, name=practical_hawking, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:25:57 compute-0 systemd[1]: Started libpod-conmon-95ecac8418609d73f89771f6b627c7d7ac41143e09bf0d49719796ad1ca8f60d.scope.
Oct 11 04:25:57 compute-0 podman[75527]: 2025-10-11 04:25:57.1837137 +0000 UTC m=+0.033984023 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:25:57 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:25:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4938777dd8456d6df8bdc19e4e1f3b66305b00a9abb397b479d590090d3a451d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4938777dd8456d6df8bdc19e4e1f3b66305b00a9abb397b479d590090d3a451d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4938777dd8456d6df8bdc19e4e1f3b66305b00a9abb397b479d590090d3a451d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:57 compute-0 podman[75527]: 2025-10-11 04:25:57.319285917 +0000 UTC m=+0.169556260 container init 95ecac8418609d73f89771f6b627c7d7ac41143e09bf0d49719796ad1ca8f60d (image=quay.io/ceph/ceph:v18, name=practical_hawking, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:25:57 compute-0 podman[75527]: 2025-10-11 04:25:57.329104442 +0000 UTC m=+0.179374715 container start 95ecac8418609d73f89771f6b627c7d7ac41143e09bf0d49719796ad1ca8f60d (image=quay.io/ceph/ceph:v18, name=practical_hawking, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:25:57 compute-0 podman[75527]: 2025-10-11 04:25:57.333301478 +0000 UTC m=+0.183571801 container attach 95ecac8418609d73f89771f6b627c7d7ac41143e09bf0d49719796ad1ca8f60d (image=quay.io/ceph/ceph:v18, name=practical_hawking, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 11 04:25:57 compute-0 ceph-mgr[74542]: [cephadm INFO cherrypy.error] [11/Oct/2025:04:25:57] ENGINE Bus STARTING
Oct 11 04:25:57 compute-0 ceph-mgr[74542]: log_channel(cephadm) log [INF] : [11/Oct/2025:04:25:57] ENGINE Bus STARTING
Oct 11 04:25:57 compute-0 ceph-mgr[74542]: [cephadm INFO cherrypy.error] [11/Oct/2025:04:25:57] ENGINE Serving on http://192.168.122.100:8765
Oct 11 04:25:57 compute-0 ceph-mgr[74542]: log_channel(cephadm) log [INF] : [11/Oct/2025:04:25:57] ENGINE Serving on http://192.168.122.100:8765
Oct 11 04:25:57 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:25:57 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0) v1
Oct 11 04:25:57 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:25:57 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Oct 11 04:25:57 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 11 04:25:57 compute-0 systemd[1]: libpod-95ecac8418609d73f89771f6b627c7d7ac41143e09bf0d49719796ad1ca8f60d.scope: Deactivated successfully.
Oct 11 04:25:57 compute-0 podman[75527]: 2025-10-11 04:25:57.920802445 +0000 UTC m=+0.771072688 container died 95ecac8418609d73f89771f6b627c7d7ac41143e09bf0d49719796ad1ca8f60d (image=quay.io/ceph/ceph:v18, name=practical_hawking, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 11 04:25:57 compute-0 ceph-mgr[74542]: [cephadm INFO cherrypy.error] [11/Oct/2025:04:25:57] ENGINE Serving on https://192.168.122.100:7150
Oct 11 04:25:57 compute-0 ceph-mgr[74542]: log_channel(cephadm) log [INF] : [11/Oct/2025:04:25:57] ENGINE Serving on https://192.168.122.100:7150
Oct 11 04:25:57 compute-0 ceph-mgr[74542]: [cephadm INFO cherrypy.error] [11/Oct/2025:04:25:57] ENGINE Bus STARTED
Oct 11 04:25:57 compute-0 ceph-mgr[74542]: log_channel(cephadm) log [INF] : [11/Oct/2025:04:25:57] ENGINE Bus STARTED
Oct 11 04:25:57 compute-0 ceph-mgr[74542]: [cephadm INFO cherrypy.error] [11/Oct/2025:04:25:57] ENGINE Client ('192.168.122.100', 60506) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct 11 04:25:57 compute-0 ceph-mgr[74542]: log_channel(cephadm) log [INF] : [11/Oct/2025:04:25:57] ENGINE Client ('192.168.122.100', 60506) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct 11 04:25:57 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Oct 11 04:25:57 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 11 04:25:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-4938777dd8456d6df8bdc19e4e1f3b66305b00a9abb397b479d590090d3a451d-merged.mount: Deactivated successfully.
Oct 11 04:25:57 compute-0 podman[75527]: 2025-10-11 04:25:57.96079211 +0000 UTC m=+0.811062373 container remove 95ecac8418609d73f89771f6b627c7d7ac41143e09bf0d49719796ad1ca8f60d (image=quay.io/ceph/ceph:v18, name=practical_hawking, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 11 04:25:57 compute-0 systemd[1]: libpod-conmon-95ecac8418609d73f89771f6b627c7d7ac41143e09bf0d49719796ad1ca8f60d.scope: Deactivated successfully.
Oct 11 04:25:58 compute-0 ceph-mgr[74542]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 11 04:25:58 compute-0 podman[75605]: 2025-10-11 04:25:58.031183356 +0000 UTC m=+0.050346732 container create d5c938906217e3f56e879fed012002cd405ebf81104260e81a1b23279aed9448 (image=quay.io/ceph/ceph:v18, name=elastic_darwin, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:25:58 compute-0 systemd[1]: Started libpod-conmon-d5c938906217e3f56e879fed012002cd405ebf81104260e81a1b23279aed9448.scope.
Oct 11 04:25:58 compute-0 ceph-mon[74243]: from='client.14136 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Oct 11 04:25:58 compute-0 ceph-mon[74243]: from='client.14136 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Oct 11 04:25:58 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:25:58 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 11 04:25:58 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 11 04:25:58 compute-0 podman[75605]: 2025-10-11 04:25:58.002711578 +0000 UTC m=+0.021874954 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:25:58 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:25:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c8828e5fd8c0d5c69243796db25c3860e500a89db554848757f55cc50a91401/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c8828e5fd8c0d5c69243796db25c3860e500a89db554848757f55cc50a91401/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c8828e5fd8c0d5c69243796db25c3860e500a89db554848757f55cc50a91401/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:58 compute-0 podman[75605]: 2025-10-11 04:25:58.130095901 +0000 UTC m=+0.149259327 container init d5c938906217e3f56e879fed012002cd405ebf81104260e81a1b23279aed9448 (image=quay.io/ceph/ceph:v18, name=elastic_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 11 04:25:58 compute-0 podman[75605]: 2025-10-11 04:25:58.139975008 +0000 UTC m=+0.159138374 container start d5c938906217e3f56e879fed012002cd405ebf81104260e81a1b23279aed9448 (image=quay.io/ceph/ceph:v18, name=elastic_darwin, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:25:58 compute-0 podman[75605]: 2025-10-11 04:25:58.143631983 +0000 UTC m=+0.162795349 container attach d5c938906217e3f56e879fed012002cd405ebf81104260e81a1b23279aed9448 (image=quay.io/ceph/ceph:v18, name=elastic_darwin, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 11 04:25:58 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:25:58 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0) v1
Oct 11 04:25:58 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:25:58 compute-0 ceph-mgr[74542]: [cephadm INFO root] Set ssh ssh_user
Oct 11 04:25:58 compute-0 ceph-mgr[74542]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Oct 11 04:25:58 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0) v1
Oct 11 04:25:58 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:25:58 compute-0 ceph-mgr[74542]: [cephadm INFO root] Set ssh ssh_config
Oct 11 04:25:58 compute-0 ceph-mgr[74542]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Oct 11 04:25:58 compute-0 ceph-mgr[74542]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Oct 11 04:25:58 compute-0 ceph-mgr[74542]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Oct 11 04:25:58 compute-0 elastic_darwin[75621]: ssh user set to ceph-admin. sudo will be used
Oct 11 04:25:58 compute-0 systemd[1]: libpod-d5c938906217e3f56e879fed012002cd405ebf81104260e81a1b23279aed9448.scope: Deactivated successfully.
Oct 11 04:25:58 compute-0 podman[75605]: 2025-10-11 04:25:58.701558162 +0000 UTC m=+0.720721498 container died d5c938906217e3f56e879fed012002cd405ebf81104260e81a1b23279aed9448 (image=quay.io/ceph/ceph:v18, name=elastic_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:25:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-8c8828e5fd8c0d5c69243796db25c3860e500a89db554848757f55cc50a91401-merged.mount: Deactivated successfully.
Oct 11 04:25:58 compute-0 podman[75605]: 2025-10-11 04:25:58.73835446 +0000 UTC m=+0.757517786 container remove d5c938906217e3f56e879fed012002cd405ebf81104260e81a1b23279aed9448 (image=quay.io/ceph/ceph:v18, name=elastic_darwin, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Oct 11 04:25:58 compute-0 systemd[1]: libpod-conmon-d5c938906217e3f56e879fed012002cd405ebf81104260e81a1b23279aed9448.scope: Deactivated successfully.
Oct 11 04:25:58 compute-0 podman[75662]: 2025-10-11 04:25:58.791280076 +0000 UTC m=+0.036317680 container create 6fe5f2588cf9f8681692eba42646f7821318e54676621c59895fdeed7e8abc19 (image=quay.io/ceph/ceph:v18, name=clever_shaw, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 11 04:25:58 compute-0 systemd[1]: Started libpod-conmon-6fe5f2588cf9f8681692eba42646f7821318e54676621c59895fdeed7e8abc19.scope.
Oct 11 04:25:58 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:25:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43f1cd5fd8111bd47a562c134777d1befa7dc43314d4ffa2425670a07320e971/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43f1cd5fd8111bd47a562c134777d1befa7dc43314d4ffa2425670a07320e971/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43f1cd5fd8111bd47a562c134777d1befa7dc43314d4ffa2425670a07320e971/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43f1cd5fd8111bd47a562c134777d1befa7dc43314d4ffa2425670a07320e971/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43f1cd5fd8111bd47a562c134777d1befa7dc43314d4ffa2425670a07320e971/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:58 compute-0 podman[75662]: 2025-10-11 04:25:58.859041083 +0000 UTC m=+0.104078787 container init 6fe5f2588cf9f8681692eba42646f7821318e54676621c59895fdeed7e8abc19 (image=quay.io/ceph/ceph:v18, name=clever_shaw, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 11 04:25:58 compute-0 podman[75662]: 2025-10-11 04:25:58.864032109 +0000 UTC m=+0.109069753 container start 6fe5f2588cf9f8681692eba42646f7821318e54676621c59895fdeed7e8abc19 (image=quay.io/ceph/ceph:v18, name=clever_shaw, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 11 04:25:58 compute-0 podman[75662]: 2025-10-11 04:25:58.867988316 +0000 UTC m=+0.113025960 container attach 6fe5f2588cf9f8681692eba42646f7821318e54676621c59895fdeed7e8abc19 (image=quay.io/ceph/ceph:v18, name=clever_shaw, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:25:58 compute-0 podman[75662]: 2025-10-11 04:25:58.773918831 +0000 UTC m=+0.018956445 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:25:58 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.phooxi(active, since 2s)
Oct 11 04:25:59 compute-0 ceph-mon[74243]: [11/Oct/2025:04:25:57] ENGINE Bus STARTING
Oct 11 04:25:59 compute-0 ceph-mon[74243]: [11/Oct/2025:04:25:57] ENGINE Serving on http://192.168.122.100:8765
Oct 11 04:25:59 compute-0 ceph-mon[74243]: from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:25:59 compute-0 ceph-mon[74243]: [11/Oct/2025:04:25:57] ENGINE Serving on https://192.168.122.100:7150
Oct 11 04:25:59 compute-0 ceph-mon[74243]: [11/Oct/2025:04:25:57] ENGINE Bus STARTED
Oct 11 04:25:59 compute-0 ceph-mon[74243]: [11/Oct/2025:04:25:57] ENGINE Client ('192.168.122.100', 60506) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct 11 04:25:59 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:25:59 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:25:59 compute-0 ceph-mon[74243]: mgrmap e8: compute-0.phooxi(active, since 2s)
Oct 11 04:25:59 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:25:59 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0) v1
Oct 11 04:25:59 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:25:59 compute-0 ceph-mgr[74542]: [cephadm INFO root] Set ssh ssh_identity_key
Oct 11 04:25:59 compute-0 ceph-mgr[74542]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Oct 11 04:25:59 compute-0 ceph-mgr[74542]: [cephadm INFO root] Set ssh private key
Oct 11 04:25:59 compute-0 ceph-mgr[74542]: log_channel(cephadm) log [INF] : Set ssh private key
Oct 11 04:25:59 compute-0 systemd[1]: libpod-6fe5f2588cf9f8681692eba42646f7821318e54676621c59895fdeed7e8abc19.scope: Deactivated successfully.
Oct 11 04:25:59 compute-0 podman[75704]: 2025-10-11 04:25:59.450483298 +0000 UTC m=+0.037677971 container died 6fe5f2588cf9f8681692eba42646f7821318e54676621c59895fdeed7e8abc19 (image=quay.io/ceph/ceph:v18, name=clever_shaw, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 11 04:25:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-43f1cd5fd8111bd47a562c134777d1befa7dc43314d4ffa2425670a07320e971-merged.mount: Deactivated successfully.
Oct 11 04:25:59 compute-0 podman[75704]: 2025-10-11 04:25:59.505220791 +0000 UTC m=+0.092415404 container remove 6fe5f2588cf9f8681692eba42646f7821318e54676621c59895fdeed7e8abc19 (image=quay.io/ceph/ceph:v18, name=clever_shaw, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True)
Oct 11 04:25:59 compute-0 systemd[1]: libpod-conmon-6fe5f2588cf9f8681692eba42646f7821318e54676621c59895fdeed7e8abc19.scope: Deactivated successfully.
Oct 11 04:25:59 compute-0 podman[75719]: 2025-10-11 04:25:59.610896517 +0000 UTC m=+0.067679465 container create 1259670c572eb2f54379205d696f8ca07185a17f2cd90e5f2579cc1599e8951b (image=quay.io/ceph/ceph:v18, name=awesome_swartz, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 11 04:25:59 compute-0 systemd[1]: Started libpod-conmon-1259670c572eb2f54379205d696f8ca07185a17f2cd90e5f2579cc1599e8951b.scope.
Oct 11 04:25:59 compute-0 podman[75719]: 2025-10-11 04:25:59.582122078 +0000 UTC m=+0.038905076 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:25:59 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:25:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06252e686b497301d37299ad8d9a89c588ff9b2aee89c17c43ad4f6830225488/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06252e686b497301d37299ad8d9a89c588ff9b2aee89c17c43ad4f6830225488/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06252e686b497301d37299ad8d9a89c588ff9b2aee89c17c43ad4f6830225488/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06252e686b497301d37299ad8d9a89c588ff9b2aee89c17c43ad4f6830225488/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06252e686b497301d37299ad8d9a89c588ff9b2aee89c17c43ad4f6830225488/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:59 compute-0 podman[75719]: 2025-10-11 04:25:59.705559345 +0000 UTC m=+0.162342323 container init 1259670c572eb2f54379205d696f8ca07185a17f2cd90e5f2579cc1599e8951b (image=quay.io/ceph/ceph:v18, name=awesome_swartz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:25:59 compute-0 podman[75719]: 2025-10-11 04:25:59.723567704 +0000 UTC m=+0.180350612 container start 1259670c572eb2f54379205d696f8ca07185a17f2cd90e5f2579cc1599e8951b (image=quay.io/ceph/ceph:v18, name=awesome_swartz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 11 04:25:59 compute-0 podman[75719]: 2025-10-11 04:25:59.727965977 +0000 UTC m=+0.184748925 container attach 1259670c572eb2f54379205d696f8ca07185a17f2cd90e5f2579cc1599e8951b (image=quay.io/ceph/ceph:v18, name=awesome_swartz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 11 04:25:59 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019918282 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:26:00 compute-0 ceph-mgr[74542]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 11 04:26:00 compute-0 ceph-mon[74243]: from='client.14146 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:26:00 compute-0 ceph-mon[74243]: Set ssh ssh_user
Oct 11 04:26:00 compute-0 ceph-mon[74243]: Set ssh ssh_config
Oct 11 04:26:00 compute-0 ceph-mon[74243]: ssh user set to ceph-admin. sudo will be used
Oct 11 04:26:00 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:00 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:26:00 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0) v1
Oct 11 04:26:00 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:00 compute-0 ceph-mgr[74542]: [cephadm INFO root] Set ssh ssh_identity_pub
Oct 11 04:26:00 compute-0 ceph-mgr[74542]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Oct 11 04:26:00 compute-0 systemd[1]: libpod-1259670c572eb2f54379205d696f8ca07185a17f2cd90e5f2579cc1599e8951b.scope: Deactivated successfully.
Oct 11 04:26:00 compute-0 podman[75719]: 2025-10-11 04:26:00.269183195 +0000 UTC m=+0.725966143 container died 1259670c572eb2f54379205d696f8ca07185a17f2cd90e5f2579cc1599e8951b (image=quay.io/ceph/ceph:v18, name=awesome_swartz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 11 04:26:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-06252e686b497301d37299ad8d9a89c588ff9b2aee89c17c43ad4f6830225488-merged.mount: Deactivated successfully.
Oct 11 04:26:00 compute-0 podman[75719]: 2025-10-11 04:26:00.333038818 +0000 UTC m=+0.789821756 container remove 1259670c572eb2f54379205d696f8ca07185a17f2cd90e5f2579cc1599e8951b (image=quay.io/ceph/ceph:v18, name=awesome_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:26:00 compute-0 systemd[1]: libpod-conmon-1259670c572eb2f54379205d696f8ca07185a17f2cd90e5f2579cc1599e8951b.scope: Deactivated successfully.
Oct 11 04:26:00 compute-0 podman[75776]: 2025-10-11 04:26:00.43700441 +0000 UTC m=+0.069794114 container create 1dd65372bcf1362c097d32a7c53bd856a1236b87cddc03b24cb4e2ddbe2e243d (image=quay.io/ceph/ceph:v18, name=amazing_poitras, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:26:00 compute-0 systemd[1]: Started libpod-conmon-1dd65372bcf1362c097d32a7c53bd856a1236b87cddc03b24cb4e2ddbe2e243d.scope.
Oct 11 04:26:00 compute-0 podman[75776]: 2025-10-11 04:26:00.412118696 +0000 UTC m=+0.044908460 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:26:00 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:26:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cda400dd6c3ae86fb5b3c1b6541bae4007674beaf9b6ef806de2c4fca04f316c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cda400dd6c3ae86fb5b3c1b6541bae4007674beaf9b6ef806de2c4fca04f316c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cda400dd6c3ae86fb5b3c1b6541bae4007674beaf9b6ef806de2c4fca04f316c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:00 compute-0 podman[75776]: 2025-10-11 04:26:00.53551358 +0000 UTC m=+0.168303304 container init 1dd65372bcf1362c097d32a7c53bd856a1236b87cddc03b24cb4e2ddbe2e243d (image=quay.io/ceph/ceph:v18, name=amazing_poitras, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:26:00 compute-0 podman[75776]: 2025-10-11 04:26:00.543946584 +0000 UTC m=+0.176736258 container start 1dd65372bcf1362c097d32a7c53bd856a1236b87cddc03b24cb4e2ddbe2e243d (image=quay.io/ceph/ceph:v18, name=amazing_poitras, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 11 04:26:00 compute-0 podman[75776]: 2025-10-11 04:26:00.547516436 +0000 UTC m=+0.180306200 container attach 1dd65372bcf1362c097d32a7c53bd856a1236b87cddc03b24cb4e2ddbe2e243d (image=quay.io/ceph/ceph:v18, name=amazing_poitras, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 11 04:26:01 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.14152 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:26:01 compute-0 amazing_poitras[75792]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDK1g2qzWmsnC3VRWapyAGPRjoKds+pJAg8VOvoE+DoQslVkrSD55DlxLpvmBbfC2Ka9BKKL7nAZQE9tg6dZ9ZbwEA+1Vc2OqQIibPihuUq3K8O6c6vfYQvAJPxoF8yP6Y1TzaNie7lCF8h/qG6ZYK+2zeznO+RMuo+/hZ/TWvJGNAW3UUhsS8bUyUMH1yIDSjrvsQXHO4QnhE6e0B8/uEOx3PKv9H1SoDETEqAO6uaRgo/xCK7L+PAm1PUL0eDQfjfAIZK3TPYEvqaPc5XgOl7HXm7aGyVaEF0EfTELprXbJwZsV+osYTZrHe+CScbDvU1ollLx5DtnBdWUXbHJ4eC+jSnoZlH6KPnK8/qUJjAY6trcJLCOBusBeWhZCfEEwtDCM4iT+MMU7EHPwSeE4T2xrM003xgy1DVKTCArb8n+tJQJFHhw9VhDsMGx9dDUqkk5l64vXNNfIijYEqKiK9VU5UHhzfY1lpj+fPGraWkLAlFDEV7AQxaAToXMQFXK38= zuul@controller
Oct 11 04:26:01 compute-0 systemd[1]: libpod-1dd65372bcf1362c097d32a7c53bd856a1236b87cddc03b24cb4e2ddbe2e243d.scope: Deactivated successfully.
Oct 11 04:26:01 compute-0 podman[75776]: 2025-10-11 04:26:01.086497251 +0000 UTC m=+0.719286935 container died 1dd65372bcf1362c097d32a7c53bd856a1236b87cddc03b24cb4e2ddbe2e243d (image=quay.io/ceph/ceph:v18, name=amazing_poitras, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 11 04:26:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-cda400dd6c3ae86fb5b3c1b6541bae4007674beaf9b6ef806de2c4fca04f316c-merged.mount: Deactivated successfully.
Oct 11 04:26:01 compute-0 podman[75776]: 2025-10-11 04:26:01.13840818 +0000 UTC m=+0.771197874 container remove 1dd65372bcf1362c097d32a7c53bd856a1236b87cddc03b24cb4e2ddbe2e243d (image=quay.io/ceph/ceph:v18, name=amazing_poitras, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True)
Oct 11 04:26:01 compute-0 systemd[1]: libpod-conmon-1dd65372bcf1362c097d32a7c53bd856a1236b87cddc03b24cb4e2ddbe2e243d.scope: Deactivated successfully.
Oct 11 04:26:01 compute-0 podman[75832]: 2025-10-11 04:26:01.225622459 +0000 UTC m=+0.058868497 container create 1045f1f5e42f403531d61fed94d89505ce8a5c5188897fc8fa0b5bb132bfe650 (image=quay.io/ceph/ceph:v18, name=blissful_haslett, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:26:01 compute-0 ceph-mon[74243]: from='client.14148 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:26:01 compute-0 ceph-mon[74243]: Set ssh ssh_identity_key
Oct 11 04:26:01 compute-0 ceph-mon[74243]: Set ssh private key
Oct 11 04:26:01 compute-0 ceph-mon[74243]: from='client.14150 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:26:01 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:01 compute-0 ceph-mon[74243]: Set ssh ssh_identity_pub
Oct 11 04:26:01 compute-0 systemd[1]: Started libpod-conmon-1045f1f5e42f403531d61fed94d89505ce8a5c5188897fc8fa0b5bb132bfe650.scope.
Oct 11 04:26:01 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:26:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/695b2fd28a20e797a4c6ec5d3c4884e770530ee6e3eb083dd9a272c2d164b8fc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/695b2fd28a20e797a4c6ec5d3c4884e770530ee6e3eb083dd9a272c2d164b8fc/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/695b2fd28a20e797a4c6ec5d3c4884e770530ee6e3eb083dd9a272c2d164b8fc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:01 compute-0 podman[75832]: 2025-10-11 04:26:01.202548302 +0000 UTC m=+0.035793950 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:26:01 compute-0 podman[75832]: 2025-10-11 04:26:01.314015503 +0000 UTC m=+0.147261151 container init 1045f1f5e42f403531d61fed94d89505ce8a5c5188897fc8fa0b5bb132bfe650 (image=quay.io/ceph/ceph:v18, name=blissful_haslett, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:26:01 compute-0 podman[75832]: 2025-10-11 04:26:01.326388313 +0000 UTC m=+0.159633981 container start 1045f1f5e42f403531d61fed94d89505ce8a5c5188897fc8fa0b5bb132bfe650 (image=quay.io/ceph/ceph:v18, name=blissful_haslett, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 11 04:26:01 compute-0 podman[75832]: 2025-10-11 04:26:01.331475952 +0000 UTC m=+0.164721620 container attach 1045f1f5e42f403531d61fed94d89505ce8a5c5188897fc8fa0b5bb132bfe650 (image=quay.io/ceph/ceph:v18, name=blissful_haslett, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:26:01 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:26:02 compute-0 ceph-mgr[74542]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 11 04:26:02 compute-0 sshd-session[75874]: Accepted publickey for ceph-admin from 192.168.122.100 port 54480 ssh2: RSA SHA256:9gGYMYwZ28p/VOGMeKhrIP0HSYfKPCU43XYj7dXSptk
Oct 11 04:26:02 compute-0 systemd-logind[801]: New session 22 of user ceph-admin.
Oct 11 04:26:02 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Oct 11 04:26:02 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Oct 11 04:26:02 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Oct 11 04:26:02 compute-0 systemd[1]: Starting User Manager for UID 42477...
Oct 11 04:26:02 compute-0 systemd[75878]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 11 04:26:02 compute-0 ceph-mon[74243]: from='client.14152 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:26:02 compute-0 sshd-session[75883]: Accepted publickey for ceph-admin from 192.168.122.100 port 54494 ssh2: RSA SHA256:9gGYMYwZ28p/VOGMeKhrIP0HSYfKPCU43XYj7dXSptk
Oct 11 04:26:02 compute-0 systemd-logind[801]: New session 24 of user ceph-admin.
Oct 11 04:26:02 compute-0 systemd[75878]: Queued start job for default target Main User Target.
Oct 11 04:26:02 compute-0 systemd[75878]: Created slice User Application Slice.
Oct 11 04:26:02 compute-0 systemd[75878]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 11 04:26:02 compute-0 systemd[75878]: Started Daily Cleanup of User's Temporary Directories.
Oct 11 04:26:02 compute-0 systemd[75878]: Reached target Paths.
Oct 11 04:26:02 compute-0 systemd[75878]: Reached target Timers.
Oct 11 04:26:02 compute-0 systemd[75878]: Starting D-Bus User Message Bus Socket...
Oct 11 04:26:02 compute-0 systemd[75878]: Starting Create User's Volatile Files and Directories...
Oct 11 04:26:02 compute-0 systemd[75878]: Finished Create User's Volatile Files and Directories.
Oct 11 04:26:02 compute-0 systemd[75878]: Listening on D-Bus User Message Bus Socket.
Oct 11 04:26:02 compute-0 systemd[75878]: Reached target Sockets.
Oct 11 04:26:02 compute-0 systemd[75878]: Reached target Basic System.
Oct 11 04:26:02 compute-0 systemd[1]: Started User Manager for UID 42477.
Oct 11 04:26:02 compute-0 systemd[75878]: Reached target Main User Target.
Oct 11 04:26:02 compute-0 systemd[75878]: Startup finished in 165ms.
Oct 11 04:26:02 compute-0 systemd[1]: Started Session 22 of User ceph-admin.
Oct 11 04:26:02 compute-0 systemd[1]: Started Session 24 of User ceph-admin.
Oct 11 04:26:02 compute-0 sshd-session[75874]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 11 04:26:02 compute-0 sshd-session[75883]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 11 04:26:02 compute-0 sudo[75898]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:26:02 compute-0 sudo[75898]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:02 compute-0 sudo[75898]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:02 compute-0 sudo[75923]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:26:02 compute-0 sudo[75923]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:02 compute-0 sudo[75923]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:02 compute-0 sshd-session[75948]: Accepted publickey for ceph-admin from 192.168.122.100 port 47462 ssh2: RSA SHA256:9gGYMYwZ28p/VOGMeKhrIP0HSYfKPCU43XYj7dXSptk
Oct 11 04:26:02 compute-0 systemd-logind[801]: New session 25 of user ceph-admin.
Oct 11 04:26:02 compute-0 systemd[1]: Started Session 25 of User ceph-admin.
Oct 11 04:26:02 compute-0 sshd-session[75948]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 11 04:26:02 compute-0 sudo[75952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:26:02 compute-0 sudo[75952]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:02 compute-0 sudo[75952]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:03 compute-0 sudo[75977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host --expect-hostname compute-0
Oct 11 04:26:03 compute-0 sudo[75977]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:03 compute-0 sudo[75977]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:03 compute-0 ceph-mon[74243]: from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:26:03 compute-0 sshd-session[76002]: Accepted publickey for ceph-admin from 192.168.122.100 port 47474 ssh2: RSA SHA256:9gGYMYwZ28p/VOGMeKhrIP0HSYfKPCU43XYj7dXSptk
Oct 11 04:26:03 compute-0 systemd-logind[801]: New session 26 of user ceph-admin.
Oct 11 04:26:03 compute-0 systemd[1]: Started Session 26 of User ceph-admin.
Oct 11 04:26:03 compute-0 sshd-session[76002]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 11 04:26:03 compute-0 sudo[76006]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:26:03 compute-0 sudo[76006]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:03 compute-0 sudo[76006]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:03 compute-0 sudo[76031]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d
Oct 11 04:26:03 compute-0 sudo[76031]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:03 compute-0 sudo[76031]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:03 compute-0 ceph-mgr[74542]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Oct 11 04:26:03 compute-0 ceph-mgr[74542]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Oct 11 04:26:03 compute-0 sshd-session[76056]: Accepted publickey for ceph-admin from 192.168.122.100 port 47490 ssh2: RSA SHA256:9gGYMYwZ28p/VOGMeKhrIP0HSYfKPCU43XYj7dXSptk
Oct 11 04:26:03 compute-0 systemd-logind[801]: New session 27 of user ceph-admin.
Oct 11 04:26:03 compute-0 systemd[1]: Started Session 27 of User ceph-admin.
Oct 11 04:26:03 compute-0 sshd-session[76056]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 11 04:26:03 compute-0 sudo[76060]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:26:03 compute-0 sudo[76060]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:03 compute-0 sudo[76060]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:04 compute-0 ceph-mgr[74542]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 11 04:26:04 compute-0 sudo[76085]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a
Oct 11 04:26:04 compute-0 sudo[76085]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:04 compute-0 sudo[76085]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:04 compute-0 sshd-session[76110]: Accepted publickey for ceph-admin from 192.168.122.100 port 47506 ssh2: RSA SHA256:9gGYMYwZ28p/VOGMeKhrIP0HSYfKPCU43XYj7dXSptk
Oct 11 04:26:04 compute-0 systemd-logind[801]: New session 28 of user ceph-admin.
Oct 11 04:26:04 compute-0 systemd[1]: Started Session 28 of User ceph-admin.
Oct 11 04:26:04 compute-0 sshd-session[76110]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 11 04:26:04 compute-0 ceph-mon[74243]: Deploying cephadm binary to compute-0
Oct 11 04:26:04 compute-0 sudo[76114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:26:04 compute-0 sudo[76114]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:04 compute-0 sudo[76114]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:04 compute-0 sudo[76139]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-166d0489-2ae7-59eb-961c-c1b5cda4b45a/var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a
Oct 11 04:26:04 compute-0 sudo[76139]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:04 compute-0 sudo[76139]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:04 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020052965 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:26:04 compute-0 sshd-session[76164]: Accepted publickey for ceph-admin from 192.168.122.100 port 47512 ssh2: RSA SHA256:9gGYMYwZ28p/VOGMeKhrIP0HSYfKPCU43XYj7dXSptk
Oct 11 04:26:04 compute-0 systemd-logind[801]: New session 29 of user ceph-admin.
Oct 11 04:26:04 compute-0 systemd[1]: Started Session 29 of User ceph-admin.
Oct 11 04:26:04 compute-0 sshd-session[76164]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 11 04:26:04 compute-0 sudo[76168]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:26:04 compute-0 sudo[76168]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:04 compute-0 sudo[76168]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:05 compute-0 sudo[76193]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-166d0489-2ae7-59eb-961c-c1b5cda4b45a/var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new
Oct 11 04:26:05 compute-0 sudo[76193]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:05 compute-0 sudo[76193]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:05 compute-0 sshd-session[76218]: Accepted publickey for ceph-admin from 192.168.122.100 port 47522 ssh2: RSA SHA256:9gGYMYwZ28p/VOGMeKhrIP0HSYfKPCU43XYj7dXSptk
Oct 11 04:26:05 compute-0 systemd-logind[801]: New session 30 of user ceph-admin.
Oct 11 04:26:05 compute-0 systemd[1]: Started Session 30 of User ceph-admin.
Oct 11 04:26:05 compute-0 sshd-session[76218]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 11 04:26:05 compute-0 sudo[76222]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:26:05 compute-0 sudo[76222]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:05 compute-0 sudo[76222]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:05 compute-0 sudo[76247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-166d0489-2ae7-59eb-961c-c1b5cda4b45a
Oct 11 04:26:05 compute-0 sudo[76247]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:05 compute-0 sudo[76247]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:05 compute-0 sshd-session[76272]: Accepted publickey for ceph-admin from 192.168.122.100 port 47528 ssh2: RSA SHA256:9gGYMYwZ28p/VOGMeKhrIP0HSYfKPCU43XYj7dXSptk
Oct 11 04:26:05 compute-0 systemd-logind[801]: New session 31 of user ceph-admin.
Oct 11 04:26:05 compute-0 systemd[1]: Started Session 31 of User ceph-admin.
Oct 11 04:26:05 compute-0 sshd-session[76272]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 11 04:26:05 compute-0 sudo[76276]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:26:05 compute-0 sudo[76276]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:05 compute-0 sudo[76276]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:06 compute-0 ceph-mgr[74542]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 11 04:26:06 compute-0 sudo[76301]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-166d0489-2ae7-59eb-961c-c1b5cda4b45a/var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new
Oct 11 04:26:06 compute-0 sudo[76301]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:06 compute-0 sudo[76301]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:06 compute-0 sshd-session[76326]: Accepted publickey for ceph-admin from 192.168.122.100 port 47540 ssh2: RSA SHA256:9gGYMYwZ28p/VOGMeKhrIP0HSYfKPCU43XYj7dXSptk
Oct 11 04:26:06 compute-0 systemd-logind[801]: New session 32 of user ceph-admin.
Oct 11 04:26:06 compute-0 systemd[1]: Started Session 32 of User ceph-admin.
Oct 11 04:26:06 compute-0 sshd-session[76326]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 11 04:26:06 compute-0 sshd-session[76353]: Accepted publickey for ceph-admin from 192.168.122.100 port 47542 ssh2: RSA SHA256:9gGYMYwZ28p/VOGMeKhrIP0HSYfKPCU43XYj7dXSptk
Oct 11 04:26:06 compute-0 systemd-logind[801]: New session 33 of user ceph-admin.
Oct 11 04:26:06 compute-0 systemd[1]: Started Session 33 of User ceph-admin.
Oct 11 04:26:06 compute-0 sshd-session[76353]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 11 04:26:06 compute-0 sudo[76357]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:26:07 compute-0 sudo[76357]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:07 compute-0 sudo[76357]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:07 compute-0 sudo[76382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-166d0489-2ae7-59eb-961c-c1b5cda4b45a/var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d
Oct 11 04:26:07 compute-0 sudo[76382]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:07 compute-0 sudo[76382]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:07 compute-0 sshd-session[76407]: Accepted publickey for ceph-admin from 192.168.122.100 port 47544 ssh2: RSA SHA256:9gGYMYwZ28p/VOGMeKhrIP0HSYfKPCU43XYj7dXSptk
Oct 11 04:26:07 compute-0 systemd-logind[801]: New session 34 of user ceph-admin.
Oct 11 04:26:07 compute-0 systemd[1]: Started Session 34 of User ceph-admin.
Oct 11 04:26:07 compute-0 sshd-session[76407]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 11 04:26:07 compute-0 sudo[76411]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:26:07 compute-0 sudo[76411]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:07 compute-0 sudo[76411]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:07 compute-0 sudo[76436]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host --expect-hostname compute-0
Oct 11 04:26:07 compute-0 sudo[76436]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:07 compute-0 sudo[76436]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:07 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Oct 11 04:26:07 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:07 compute-0 ceph-mgr[74542]: [cephadm INFO root] Added host compute-0
Oct 11 04:26:07 compute-0 ceph-mgr[74542]: log_channel(cephadm) log [INF] : Added host compute-0
Oct 11 04:26:07 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Oct 11 04:26:07 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 11 04:26:07 compute-0 blissful_haslett[75848]: Added host 'compute-0' with addr '192.168.122.100'
Oct 11 04:26:07 compute-0 systemd[1]: libpod-1045f1f5e42f403531d61fed94d89505ce8a5c5188897fc8fa0b5bb132bfe650.scope: Deactivated successfully.
Oct 11 04:26:07 compute-0 podman[76490]: 2025-10-11 04:26:07.802785823 +0000 UTC m=+0.024995940 container died 1045f1f5e42f403531d61fed94d89505ce8a5c5188897fc8fa0b5bb132bfe650 (image=quay.io/ceph/ceph:v18, name=blissful_haslett, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 11 04:26:07 compute-0 sudo[76483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:26:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-695b2fd28a20e797a4c6ec5d3c4884e770530ee6e3eb083dd9a272c2d164b8fc-merged.mount: Deactivated successfully.
Oct 11 04:26:07 compute-0 sudo[76483]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:07 compute-0 sudo[76483]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:07 compute-0 podman[76490]: 2025-10-11 04:26:07.854265295 +0000 UTC m=+0.076475412 container remove 1045f1f5e42f403531d61fed94d89505ce8a5c5188897fc8fa0b5bb132bfe650 (image=quay.io/ceph/ceph:v18, name=blissful_haslett, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:26:07 compute-0 systemd[1]: libpod-conmon-1045f1f5e42f403531d61fed94d89505ce8a5c5188897fc8fa0b5bb132bfe650.scope: Deactivated successfully.
Oct 11 04:26:07 compute-0 sudo[76523]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:26:07 compute-0 sudo[76523]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:07 compute-0 sudo[76523]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:07 compute-0 sudo[76555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:26:07 compute-0 sudo[76555]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:07 compute-0 podman[76546]: 2025-10-11 04:26:07.947382895 +0000 UTC m=+0.049874584 container create e030d386023919479e6e1fc16c9ca2663a8ea8f6e403c3db99b8514088f3c912 (image=quay.io/ceph/ceph:v18, name=friendly_heisenberg, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 11 04:26:07 compute-0 sudo[76555]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:07 compute-0 systemd[1]: Started libpod-conmon-e030d386023919479e6e1fc16c9ca2663a8ea8f6e403c3db99b8514088f3c912.scope.
Oct 11 04:26:08 compute-0 podman[76546]: 2025-10-11 04:26:07.923657803 +0000 UTC m=+0.026149472 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:26:08 compute-0 ceph-mgr[74542]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 11 04:26:08 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:26:08 compute-0 sudo[76587]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph:v18 --timeout 895 inspect-image
Oct 11 04:26:08 compute-0 sudo[76587]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c0aa8478f110ec0f763b3f382b878e99ae9ede5a6affb7299aa109e89fd556a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c0aa8478f110ec0f763b3f382b878e99ae9ede5a6affb7299aa109e89fd556a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c0aa8478f110ec0f763b3f382b878e99ae9ede5a6affb7299aa109e89fd556a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:08 compute-0 podman[76546]: 2025-10-11 04:26:08.045628965 +0000 UTC m=+0.148120634 container init e030d386023919479e6e1fc16c9ca2663a8ea8f6e403c3db99b8514088f3c912 (image=quay.io/ceph/ceph:v18, name=friendly_heisenberg, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:26:08 compute-0 podman[76546]: 2025-10-11 04:26:08.055673028 +0000 UTC m=+0.158164717 container start e030d386023919479e6e1fc16c9ca2663a8ea8f6e403c3db99b8514088f3c912 (image=quay.io/ceph/ceph:v18, name=friendly_heisenberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:26:08 compute-0 podman[76546]: 2025-10-11 04:26:08.059304543 +0000 UTC m=+0.161796192 container attach e030d386023919479e6e1fc16c9ca2663a8ea8f6e403c3db99b8514088f3c912 (image=quay.io/ceph/ceph:v18, name=friendly_heisenberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:26:08 compute-0 podman[76645]: 2025-10-11 04:26:08.310026228 +0000 UTC m=+0.044530065 container create 9e33151eace6d3db5faf36996babbafcf1d21c425db9ebebf897361086e83fc0 (image=quay.io/ceph/ceph:v18, name=gallant_liskov, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:26:08 compute-0 systemd[1]: Started libpod-conmon-9e33151eace6d3db5faf36996babbafcf1d21c425db9ebebf897361086e83fc0.scope.
Oct 11 04:26:08 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:26:08 compute-0 podman[76645]: 2025-10-11 04:26:08.290570986 +0000 UTC m=+0.025074853 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:26:08 compute-0 podman[76645]: 2025-10-11 04:26:08.406847565 +0000 UTC m=+0.141351422 container init 9e33151eace6d3db5faf36996babbafcf1d21c425db9ebebf897361086e83fc0 (image=quay.io/ceph/ceph:v18, name=gallant_liskov, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:26:08 compute-0 podman[76645]: 2025-10-11 04:26:08.41238215 +0000 UTC m=+0.146886037 container start 9e33151eace6d3db5faf36996babbafcf1d21c425db9ebebf897361086e83fc0 (image=quay.io/ceph/ceph:v18, name=gallant_liskov, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:26:08 compute-0 podman[76645]: 2025-10-11 04:26:08.417081845 +0000 UTC m=+0.151585702 container attach 9e33151eace6d3db5faf36996babbafcf1d21c425db9ebebf897361086e83fc0 (image=quay.io/ceph/ceph:v18, name=gallant_liskov, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 11 04:26:08 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:26:08 compute-0 ceph-mgr[74542]: [cephadm INFO root] Saving service mon spec with placement count:5
Oct 11 04:26:08 compute-0 ceph-mgr[74542]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Oct 11 04:26:08 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Oct 11 04:26:08 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:08 compute-0 friendly_heisenberg[76612]: Scheduled mon update...
Oct 11 04:26:08 compute-0 systemd[1]: libpod-e030d386023919479e6e1fc16c9ca2663a8ea8f6e403c3db99b8514088f3c912.scope: Deactivated successfully.
Oct 11 04:26:08 compute-0 podman[76546]: 2025-10-11 04:26:08.586154916 +0000 UTC m=+0.688646595 container died e030d386023919479e6e1fc16c9ca2663a8ea8f6e403c3db99b8514088f3c912 (image=quay.io/ceph/ceph:v18, name=friendly_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:26:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-0c0aa8478f110ec0f763b3f382b878e99ae9ede5a6affb7299aa109e89fd556a-merged.mount: Deactivated successfully.
Oct 11 04:26:08 compute-0 podman[76687]: 2025-10-11 04:26:08.658237725 +0000 UTC m=+0.059862806 container remove e030d386023919479e6e1fc16c9ca2663a8ea8f6e403c3db99b8514088f3c912 (image=quay.io/ceph/ceph:v18, name=friendly_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:26:08 compute-0 systemd[1]: libpod-conmon-e030d386023919479e6e1fc16c9ca2663a8ea8f6e403c3db99b8514088f3c912.scope: Deactivated successfully.
Oct 11 04:26:08 compute-0 gallant_liskov[76680]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Oct 11 04:26:08 compute-0 systemd[1]: libpod-9e33151eace6d3db5faf36996babbafcf1d21c425db9ebebf897361086e83fc0.scope: Deactivated successfully.
Oct 11 04:26:08 compute-0 podman[76645]: 2025-10-11 04:26:08.741203967 +0000 UTC m=+0.475707844 container died 9e33151eace6d3db5faf36996babbafcf1d21c425db9ebebf897361086e83fc0 (image=quay.io/ceph/ceph:v18, name=gallant_liskov, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 11 04:26:08 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:08 compute-0 ceph-mon[74243]: Added host compute-0
Oct 11 04:26:08 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 11 04:26:08 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-df99c5af4cad4740539aaa3043713bd5e2576c8b2ba1ef945cfdb957dc0ae8bc-merged.mount: Deactivated successfully.
Oct 11 04:26:08 compute-0 podman[76703]: 2025-10-11 04:26:08.786268891 +0000 UTC m=+0.088717527 container create 359d6bfc9bd2e2305aa2c8d46b009c0ae8771d6a23a0ad4681f66910648efcd0 (image=quay.io/ceph/ceph:v18, name=interesting_elion, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 11 04:26:08 compute-0 podman[76645]: 2025-10-11 04:26:08.815763527 +0000 UTC m=+0.550267404 container remove 9e33151eace6d3db5faf36996babbafcf1d21c425db9ebebf897361086e83fc0 (image=quay.io/ceph/ceph:v18, name=gallant_liskov, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:26:08 compute-0 systemd[1]: libpod-conmon-9e33151eace6d3db5faf36996babbafcf1d21c425db9ebebf897361086e83fc0.scope: Deactivated successfully.
Oct 11 04:26:08 compute-0 podman[76703]: 2025-10-11 04:26:08.740087675 +0000 UTC m=+0.042536411 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:26:08 compute-0 systemd[1]: Started libpod-conmon-359d6bfc9bd2e2305aa2c8d46b009c0ae8771d6a23a0ad4681f66910648efcd0.scope.
Oct 11 04:26:08 compute-0 sudo[76587]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:08 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0) v1
Oct 11 04:26:08 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:08 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:26:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0754822d47aa75794768da4715a8fd64d0e00848075ee473562be3f0eb242bc5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0754822d47aa75794768da4715a8fd64d0e00848075ee473562be3f0eb242bc5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0754822d47aa75794768da4715a8fd64d0e00848075ee473562be3f0eb242bc5/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:08 compute-0 podman[76703]: 2025-10-11 04:26:08.911555996 +0000 UTC m=+0.214004702 container init 359d6bfc9bd2e2305aa2c8d46b009c0ae8771d6a23a0ad4681f66910648efcd0 (image=quay.io/ceph/ceph:v18, name=interesting_elion, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 11 04:26:08 compute-0 podman[76703]: 2025-10-11 04:26:08.919783042 +0000 UTC m=+0.222231688 container start 359d6bfc9bd2e2305aa2c8d46b009c0ae8771d6a23a0ad4681f66910648efcd0 (image=quay.io/ceph/ceph:v18, name=interesting_elion, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:26:08 compute-0 podman[76703]: 2025-10-11 04:26:08.922897137 +0000 UTC m=+0.225345873 container attach 359d6bfc9bd2e2305aa2c8d46b009c0ae8771d6a23a0ad4681f66910648efcd0 (image=quay.io/ceph/ceph:v18, name=interesting_elion, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 11 04:26:08 compute-0 sudo[76734]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:26:08 compute-0 sudo[76734]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:08 compute-0 sudo[76734]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:09 compute-0 sudo[76761]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:26:09 compute-0 sudo[76761]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:09 compute-0 sudo[76761]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:09 compute-0 sudo[76786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:26:09 compute-0 sudo[76786]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:09 compute-0 sudo[76786]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:09 compute-0 sudo[76811]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Oct 11 04:26:09 compute-0 sudo[76811]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:09 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:26:09 compute-0 ceph-mgr[74542]: [cephadm INFO root] Saving service mgr spec with placement count:2
Oct 11 04:26:09 compute-0 ceph-mgr[74542]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Oct 11 04:26:09 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Oct 11 04:26:09 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:09 compute-0 interesting_elion[76731]: Scheduled mgr update...
Oct 11 04:26:09 compute-0 systemd[1]: libpod-359d6bfc9bd2e2305aa2c8d46b009c0ae8771d6a23a0ad4681f66910648efcd0.scope: Deactivated successfully.
Oct 11 04:26:09 compute-0 podman[76703]: 2025-10-11 04:26:09.533297605 +0000 UTC m=+0.835746271 container died 359d6bfc9bd2e2305aa2c8d46b009c0ae8771d6a23a0ad4681f66910648efcd0 (image=quay.io/ceph/ceph:v18, name=interesting_elion, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 11 04:26:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-0754822d47aa75794768da4715a8fd64d0e00848075ee473562be3f0eb242bc5-merged.mount: Deactivated successfully.
Oct 11 04:26:09 compute-0 sudo[76811]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:09 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:26:09 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:09 compute-0 podman[76703]: 2025-10-11 04:26:09.60152189 +0000 UTC m=+0.903970556 container remove 359d6bfc9bd2e2305aa2c8d46b009c0ae8771d6a23a0ad4681f66910648efcd0 (image=quay.io/ceph/ceph:v18, name=interesting_elion, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:26:09 compute-0 systemd[1]: libpod-conmon-359d6bfc9bd2e2305aa2c8d46b009c0ae8771d6a23a0ad4681f66910648efcd0.scope: Deactivated successfully.
Oct 11 04:26:09 compute-0 podman[76892]: 2025-10-11 04:26:09.681641637 +0000 UTC m=+0.050532479 container create 33c3f0457ff441bc68aa53d543301f2105197cdfb0403d5483e0bc503e412909 (image=quay.io/ceph/ceph:v18, name=keen_mendel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:26:09 compute-0 sudo[76888]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:26:09 compute-0 sudo[76888]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:09 compute-0 sudo[76888]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:09 compute-0 systemd[1]: Started libpod-conmon-33c3f0457ff441bc68aa53d543301f2105197cdfb0403d5483e0bc503e412909.scope.
Oct 11 04:26:09 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:26:09 compute-0 sudo[76930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:26:09 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054709 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:26:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52b01b095a47a470fb7c54b9d0164de0e9caf4a30940b884f6535c340c8a9e82/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52b01b095a47a470fb7c54b9d0164de0e9caf4a30940b884f6535c340c8a9e82/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52b01b095a47a470fb7c54b9d0164de0e9caf4a30940b884f6535c340c8a9e82/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:09 compute-0 podman[76892]: 2025-10-11 04:26:09.660904586 +0000 UTC m=+0.029795438 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:26:09 compute-0 sudo[76930]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:09 compute-0 podman[76892]: 2025-10-11 04:26:09.774821119 +0000 UTC m=+0.143712001 container init 33c3f0457ff441bc68aa53d543301f2105197cdfb0403d5483e0bc503e412909 (image=quay.io/ceph/ceph:v18, name=keen_mendel, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:26:09 compute-0 sudo[76930]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:09 compute-0 podman[76892]: 2025-10-11 04:26:09.783059035 +0000 UTC m=+0.151949877 container start 33c3f0457ff441bc68aa53d543301f2105197cdfb0403d5483e0bc503e412909 (image=quay.io/ceph/ceph:v18, name=keen_mendel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:26:09 compute-0 podman[76892]: 2025-10-11 04:26:09.786852686 +0000 UTC m=+0.155743528 container attach 33c3f0457ff441bc68aa53d543301f2105197cdfb0403d5483e0bc503e412909 (image=quay.io/ceph/ceph:v18, name=keen_mendel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:26:09 compute-0 sudo[76960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:26:09 compute-0 sudo[76960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:09 compute-0 sudo[76960]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:09 compute-0 ceph-mon[74243]: from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:26:09 compute-0 ceph-mon[74243]: Saving service mon spec with placement count:5
Oct 11 04:26:09 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:09 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:09 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:09 compute-0 sudo[76985]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 11 04:26:09 compute-0 sudo[76985]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:10 compute-0 ceph-mgr[74542]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 11 04:26:10 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.14160 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:26:10 compute-0 ceph-mgr[74542]: [cephadm INFO root] Saving service crash spec with placement *
Oct 11 04:26:10 compute-0 ceph-mgr[74542]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Oct 11 04:26:10 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Oct 11 04:26:10 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:10 compute-0 keen_mendel[76942]: Scheduled crash update...
Oct 11 04:26:10 compute-0 systemd[1]: libpod-33c3f0457ff441bc68aa53d543301f2105197cdfb0403d5483e0bc503e412909.scope: Deactivated successfully.
Oct 11 04:26:10 compute-0 podman[76892]: 2025-10-11 04:26:10.342943476 +0000 UTC m=+0.711834368 container died 33c3f0457ff441bc68aa53d543301f2105197cdfb0403d5483e0bc503e412909 (image=quay.io/ceph/ceph:v18, name=keen_mendel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:26:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-52b01b095a47a470fb7c54b9d0164de0e9caf4a30940b884f6535c340c8a9e82-merged.mount: Deactivated successfully.
Oct 11 04:26:10 compute-0 podman[76892]: 2025-10-11 04:26:10.402718417 +0000 UTC m=+0.771609279 container remove 33c3f0457ff441bc68aa53d543301f2105197cdfb0403d5483e0bc503e412909 (image=quay.io/ceph/ceph:v18, name=keen_mendel, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 11 04:26:10 compute-0 systemd[1]: libpod-conmon-33c3f0457ff441bc68aa53d543301f2105197cdfb0403d5483e0bc503e412909.scope: Deactivated successfully.
Oct 11 04:26:10 compute-0 podman[77099]: 2025-10-11 04:26:10.48650582 +0000 UTC m=+0.060422676 container create b14ca18e2a40f04d324c66536d2a0a46623554d15363a5b5ca836ac53dd7d21a (image=quay.io/ceph/ceph:v18, name=heuristic_elbakyan, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:26:10 compute-0 systemd[1]: Started libpod-conmon-b14ca18e2a40f04d324c66536d2a0a46623554d15363a5b5ca836ac53dd7d21a.scope.
Oct 11 04:26:10 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:26:10 compute-0 podman[77099]: 2025-10-11 04:26:10.457020505 +0000 UTC m=+0.030937401 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:26:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb36cd641c1bb37fa79bd0f632372d828d57b2395a4f9db691e1edb4aa7837c4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb36cd641c1bb37fa79bd0f632372d828d57b2395a4f9db691e1edb4aa7837c4/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb36cd641c1bb37fa79bd0f632372d828d57b2395a4f9db691e1edb4aa7837c4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:10 compute-0 podman[77123]: 2025-10-11 04:26:10.553406756 +0000 UTC m=+0.051842547 container exec 9e6c9a4e99dcfe74f2acde1b3251aae6bd833ab8c3a16ed08813556f7d4dbff5 (image=quay.io/ceph/ceph:v18, name=ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:26:10 compute-0 podman[77099]: 2025-10-11 04:26:10.565322019 +0000 UTC m=+0.139238875 container init b14ca18e2a40f04d324c66536d2a0a46623554d15363a5b5ca836ac53dd7d21a (image=quay.io/ceph/ceph:v18, name=heuristic_elbakyan, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:26:10 compute-0 podman[77099]: 2025-10-11 04:26:10.570790162 +0000 UTC m=+0.144707018 container start b14ca18e2a40f04d324c66536d2a0a46623554d15363a5b5ca836ac53dd7d21a (image=quay.io/ceph/ceph:v18, name=heuristic_elbakyan, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:26:10 compute-0 podman[77099]: 2025-10-11 04:26:10.573902537 +0000 UTC m=+0.147819413 container attach b14ca18e2a40f04d324c66536d2a0a46623554d15363a5b5ca836ac53dd7d21a (image=quay.io/ceph/ceph:v18, name=heuristic_elbakyan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 11 04:26:10 compute-0 podman[77123]: 2025-10-11 04:26:10.832658601 +0000 UTC m=+0.331094382 container exec_died 9e6c9a4e99dcfe74f2acde1b3251aae6bd833ab8c3a16ed08813556f7d4dbff5 (image=quay.io/ceph/ceph:v18, name=ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:26:10 compute-0 sudo[76985]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:10 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:26:10 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:11 compute-0 sudo[77200]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:26:11 compute-0 sudo[77200]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:11 compute-0 sudo[77200]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:11 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0) v1
Oct 11 04:26:11 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3082125722' entity='client.admin' 
Oct 11 04:26:11 compute-0 podman[77099]: 2025-10-11 04:26:11.110815125 +0000 UTC m=+0.684732001 container died b14ca18e2a40f04d324c66536d2a0a46623554d15363a5b5ca836ac53dd7d21a (image=quay.io/ceph/ceph:v18, name=heuristic_elbakyan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 11 04:26:11 compute-0 systemd[1]: libpod-b14ca18e2a40f04d324c66536d2a0a46623554d15363a5b5ca836ac53dd7d21a.scope: Deactivated successfully.
Oct 11 04:26:11 compute-0 sudo[77225]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:26:11 compute-0 sudo[77225]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:11 compute-0 sudo[77225]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-eb36cd641c1bb37fa79bd0f632372d828d57b2395a4f9db691e1edb4aa7837c4-merged.mount: Deactivated successfully.
Oct 11 04:26:11 compute-0 podman[77099]: 2025-10-11 04:26:11.158996396 +0000 UTC m=+0.732913252 container remove b14ca18e2a40f04d324c66536d2a0a46623554d15363a5b5ca836ac53dd7d21a (image=quay.io/ceph/ceph:v18, name=heuristic_elbakyan, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:26:11 compute-0 systemd[1]: libpod-conmon-b14ca18e2a40f04d324c66536d2a0a46623554d15363a5b5ca836ac53dd7d21a.scope: Deactivated successfully.
Oct 11 04:26:11 compute-0 sudo[77264]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:26:11 compute-0 sudo[77264]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:11 compute-0 sudo[77264]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:11 compute-0 podman[77284]: 2025-10-11 04:26:11.226997562 +0000 UTC m=+0.044503914 container create f5fe046efcfc624804dd71388747531cff7ebaaceca287a0bc755f0b8fae9a19 (image=quay.io/ceph/ceph:v18, name=angry_meninsky, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:26:11 compute-0 systemd[1]: Started libpod-conmon-f5fe046efcfc624804dd71388747531cff7ebaaceca287a0bc755f0b8fae9a19.scope.
Oct 11 04:26:11 compute-0 sudo[77303]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 04:26:11 compute-0 sudo[77303]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:11 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:26:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/841d6f0111939974db4053061f47e01e8f531aba15af1c5c5ef6997a40f118b0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/841d6f0111939974db4053061f47e01e8f531aba15af1c5c5ef6997a40f118b0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/841d6f0111939974db4053061f47e01e8f531aba15af1c5c5ef6997a40f118b0/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:11 compute-0 podman[77284]: 2025-10-11 04:26:11.208725893 +0000 UTC m=+0.026232285 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:26:11 compute-0 podman[77284]: 2025-10-11 04:26:11.306365351 +0000 UTC m=+0.123871713 container init f5fe046efcfc624804dd71388747531cff7ebaaceca287a0bc755f0b8fae9a19 (image=quay.io/ceph/ceph:v18, name=angry_meninsky, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:26:11 compute-0 podman[77284]: 2025-10-11 04:26:11.313008598 +0000 UTC m=+0.130514960 container start f5fe046efcfc624804dd71388747531cff7ebaaceca287a0bc755f0b8fae9a19 (image=quay.io/ceph/ceph:v18, name=angry_meninsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True)
Oct 11 04:26:11 compute-0 podman[77284]: 2025-10-11 04:26:11.316178775 +0000 UTC m=+0.133685177 container attach f5fe046efcfc624804dd71388747531cff7ebaaceca287a0bc755f0b8fae9a19 (image=quay.io/ceph/ceph:v18, name=angry_meninsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 04:26:11 compute-0 ceph-mon[74243]: from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:26:11 compute-0 ceph-mon[74243]: Saving service mgr spec with placement count:2
Oct 11 04:26:11 compute-0 ceph-mon[74243]: from='client.14160 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:26:11 compute-0 ceph-mon[74243]: Saving service crash spec with placement *
Oct 11 04:26:11 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:11 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:11 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/3082125722' entity='client.admin' 
Oct 11 04:26:11 compute-0 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 77350 (sysctl)
Oct 11 04:26:11 compute-0 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Oct 11 04:26:11 compute-0 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Oct 11 04:26:11 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:26:11 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0) v1
Oct 11 04:26:11 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:11 compute-0 sudo[77303]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:11 compute-0 systemd[1]: libpod-f5fe046efcfc624804dd71388747531cff7ebaaceca287a0bc755f0b8fae9a19.scope: Deactivated successfully.
Oct 11 04:26:11 compute-0 podman[77393]: 2025-10-11 04:26:11.924836449 +0000 UTC m=+0.031209080 container died f5fe046efcfc624804dd71388747531cff7ebaaceca287a0bc755f0b8fae9a19 (image=quay.io/ceph/ceph:v18, name=angry_meninsky, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:26:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-841d6f0111939974db4053061f47e01e8f531aba15af1c5c5ef6997a40f118b0-merged.mount: Deactivated successfully.
Oct 11 04:26:11 compute-0 podman[77393]: 2025-10-11 04:26:11.96605475 +0000 UTC m=+0.072427311 container remove f5fe046efcfc624804dd71388747531cff7ebaaceca287a0bc755f0b8fae9a19 (image=quay.io/ceph/ceph:v18, name=angry_meninsky, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:26:11 compute-0 systemd[1]: libpod-conmon-f5fe046efcfc624804dd71388747531cff7ebaaceca287a0bc755f0b8fae9a19.scope: Deactivated successfully.
Oct 11 04:26:11 compute-0 sudo[77403]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:26:11 compute-0 sudo[77403]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:11 compute-0 sudo[77403]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:12 compute-0 ceph-mgr[74542]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 11 04:26:12 compute-0 podman[77432]: 2025-10-11 04:26:12.038316124 +0000 UTC m=+0.044195873 container create a2a44bec555726beae65540e9167877ebebfa1563816fba9f0877cef58588c69 (image=quay.io/ceph/ceph:v18, name=awesome_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:26:12 compute-0 systemd[1]: Started libpod-conmon-a2a44bec555726beae65540e9167877ebebfa1563816fba9f0877cef58588c69.scope.
Oct 11 04:26:12 compute-0 sudo[77435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:26:12 compute-0 sudo[77435]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:12 compute-0 sudo[77435]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:12 compute-0 podman[77432]: 2025-10-11 04:26:12.018510129 +0000 UTC m=+0.024389898 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:26:12 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:26:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/106b216e356f192bc1d761fe3b136a6f295a7d05f27ec8b11041f089fbc1adc5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/106b216e356f192bc1d761fe3b136a6f295a7d05f27ec8b11041f089fbc1adc5/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/106b216e356f192bc1d761fe3b136a6f295a7d05f27ec8b11041f089fbc1adc5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:12 compute-0 podman[77432]: 2025-10-11 04:26:12.150616067 +0000 UTC m=+0.156495826 container init a2a44bec555726beae65540e9167877ebebfa1563816fba9f0877cef58588c69 (image=quay.io/ceph/ceph:v18, name=awesome_merkle, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:26:12 compute-0 podman[77432]: 2025-10-11 04:26:12.16227122 +0000 UTC m=+0.168150949 container start a2a44bec555726beae65540e9167877ebebfa1563816fba9f0877cef58588c69 (image=quay.io/ceph/ceph:v18, name=awesome_merkle, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 11 04:26:12 compute-0 podman[77432]: 2025-10-11 04:26:12.165144627 +0000 UTC m=+0.171024366 container attach a2a44bec555726beae65540e9167877ebebfa1563816fba9f0877cef58588c69 (image=quay.io/ceph/ceph:v18, name=awesome_merkle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:26:12 compute-0 sudo[77477]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:26:12 compute-0 sudo[77477]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:12 compute-0 sudo[77477]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:12 compute-0 sudo[77504]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Oct 11 04:26:12 compute-0 sudo[77504]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:12 compute-0 sudo[77504]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:12 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:26:12 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:12 compute-0 sudo[77546]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:26:12 compute-0 sudo[77546]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:12 compute-0 sudo[77546]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:12 compute-0 sudo[77590]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:26:12 compute-0 sudo[77590]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:12 compute-0 sudo[77590]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:12 compute-0 sudo[77615]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:26:12 compute-0 sudo[77615]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:12 compute-0 sudo[77615]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:12 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:26:12 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Oct 11 04:26:12 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:12 compute-0 ceph-mgr[74542]: [cephadm INFO root] Added label _admin to host compute-0
Oct 11 04:26:12 compute-0 ceph-mgr[74542]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Oct 11 04:26:12 compute-0 awesome_merkle[77473]: Added label _admin to host compute-0
Oct 11 04:26:12 compute-0 systemd[1]: libpod-a2a44bec555726beae65540e9167877ebebfa1563816fba9f0877cef58588c69.scope: Deactivated successfully.
Oct 11 04:26:12 compute-0 podman[77432]: 2025-10-11 04:26:12.793866806 +0000 UTC m=+0.799746585 container died a2a44bec555726beae65540e9167877ebebfa1563816fba9f0877cef58588c69 (image=quay.io/ceph/ceph:v18, name=awesome_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 04:26:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-106b216e356f192bc1d761fe3b136a6f295a7d05f27ec8b11041f089fbc1adc5-merged.mount: Deactivated successfully.
Oct 11 04:26:12 compute-0 podman[77432]: 2025-10-11 04:26:12.843519621 +0000 UTC m=+0.849399360 container remove a2a44bec555726beae65540e9167877ebebfa1563816fba9f0877cef58588c69 (image=quay.io/ceph/ceph:v18, name=awesome_merkle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:26:12 compute-0 ceph-mon[74243]: from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:26:12 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:12 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:12 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:12 compute-0 sudo[77640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -- inventory --format=json-pretty --filter-for-batch
Oct 11 04:26:12 compute-0 sudo[77640]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:12 compute-0 systemd[1]: libpod-conmon-a2a44bec555726beae65540e9167877ebebfa1563816fba9f0877cef58588c69.scope: Deactivated successfully.
Oct 11 04:26:12 compute-0 podman[77677]: 2025-10-11 04:26:12.925414513 +0000 UTC m=+0.054736104 container create 8a70637900728ef03088b58c40aa31847ff75485a512a12e3fb09dea16a91e1f (image=quay.io/ceph/ceph:v18, name=quizzical_curie, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 11 04:26:12 compute-0 systemd[1]: Started libpod-conmon-8a70637900728ef03088b58c40aa31847ff75485a512a12e3fb09dea16a91e1f.scope.
Oct 11 04:26:12 compute-0 podman[77677]: 2025-10-11 04:26:12.898994932 +0000 UTC m=+0.028316593 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:26:12 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:26:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad2e5b9ef720220fdd822d76daaf6c1de598d0f10d523df7a6dffb1615bd68d5/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad2e5b9ef720220fdd822d76daaf6c1de598d0f10d523df7a6dffb1615bd68d5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad2e5b9ef720220fdd822d76daaf6c1de598d0f10d523df7a6dffb1615bd68d5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:13 compute-0 podman[77677]: 2025-10-11 04:26:13.035885847 +0000 UTC m=+0.165207418 container init 8a70637900728ef03088b58c40aa31847ff75485a512a12e3fb09dea16a91e1f (image=quay.io/ceph/ceph:v18, name=quizzical_curie, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 11 04:26:13 compute-0 podman[77677]: 2025-10-11 04:26:13.041966643 +0000 UTC m=+0.171288214 container start 8a70637900728ef03088b58c40aa31847ff75485a512a12e3fb09dea16a91e1f (image=quay.io/ceph/ceph:v18, name=quizzical_curie, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:26:13 compute-0 podman[77677]: 2025-10-11 04:26:13.047437937 +0000 UTC m=+0.176759558 container attach 8a70637900728ef03088b58c40aa31847ff75485a512a12e3fb09dea16a91e1f (image=quay.io/ceph/ceph:v18, name=quizzical_curie, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:26:13 compute-0 podman[77740]: 2025-10-11 04:26:13.178641521 +0000 UTC m=+0.040231485 container create b9249244649774122d7744e62b0411c5f8ccecb214d796aac95ebf5a11d7cbce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_taussig, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:26:13 compute-0 systemd[1]: Started libpod-conmon-b9249244649774122d7744e62b0411c5f8ccecb214d796aac95ebf5a11d7cbce.scope.
Oct 11 04:26:13 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:26:13 compute-0 podman[77740]: 2025-10-11 04:26:13.253212632 +0000 UTC m=+0.114802546 container init b9249244649774122d7744e62b0411c5f8ccecb214d796aac95ebf5a11d7cbce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_taussig, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:26:13 compute-0 podman[77740]: 2025-10-11 04:26:13.159012252 +0000 UTC m=+0.020602166 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:26:13 compute-0 podman[77740]: 2025-10-11 04:26:13.257873805 +0000 UTC m=+0.119463669 container start b9249244649774122d7744e62b0411c5f8ccecb214d796aac95ebf5a11d7cbce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_taussig, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:26:13 compute-0 eager_taussig[77756]: 167 167
Oct 11 04:26:13 compute-0 podman[77740]: 2025-10-11 04:26:13.261389236 +0000 UTC m=+0.122979200 container attach b9249244649774122d7744e62b0411c5f8ccecb214d796aac95ebf5a11d7cbce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_taussig, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 11 04:26:13 compute-0 systemd[1]: libpod-b9249244649774122d7744e62b0411c5f8ccecb214d796aac95ebf5a11d7cbce.scope: Deactivated successfully.
Oct 11 04:26:13 compute-0 podman[77740]: 2025-10-11 04:26:13.262462716 +0000 UTC m=+0.124052610 container died b9249244649774122d7744e62b0411c5f8ccecb214d796aac95ebf5a11d7cbce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_taussig, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:26:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-05ceaf7ac823e47cf224efe61c66698bceac46f6b25d0febac25476aad04694b-merged.mount: Deactivated successfully.
Oct 11 04:26:13 compute-0 podman[77740]: 2025-10-11 04:26:13.311222747 +0000 UTC m=+0.172812621 container remove b9249244649774122d7744e62b0411c5f8ccecb214d796aac95ebf5a11d7cbce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_taussig, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:26:13 compute-0 systemd[1]: libpod-conmon-b9249244649774122d7744e62b0411c5f8ccecb214d796aac95ebf5a11d7cbce.scope: Deactivated successfully.
Oct 11 04:26:13 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target_autotune}] v 0) v1
Oct 11 04:26:13 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3322451582' entity='client.admin' 
Oct 11 04:26:13 compute-0 systemd[1]: libpod-8a70637900728ef03088b58c40aa31847ff75485a512a12e3fb09dea16a91e1f.scope: Deactivated successfully.
Oct 11 04:26:13 compute-0 podman[77677]: 2025-10-11 04:26:13.566021594 +0000 UTC m=+0.695343195 container died 8a70637900728ef03088b58c40aa31847ff75485a512a12e3fb09dea16a91e1f (image=quay.io/ceph/ceph:v18, name=quizzical_curie, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:26:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-ad2e5b9ef720220fdd822d76daaf6c1de598d0f10d523df7a6dffb1615bd68d5-merged.mount: Deactivated successfully.
Oct 11 04:26:13 compute-0 podman[77677]: 2025-10-11 04:26:13.603999415 +0000 UTC m=+0.733320986 container remove 8a70637900728ef03088b58c40aa31847ff75485a512a12e3fb09dea16a91e1f (image=quay.io/ceph/ceph:v18, name=quizzical_curie, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:26:13 compute-0 systemd[1]: libpod-conmon-8a70637900728ef03088b58c40aa31847ff75485a512a12e3fb09dea16a91e1f.scope: Deactivated successfully.
Oct 11 04:26:13 compute-0 podman[77806]: 2025-10-11 04:26:13.671046136 +0000 UTC m=+0.044396731 container create 5ce6bf7d3f36dd95e3d467067fba32046e1de0b0929c45c0c2f5989acd299663 (image=quay.io/ceph/ceph:v18, name=zealous_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:26:13 compute-0 systemd[1]: Started libpod-conmon-5ce6bf7d3f36dd95e3d467067fba32046e1de0b0929c45c0c2f5989acd299663.scope.
Oct 11 04:26:13 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:26:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/552aaa984f859a8f3c38d298d3a9a22540c395b808c137e96542de34c143ce01/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/552aaa984f859a8f3c38d298d3a9a22540c395b808c137e96542de34c143ce01/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/552aaa984f859a8f3c38d298d3a9a22540c395b808c137e96542de34c143ce01/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:13 compute-0 podman[77806]: 2025-10-11 04:26:13.745236692 +0000 UTC m=+0.118587317 container init 5ce6bf7d3f36dd95e3d467067fba32046e1de0b0929c45c0c2f5989acd299663 (image=quay.io/ceph/ceph:v18, name=zealous_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 11 04:26:13 compute-0 podman[77806]: 2025-10-11 04:26:13.654672217 +0000 UTC m=+0.028022832 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:26:13 compute-0 podman[77806]: 2025-10-11 04:26:13.755994022 +0000 UTC m=+0.129344657 container start 5ce6bf7d3f36dd95e3d467067fba32046e1de0b0929c45c0c2f5989acd299663 (image=quay.io/ceph/ceph:v18, name=zealous_buck, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 11 04:26:13 compute-0 podman[77806]: 2025-10-11 04:26:13.760604953 +0000 UTC m=+0.133955558 container attach 5ce6bf7d3f36dd95e3d467067fba32046e1de0b0929c45c0c2f5989acd299663 (image=quay.io/ceph/ceph:v18, name=zealous_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:26:13 compute-0 ceph-mon[74243]: from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:26:13 compute-0 ceph-mon[74243]: Added label _admin to host compute-0
Oct 11 04:26:13 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/3322451582' entity='client.admin' 
Oct 11 04:26:14 compute-0 ceph-mgr[74542]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 11 04:26:14 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0) v1
Oct 11 04:26:14 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4082115045' entity='client.admin' 
Oct 11 04:26:14 compute-0 zealous_buck[77822]: set mgr/dashboard/cluster/status
Oct 11 04:26:14 compute-0 systemd[1]: libpod-5ce6bf7d3f36dd95e3d467067fba32046e1de0b0929c45c0c2f5989acd299663.scope: Deactivated successfully.
Oct 11 04:26:14 compute-0 podman[77806]: 2025-10-11 04:26:14.416853155 +0000 UTC m=+0.790203780 container died 5ce6bf7d3f36dd95e3d467067fba32046e1de0b0929c45c0c2f5989acd299663 (image=quay.io/ceph/ceph:v18, name=zealous_buck, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 11 04:26:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-552aaa984f859a8f3c38d298d3a9a22540c395b808c137e96542de34c143ce01-merged.mount: Deactivated successfully.
Oct 11 04:26:14 compute-0 podman[77806]: 2025-10-11 04:26:14.472857096 +0000 UTC m=+0.846207731 container remove 5ce6bf7d3f36dd95e3d467067fba32046e1de0b0929c45c0c2f5989acd299663 (image=quay.io/ceph/ceph:v18, name=zealous_buck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:26:14 compute-0 systemd[1]: libpod-conmon-5ce6bf7d3f36dd95e3d467067fba32046e1de0b0929c45c0c2f5989acd299663.scope: Deactivated successfully.
Oct 11 04:26:14 compute-0 sudo[73215]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:14 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:26:14 compute-0 podman[77868]: 2025-10-11 04:26:14.767171661 +0000 UTC m=+0.066556384 container create d453dcdffe8ac6bb5e3f9f4654c9815e64b1fc625cfcdd4ea1c4b28302d1216a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_aryabhata, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 11 04:26:14 compute-0 systemd[1]: Started libpod-conmon-d453dcdffe8ac6bb5e3f9f4654c9815e64b1fc625cfcdd4ea1c4b28302d1216a.scope.
Oct 11 04:26:14 compute-0 podman[77868]: 2025-10-11 04:26:14.739734151 +0000 UTC m=+0.039118914 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:26:14 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:26:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b6051d85f2e5838b7a1b90e84d83fa86904495e058967810948ab5882c67461/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b6051d85f2e5838b7a1b90e84d83fa86904495e058967810948ab5882c67461/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b6051d85f2e5838b7a1b90e84d83fa86904495e058967810948ab5882c67461/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b6051d85f2e5838b7a1b90e84d83fa86904495e058967810948ab5882c67461/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:14 compute-0 podman[77868]: 2025-10-11 04:26:14.871067711 +0000 UTC m=+0.170452474 container init d453dcdffe8ac6bb5e3f9f4654c9815e64b1fc625cfcdd4ea1c4b28302d1216a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_aryabhata, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:26:14 compute-0 sudo[77911]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfakadfkfdmcenumqrkyhpfxyguvbpdp ; /usr/bin/python3'
Oct 11 04:26:14 compute-0 sudo[77911]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:26:14 compute-0 podman[77868]: 2025-10-11 04:26:14.885063071 +0000 UTC m=+0.184447764 container start d453dcdffe8ac6bb5e3f9f4654c9815e64b1fc625cfcdd4ea1c4b28302d1216a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_aryabhata, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 11 04:26:14 compute-0 podman[77868]: 2025-10-11 04:26:14.88906491 +0000 UTC m=+0.188449673 container attach d453dcdffe8ac6bb5e3f9f4654c9815e64b1fc625cfcdd4ea1c4b28302d1216a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 11 04:26:15 compute-0 python3[77914]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:26:15 compute-0 podman[77916]: 2025-10-11 04:26:15.137054833 +0000 UTC m=+0.065452893 container create d566cd39bc3865ca57253549cd154536ac910839963d78ff8b1cfcfdb917a161 (image=quay.io/ceph/ceph:v18, name=strange_galois, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:26:15 compute-0 systemd[1]: Started libpod-conmon-d566cd39bc3865ca57253549cd154536ac910839963d78ff8b1cfcfdb917a161.scope.
Oct 11 04:26:15 compute-0 podman[77916]: 2025-10-11 04:26:15.106482997 +0000 UTC m=+0.034881117 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:26:15 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:26:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e23a630d12ec724b0907b44593f0c527e6ed5bd3f71720f2caf552a26370ef79/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e23a630d12ec724b0907b44593f0c527e6ed5bd3f71720f2caf552a26370ef79/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:15 compute-0 podman[77916]: 2025-10-11 04:26:15.223669461 +0000 UTC m=+0.152067581 container init d566cd39bc3865ca57253549cd154536ac910839963d78ff8b1cfcfdb917a161 (image=quay.io/ceph/ceph:v18, name=strange_galois, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:26:15 compute-0 podman[77916]: 2025-10-11 04:26:15.233628241 +0000 UTC m=+0.162026301 container start d566cd39bc3865ca57253549cd154536ac910839963d78ff8b1cfcfdb917a161 (image=quay.io/ceph/ceph:v18, name=strange_galois, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:26:15 compute-0 podman[77916]: 2025-10-11 04:26:15.236938124 +0000 UTC m=+0.165336184 container attach d566cd39bc3865ca57253549cd154536ac910839963d78ff8b1cfcfdb917a161 (image=quay.io/ceph/ceph:v18, name=strange_galois, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 11 04:26:15 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/4082115045' entity='client.admin' 
Oct 11 04:26:15 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0) v1
Oct 11 04:26:15 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1118387021' entity='client.admin' 
Oct 11 04:26:15 compute-0 systemd[1]: libpod-d566cd39bc3865ca57253549cd154536ac910839963d78ff8b1cfcfdb917a161.scope: Deactivated successfully.
Oct 11 04:26:15 compute-0 podman[77916]: 2025-10-11 04:26:15.830621721 +0000 UTC m=+0.759019801 container died d566cd39bc3865ca57253549cd154536ac910839963d78ff8b1cfcfdb917a161 (image=quay.io/ceph/ceph:v18, name=strange_galois, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:26:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-e23a630d12ec724b0907b44593f0c527e6ed5bd3f71720f2caf552a26370ef79-merged.mount: Deactivated successfully.
Oct 11 04:26:15 compute-0 podman[77916]: 2025-10-11 04:26:15.892065633 +0000 UTC m=+0.820463673 container remove d566cd39bc3865ca57253549cd154536ac910839963d78ff8b1cfcfdb917a161 (image=quay.io/ceph/ceph:v18, name=strange_galois, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 11 04:26:15 compute-0 systemd[1]: libpod-conmon-d566cd39bc3865ca57253549cd154536ac910839963d78ff8b1cfcfdb917a161.scope: Deactivated successfully.
Oct 11 04:26:15 compute-0 sudo[77911]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:16 compute-0 ceph-mgr[74542]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Oct 11 04:26:16 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 04:26:16 compute-0 ceph-mon[74243]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Oct 11 04:26:16 compute-0 awesome_aryabhata[77897]: [
Oct 11 04:26:16 compute-0 awesome_aryabhata[77897]:     {
Oct 11 04:26:16 compute-0 awesome_aryabhata[77897]:         "available": false,
Oct 11 04:26:16 compute-0 awesome_aryabhata[77897]:         "ceph_device": false,
Oct 11 04:26:16 compute-0 awesome_aryabhata[77897]:         "device_id": "QEMU_DVD-ROM_QM00001",
Oct 11 04:26:16 compute-0 awesome_aryabhata[77897]:         "lsm_data": {},
Oct 11 04:26:16 compute-0 awesome_aryabhata[77897]:         "lvs": [],
Oct 11 04:26:16 compute-0 awesome_aryabhata[77897]:         "path": "/dev/sr0",
Oct 11 04:26:16 compute-0 awesome_aryabhata[77897]:         "rejected_reasons": [
Oct 11 04:26:16 compute-0 awesome_aryabhata[77897]:             "Insufficient space (<5GB)",
Oct 11 04:26:16 compute-0 awesome_aryabhata[77897]:             "Has a FileSystem"
Oct 11 04:26:16 compute-0 awesome_aryabhata[77897]:         ],
Oct 11 04:26:16 compute-0 awesome_aryabhata[77897]:         "sys_api": {
Oct 11 04:26:16 compute-0 awesome_aryabhata[77897]:             "actuators": null,
Oct 11 04:26:16 compute-0 awesome_aryabhata[77897]:             "device_nodes": "sr0",
Oct 11 04:26:16 compute-0 awesome_aryabhata[77897]:             "devname": "sr0",
Oct 11 04:26:16 compute-0 awesome_aryabhata[77897]:             "human_readable_size": "482.00 KB",
Oct 11 04:26:16 compute-0 awesome_aryabhata[77897]:             "id_bus": "ata",
Oct 11 04:26:16 compute-0 awesome_aryabhata[77897]:             "model": "QEMU DVD-ROM",
Oct 11 04:26:16 compute-0 awesome_aryabhata[77897]:             "nr_requests": "2",
Oct 11 04:26:16 compute-0 awesome_aryabhata[77897]:             "parent": "/dev/sr0",
Oct 11 04:26:16 compute-0 awesome_aryabhata[77897]:             "partitions": {},
Oct 11 04:26:16 compute-0 awesome_aryabhata[77897]:             "path": "/dev/sr0",
Oct 11 04:26:16 compute-0 awesome_aryabhata[77897]:             "removable": "1",
Oct 11 04:26:16 compute-0 awesome_aryabhata[77897]:             "rev": "2.5+",
Oct 11 04:26:16 compute-0 awesome_aryabhata[77897]:             "ro": "0",
Oct 11 04:26:16 compute-0 awesome_aryabhata[77897]:             "rotational": "0",
Oct 11 04:26:16 compute-0 awesome_aryabhata[77897]:             "sas_address": "",
Oct 11 04:26:16 compute-0 awesome_aryabhata[77897]:             "sas_device_handle": "",
Oct 11 04:26:16 compute-0 awesome_aryabhata[77897]:             "scheduler_mode": "mq-deadline",
Oct 11 04:26:16 compute-0 awesome_aryabhata[77897]:             "sectors": 0,
Oct 11 04:26:16 compute-0 awesome_aryabhata[77897]:             "sectorsize": "2048",
Oct 11 04:26:16 compute-0 awesome_aryabhata[77897]:             "size": 493568.0,
Oct 11 04:26:16 compute-0 awesome_aryabhata[77897]:             "support_discard": "2048",
Oct 11 04:26:16 compute-0 awesome_aryabhata[77897]:             "type": "disk",
Oct 11 04:26:16 compute-0 awesome_aryabhata[77897]:             "vendor": "QEMU"
Oct 11 04:26:16 compute-0 awesome_aryabhata[77897]:         }
Oct 11 04:26:16 compute-0 awesome_aryabhata[77897]:     }
Oct 11 04:26:16 compute-0 awesome_aryabhata[77897]: ]
Oct 11 04:26:16 compute-0 podman[77868]: 2025-10-11 04:26:16.283567639 +0000 UTC m=+1.582952312 container died d453dcdffe8ac6bb5e3f9f4654c9815e64b1fc625cfcdd4ea1c4b28302d1216a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_aryabhata, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:26:16 compute-0 systemd[1]: libpod-d453dcdffe8ac6bb5e3f9f4654c9815e64b1fc625cfcdd4ea1c4b28302d1216a.scope: Deactivated successfully.
Oct 11 04:26:16 compute-0 systemd[1]: libpod-d453dcdffe8ac6bb5e3f9f4654c9815e64b1fc625cfcdd4ea1c4b28302d1216a.scope: Consumed 1.425s CPU time.
Oct 11 04:26:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-1b6051d85f2e5838b7a1b90e84d83fa86904495e058967810948ab5882c67461-merged.mount: Deactivated successfully.
Oct 11 04:26:16 compute-0 podman[77868]: 2025-10-11 04:26:16.343561358 +0000 UTC m=+1.642946051 container remove d453dcdffe8ac6bb5e3f9f4654c9815e64b1fc625cfcdd4ea1c4b28302d1216a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 11 04:26:16 compute-0 systemd[1]: libpod-conmon-d453dcdffe8ac6bb5e3f9f4654c9815e64b1fc625cfcdd4ea1c4b28302d1216a.scope: Deactivated successfully.
Oct 11 04:26:16 compute-0 sudo[77640]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:16 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:26:16 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:16 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:26:16 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:16 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:26:16 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:16 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:26:16 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:16 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 11 04:26:16 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 11 04:26:16 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:26:16 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:26:16 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:26:16 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:26:16 compute-0 ceph-mgr[74542]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Oct 11 04:26:16 compute-0 ceph-mgr[74542]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Oct 11 04:26:16 compute-0 sudo[79802]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:26:16 compute-0 sudo[79802]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:16 compute-0 sudo[79802]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:16 compute-0 sudo[79827]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Oct 11 04:26:16 compute-0 sudo[79827]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:16 compute-0 sudo[79827]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:16 compute-0 sudo[79852]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:26:16 compute-0 sudo[79852]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:16 compute-0 sudo[79852]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:16 compute-0 sudo[79900]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-166d0489-2ae7-59eb-961c-c1b5cda4b45a/etc/ceph
Oct 11 04:26:16 compute-0 sudo[79900]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:16 compute-0 sudo[79900]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:16 compute-0 sudo[79950]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:26:16 compute-0 sudo[79997]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exmugcpiiztchdpemvivxatrlftnlftf ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1760156776.2763844-32996-17066483029037/async_wrapper.py j230601263756 30 /home/zuul/.ansible/tmp/ansible-tmp-1760156776.2763844-32996-17066483029037/AnsiballZ_command.py _'
Oct 11 04:26:16 compute-0 sudo[79950]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:16 compute-0 sudo[79997]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:26:16 compute-0 sudo[79950]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:16 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/1118387021' entity='client.admin' 
Oct 11 04:26:16 compute-0 ceph-mon[74243]: pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 04:26:16 compute-0 ceph-mon[74243]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Oct 11 04:26:16 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:16 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:16 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:16 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:16 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 11 04:26:16 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:26:16 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:26:16 compute-0 sudo[80002]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-166d0489-2ae7-59eb-961c-c1b5cda4b45a/etc/ceph/ceph.conf.new
Oct 11 04:26:16 compute-0 sudo[80002]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:16 compute-0 sudo[80002]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:16 compute-0 sudo[80027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:26:16 compute-0 sudo[80027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:16 compute-0 sudo[80027]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:16 compute-0 ansible-async_wrapper.py[80001]: Invoked with j230601263756 30 /home/zuul/.ansible/tmp/ansible-tmp-1760156776.2763844-32996-17066483029037/AnsiballZ_command.py _
Oct 11 04:26:16 compute-0 ansible-async_wrapper.py[80058]: Starting module and watcher
Oct 11 04:26:16 compute-0 ansible-async_wrapper.py[80058]: Start watching 80061 (30)
Oct 11 04:26:16 compute-0 ansible-async_wrapper.py[80061]: Start module (80061)
Oct 11 04:26:16 compute-0 ansible-async_wrapper.py[80001]: Return async_wrapper task started.
Oct 11 04:26:16 compute-0 sudo[79997]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:16 compute-0 sudo[80052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-166d0489-2ae7-59eb-961c-c1b5cda4b45a
Oct 11 04:26:16 compute-0 sudo[80052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:17 compute-0 sudo[80052]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:17 compute-0 sudo[80082]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:26:17 compute-0 sudo[80082]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:17 compute-0 sudo[80082]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:17 compute-0 python3[80068]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:26:17 compute-0 sudo[80107]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-166d0489-2ae7-59eb-961c-c1b5cda4b45a/etc/ceph/ceph.conf.new
Oct 11 04:26:17 compute-0 sudo[80107]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:17 compute-0 sudo[80107]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:17 compute-0 podman[80110]: 2025-10-11 04:26:17.164654985 +0000 UTC m=+0.059548834 container create 9bc4ede0900790422d5f3b8c726edaa25bf3b581a79bfb04a0617db7d69014da (image=quay.io/ceph/ceph:v18, name=jovial_agnesi, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:26:17 compute-0 systemd[1]: Started libpod-conmon-9bc4ede0900790422d5f3b8c726edaa25bf3b581a79bfb04a0617db7d69014da.scope.
Oct 11 04:26:17 compute-0 podman[80110]: 2025-10-11 04:26:17.135401518 +0000 UTC m=+0.030295407 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:26:17 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:26:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb36c19c50cde6517081f905a7939574a05d593ecc1adcfeb35e42d97541e8fd/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb36c19c50cde6517081f905a7939574a05d593ecc1adcfeb35e42d97541e8fd/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:17 compute-0 podman[80110]: 2025-10-11 04:26:17.266624493 +0000 UTC m=+0.161518332 container init 9bc4ede0900790422d5f3b8c726edaa25bf3b581a79bfb04a0617db7d69014da (image=quay.io/ceph/ceph:v18, name=jovial_agnesi, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:26:17 compute-0 podman[80110]: 2025-10-11 04:26:17.276607444 +0000 UTC m=+0.171501283 container start 9bc4ede0900790422d5f3b8c726edaa25bf3b581a79bfb04a0617db7d69014da (image=quay.io/ceph/ceph:v18, name=jovial_agnesi, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:26:17 compute-0 podman[80110]: 2025-10-11 04:26:17.28160262 +0000 UTC m=+0.176496469 container attach 9bc4ede0900790422d5f3b8c726edaa25bf3b581a79bfb04a0617db7d69014da (image=quay.io/ceph/ceph:v18, name=jovial_agnesi, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 11 04:26:17 compute-0 sudo[80173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:26:17 compute-0 sudo[80173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:17 compute-0 sudo[80173]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:17 compute-0 sudo[80199]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-166d0489-2ae7-59eb-961c-c1b5cda4b45a/etc/ceph/ceph.conf.new
Oct 11 04:26:17 compute-0 sudo[80199]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:17 compute-0 sudo[80199]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:17 compute-0 sudo[80224]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:26:17 compute-0 sudo[80224]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:17 compute-0 sudo[80224]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:17 compute-0 sudo[80249]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-166d0489-2ae7-59eb-961c-c1b5cda4b45a/etc/ceph/ceph.conf.new
Oct 11 04:26:17 compute-0 sudo[80249]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:17 compute-0 sudo[80249]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:17 compute-0 sudo[80293]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:26:17 compute-0 sudo[80293]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:17 compute-0 sudo[80293]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:17 compute-0 sudo[80318]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-166d0489-2ae7-59eb-961c-c1b5cda4b45a/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Oct 11 04:26:17 compute-0 sudo[80318]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:17 compute-0 sudo[80318]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:17 compute-0 ceph-mgr[74542]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/config/ceph.conf
Oct 11 04:26:17 compute-0 ceph-mgr[74542]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/config/ceph.conf
Oct 11 04:26:17 compute-0 ceph-mon[74243]: Updating compute-0:/etc/ceph/ceph.conf
Oct 11 04:26:17 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 11 04:26:17 compute-0 jovial_agnesi[80163]: 
Oct 11 04:26:17 compute-0 jovial_agnesi[80163]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Oct 11 04:26:17 compute-0 systemd[1]: libpod-9bc4ede0900790422d5f3b8c726edaa25bf3b581a79bfb04a0617db7d69014da.scope: Deactivated successfully.
Oct 11 04:26:17 compute-0 conmon[80163]: conmon 9bc4ede0900790422d5f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9bc4ede0900790422d5f3b8c726edaa25bf3b581a79bfb04a0617db7d69014da.scope/container/memory.events
Oct 11 04:26:17 compute-0 podman[80110]: 2025-10-11 04:26:17.845820052 +0000 UTC m=+0.740713891 container died 9bc4ede0900790422d5f3b8c726edaa25bf3b581a79bfb04a0617db7d69014da (image=quay.io/ceph/ceph:v18, name=jovial_agnesi, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:26:17 compute-0 sudo[80343]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:26:17 compute-0 sudo[80343]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:17 compute-0 sudo[80343]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-bb36c19c50cde6517081f905a7939574a05d593ecc1adcfeb35e42d97541e8fd-merged.mount: Deactivated successfully.
Oct 11 04:26:17 compute-0 podman[80110]: 2025-10-11 04:26:17.896990663 +0000 UTC m=+0.791884512 container remove 9bc4ede0900790422d5f3b8c726edaa25bf3b581a79bfb04a0617db7d69014da (image=quay.io/ceph/ceph:v18, name=jovial_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:26:17 compute-0 systemd[1]: libpod-conmon-9bc4ede0900790422d5f3b8c726edaa25bf3b581a79bfb04a0617db7d69014da.scope: Deactivated successfully.
Oct 11 04:26:17 compute-0 ansible-async_wrapper.py[80061]: Module complete (80061)
Oct 11 04:26:17 compute-0 sudo[80381]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/config
Oct 11 04:26:17 compute-0 sudo[80381]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:17 compute-0 sudo[80381]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:17 compute-0 sudo[80406]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:26:18 compute-0 sudo[80406]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:18 compute-0 sudo[80406]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:18 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 04:26:18 compute-0 sudo[80443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-166d0489-2ae7-59eb-961c-c1b5cda4b45a/var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/config
Oct 11 04:26:18 compute-0 sudo[80443]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:18 compute-0 sudo[80443]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:18 compute-0 sudo[80479]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:26:18 compute-0 sudo[80479]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:18 compute-0 sudo[80479]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:18 compute-0 sudo[80543]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-geerwtcmswbfazwnduvwcumayructjkk ; /usr/bin/python3'
Oct 11 04:26:18 compute-0 sudo[80543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:26:18 compute-0 sudo[80510]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-166d0489-2ae7-59eb-961c-c1b5cda4b45a/var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/config/ceph.conf.new
Oct 11 04:26:18 compute-0 sudo[80510]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:18 compute-0 sudo[80510]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:18 compute-0 sudo[80555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:26:18 compute-0 sudo[80555]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:18 compute-0 sudo[80555]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:18 compute-0 python3[80552]: ansible-ansible.legacy.async_status Invoked with jid=j230601263756.80001 mode=status _async_dir=/root/.ansible_async
Oct 11 04:26:18 compute-0 sudo[80543]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:18 compute-0 sudo[80580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-166d0489-2ae7-59eb-961c-c1b5cda4b45a
Oct 11 04:26:18 compute-0 sudo[80580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:18 compute-0 sudo[80580]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:18 compute-0 sudo[80628]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:26:18 compute-0 sudo[80628]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:18 compute-0 sudo[80628]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:18 compute-0 sudo[80676]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-enrcnhqqixjreahyydcqryagijmtdfim ; /usr/bin/python3'
Oct 11 04:26:18 compute-0 sudo[80676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:26:18 compute-0 sudo[80677]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-166d0489-2ae7-59eb-961c-c1b5cda4b45a/var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/config/ceph.conf.new
Oct 11 04:26:18 compute-0 sudo[80677]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:18 compute-0 sudo[80677]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:18 compute-0 python3[80682]: ansible-ansible.legacy.async_status Invoked with jid=j230601263756.80001 mode=cleanup _async_dir=/root/.ansible_async
Oct 11 04:26:18 compute-0 sudo[80676]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:18 compute-0 sudo[80727]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:26:18 compute-0 sudo[80727]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:18 compute-0 sudo[80727]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:18 compute-0 sudo[80752]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-166d0489-2ae7-59eb-961c-c1b5cda4b45a/var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/config/ceph.conf.new
Oct 11 04:26:18 compute-0 sudo[80752]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:18 compute-0 sudo[80752]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:18 compute-0 sudo[80777]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:26:18 compute-0 sudo[80777]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:18 compute-0 sudo[80777]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:18 compute-0 ceph-mon[74243]: Updating compute-0:/var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/config/ceph.conf
Oct 11 04:26:18 compute-0 ceph-mon[74243]: from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 11 04:26:18 compute-0 ceph-mon[74243]: pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 04:26:18 compute-0 sudo[80802]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-166d0489-2ae7-59eb-961c-c1b5cda4b45a/var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/config/ceph.conf.new
Oct 11 04:26:18 compute-0 sudo[80802]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:18 compute-0 sudo[80802]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:18 compute-0 sudo[80863]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcsujsjaugfralgpxlakxvglvhilvkus ; /usr/bin/python3'
Oct 11 04:26:18 compute-0 sudo[80863]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:26:18 compute-0 sudo[80835]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:26:18 compute-0 sudo[80835]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:18 compute-0 sudo[80835]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:18 compute-0 sudo[80878]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-166d0489-2ae7-59eb-961c-c1b5cda4b45a/var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/config/ceph.conf.new /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/config/ceph.conf
Oct 11 04:26:18 compute-0 sudo[80878]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:18 compute-0 sudo[80878]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:18 compute-0 ceph-mgr[74542]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct 11 04:26:18 compute-0 ceph-mgr[74542]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct 11 04:26:19 compute-0 sudo[80903]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:26:19 compute-0 sudo[80903]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:19 compute-0 sudo[80903]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:19 compute-0 python3[80875]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:26:19 compute-0 sudo[80928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Oct 11 04:26:19 compute-0 sudo[80928]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:19 compute-0 sudo[80928]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:19 compute-0 sudo[80863]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:19 compute-0 sudo[80955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:26:19 compute-0 sudo[80955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:19 compute-0 sudo[80955]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:19 compute-0 sudo[80980]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-166d0489-2ae7-59eb-961c-c1b5cda4b45a/etc/ceph
Oct 11 04:26:19 compute-0 sudo[80980]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:19 compute-0 sudo[80980]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:19 compute-0 sudo[81005]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:26:19 compute-0 sudo[81005]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:19 compute-0 sudo[81005]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:19 compute-0 sudo[81030]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-166d0489-2ae7-59eb-961c-c1b5cda4b45a/etc/ceph/ceph.client.admin.keyring.new
Oct 11 04:26:19 compute-0 sudo[81030]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:19 compute-0 sudo[81030]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:19 compute-0 sudo[81055]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:26:19 compute-0 sudo[81055]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:19 compute-0 sudo[81055]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:19 compute-0 sudo[81116]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvkcgigbhcgcykjitpziznwhehovgtaz ; /usr/bin/python3'
Oct 11 04:26:19 compute-0 sudo[81116]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:26:19 compute-0 sudo[81088]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-166d0489-2ae7-59eb-961c-c1b5cda4b45a
Oct 11 04:26:19 compute-0 sudo[81088]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:19 compute-0 sudo[81088]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:19 compute-0 sudo[81131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:26:19 compute-0 sudo[81131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:19 compute-0 sudo[81131]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:19 compute-0 python3[81128]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:26:19 compute-0 sudo[81156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-166d0489-2ae7-59eb-961c-c1b5cda4b45a/etc/ceph/ceph.client.admin.keyring.new
Oct 11 04:26:19 compute-0 sudo[81156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:19 compute-0 sudo[81156]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:19 compute-0 podman[81163]: 2025-10-11 04:26:19.597394228 +0000 UTC m=+0.071697744 container create 8434a829851919563b11e77b1e56b29bfa580cc59dfec4d5248ae0e0d86ee865 (image=quay.io/ceph/ceph:v18, name=nifty_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 11 04:26:19 compute-0 systemd[1]: Started libpod-conmon-8434a829851919563b11e77b1e56b29bfa580cc59dfec4d5248ae0e0d86ee865.scope.
Oct 11 04:26:19 compute-0 podman[81163]: 2025-10-11 04:26:19.566703868 +0000 UTC m=+0.041007434 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:26:19 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:26:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d592a8de815c1d247613bab2c3e610d721ff4ed7f9d8b3f712d5856316d5e6d/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d592a8de815c1d247613bab2c3e610d721ff4ed7f9d8b3f712d5856316d5e6d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d592a8de815c1d247613bab2c3e610d721ff4ed7f9d8b3f712d5856316d5e6d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:19 compute-0 podman[81163]: 2025-10-11 04:26:19.680624551 +0000 UTC m=+0.154928117 container init 8434a829851919563b11e77b1e56b29bfa580cc59dfec4d5248ae0e0d86ee865 (image=quay.io/ceph/ceph:v18, name=nifty_yalow, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 11 04:26:19 compute-0 podman[81163]: 2025-10-11 04:26:19.691432972 +0000 UTC m=+0.165736438 container start 8434a829851919563b11e77b1e56b29bfa580cc59dfec4d5248ae0e0d86ee865 (image=quay.io/ceph/ceph:v18, name=nifty_yalow, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:26:19 compute-0 podman[81163]: 2025-10-11 04:26:19.694529487 +0000 UTC m=+0.168833003 container attach 8434a829851919563b11e77b1e56b29bfa580cc59dfec4d5248ae0e0d86ee865 (image=quay.io/ceph/ceph:v18, name=nifty_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3)
Oct 11 04:26:19 compute-0 sudo[81220]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:26:19 compute-0 sudo[81220]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:19 compute-0 sudo[81220]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:19 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:26:19 compute-0 sudo[81249]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-166d0489-2ae7-59eb-961c-c1b5cda4b45a/etc/ceph/ceph.client.admin.keyring.new
Oct 11 04:26:19 compute-0 sudo[81249]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:19 compute-0 sudo[81249]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:19 compute-0 ceph-mon[74243]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct 11 04:26:19 compute-0 sudo[81274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:26:19 compute-0 sudo[81274]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:19 compute-0 sudo[81274]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:19 compute-0 sudo[81299]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-166d0489-2ae7-59eb-961c-c1b5cda4b45a/etc/ceph/ceph.client.admin.keyring.new
Oct 11 04:26:19 compute-0 sudo[81299]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:19 compute-0 sudo[81299]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:19 compute-0 sudo[81324]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:26:19 compute-0 sudo[81324]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:19 compute-0 sudo[81324]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:20 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 04:26:20 compute-0 sudo[81366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-166d0489-2ae7-59eb-961c-c1b5cda4b45a/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Oct 11 04:26:20 compute-0 sudo[81366]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:20 compute-0 sudo[81366]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:20 compute-0 ceph-mgr[74542]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/config/ceph.client.admin.keyring
Oct 11 04:26:20 compute-0 ceph-mgr[74542]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/config/ceph.client.admin.keyring
Oct 11 04:26:20 compute-0 sudo[81393]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:26:20 compute-0 sudo[81393]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:20 compute-0 sudo[81393]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:20 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 11 04:26:20 compute-0 nifty_yalow[81221]: 
Oct 11 04:26:20 compute-0 nifty_yalow[81221]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Oct 11 04:26:20 compute-0 sudo[81418]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/config
Oct 11 04:26:20 compute-0 sudo[81418]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:20 compute-0 sudo[81418]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:20 compute-0 systemd[1]: libpod-8434a829851919563b11e77b1e56b29bfa580cc59dfec4d5248ae0e0d86ee865.scope: Deactivated successfully.
Oct 11 04:26:20 compute-0 podman[81163]: 2025-10-11 04:26:20.221462095 +0000 UTC m=+0.695765611 container died 8434a829851919563b11e77b1e56b29bfa580cc59dfec4d5248ae0e0d86ee865 (image=quay.io/ceph/ceph:v18, name=nifty_yalow, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:26:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-5d592a8de815c1d247613bab2c3e610d721ff4ed7f9d8b3f712d5856316d5e6d-merged.mount: Deactivated successfully.
Oct 11 04:26:20 compute-0 podman[81163]: 2025-10-11 04:26:20.273266499 +0000 UTC m=+0.747569985 container remove 8434a829851919563b11e77b1e56b29bfa580cc59dfec4d5248ae0e0d86ee865 (image=quay.io/ceph/ceph:v18, name=nifty_yalow, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 11 04:26:20 compute-0 systemd[1]: libpod-conmon-8434a829851919563b11e77b1e56b29bfa580cc59dfec4d5248ae0e0d86ee865.scope: Deactivated successfully.
Oct 11 04:26:20 compute-0 sudo[81116]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:20 compute-0 sudo[81446]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:26:20 compute-0 sudo[81446]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:20 compute-0 sudo[81446]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:20 compute-0 sudo[81481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-166d0489-2ae7-59eb-961c-c1b5cda4b45a/var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/config
Oct 11 04:26:20 compute-0 sudo[81481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:20 compute-0 sudo[81481]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:20 compute-0 sudo[81506]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:26:20 compute-0 sudo[81506]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:20 compute-0 sudo[81506]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:20 compute-0 sudo[81531]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-166d0489-2ae7-59eb-961c-c1b5cda4b45a/var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/config/ceph.client.admin.keyring.new
Oct 11 04:26:20 compute-0 sudo[81531]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:20 compute-0 sudo[81531]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:20 compute-0 sudo[81556]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:26:20 compute-0 sudo[81556]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:20 compute-0 sudo[81556]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:20 compute-0 sudo[81604]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqsxajyqebwklvyhdfyrahresntfpwgg ; /usr/bin/python3'
Oct 11 04:26:20 compute-0 sudo[81604]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:26:20 compute-0 sudo[81605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-166d0489-2ae7-59eb-961c-c1b5cda4b45a
Oct 11 04:26:20 compute-0 sudo[81605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:20 compute-0 sudo[81605]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:20 compute-0 sudo[81632]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:26:20 compute-0 sudo[81632]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:20 compute-0 sudo[81632]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:20 compute-0 ceph-mon[74243]: pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 04:26:20 compute-0 ceph-mon[74243]: Updating compute-0:/var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/config/ceph.client.admin.keyring
Oct 11 04:26:20 compute-0 ceph-mon[74243]: from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 11 04:26:20 compute-0 python3[81610]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:26:20 compute-0 sudo[81657]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-166d0489-2ae7-59eb-961c-c1b5cda4b45a/var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/config/ceph.client.admin.keyring.new
Oct 11 04:26:20 compute-0 sudo[81657]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:20 compute-0 sudo[81657]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:20 compute-0 podman[81680]: 2025-10-11 04:26:20.934108592 +0000 UTC m=+0.064555500 container create ce6fbb4fb4dea3ced11cb15b59357970582977ed8e7b991e505c0e8495152449 (image=quay.io/ceph/ceph:v18, name=quirky_banzai, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:26:20 compute-0 systemd[1]: Started libpod-conmon-ce6fbb4fb4dea3ced11cb15b59357970582977ed8e7b991e505c0e8495152449.scope.
Oct 11 04:26:21 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:26:21 compute-0 sudo[81718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:26:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca4bbb6fe60f11e40213be637ef4ab2a953da00140dff380df91a2f06c4181d6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca4bbb6fe60f11e40213be637ef4ab2a953da00140dff380df91a2f06c4181d6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca4bbb6fe60f11e40213be637ef4ab2a953da00140dff380df91a2f06c4181d6/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:21 compute-0 podman[81680]: 2025-10-11 04:26:20.911844195 +0000 UTC m=+0.042291153 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:26:21 compute-0 sudo[81718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:21 compute-0 sudo[81718]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:21 compute-0 podman[81680]: 2025-10-11 04:26:21.02261921 +0000 UTC m=+0.153066158 container init ce6fbb4fb4dea3ced11cb15b59357970582977ed8e7b991e505c0e8495152449 (image=quay.io/ceph/ceph:v18, name=quirky_banzai, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 11 04:26:21 compute-0 podman[81680]: 2025-10-11 04:26:21.031536892 +0000 UTC m=+0.161983820 container start ce6fbb4fb4dea3ced11cb15b59357970582977ed8e7b991e505c0e8495152449 (image=quay.io/ceph/ceph:v18, name=quirky_banzai, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 11 04:26:21 compute-0 podman[81680]: 2025-10-11 04:26:21.035352533 +0000 UTC m=+0.165799451 container attach ce6fbb4fb4dea3ced11cb15b59357970582977ed8e7b991e505c0e8495152449 (image=quay.io/ceph/ceph:v18, name=quirky_banzai, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 11 04:26:21 compute-0 sudo[81748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-166d0489-2ae7-59eb-961c-c1b5cda4b45a/var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/config/ceph.client.admin.keyring.new
Oct 11 04:26:21 compute-0 sudo[81748]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:21 compute-0 sudo[81748]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:21 compute-0 sudo[81774]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:26:21 compute-0 sudo[81774]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:21 compute-0 sudo[81774]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:21 compute-0 sudo[81799]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-166d0489-2ae7-59eb-961c-c1b5cda4b45a/var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/config/ceph.client.admin.keyring.new
Oct 11 04:26:21 compute-0 sudo[81799]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:21 compute-0 sudo[81799]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:21 compute-0 sudo[81824]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:26:21 compute-0 sudo[81824]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:21 compute-0 sudo[81824]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:21 compute-0 sudo[81868]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-166d0489-2ae7-59eb-961c-c1b5cda4b45a/var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/config/ceph.client.admin.keyring.new /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/config/ceph.client.admin.keyring
Oct 11 04:26:21 compute-0 sudo[81868]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:21 compute-0 sudo[81868]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:21 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:26:21 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:21 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:26:21 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:21 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:26:21 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:21 compute-0 ceph-mgr[74542]: [progress INFO root] update: starting ev 34a22a66-b145-4d6b-9e40-b63b9ee78155 (Updating crash deployment (+1 -> 1))
Oct 11 04:26:21 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Oct 11 04:26:21 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct 11 04:26:21 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Oct 11 04:26:21 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:26:21 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:26:21 compute-0 ceph-mgr[74542]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Oct 11 04:26:21 compute-0 ceph-mgr[74542]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Oct 11 04:26:21 compute-0 sudo[81893]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:26:21 compute-0 sudo[81893]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:21 compute-0 sudo[81893]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:21 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0) v1
Oct 11 04:26:21 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3177943489' entity='client.admin' 
Oct 11 04:26:21 compute-0 systemd[1]: libpod-ce6fbb4fb4dea3ced11cb15b59357970582977ed8e7b991e505c0e8495152449.scope: Deactivated successfully.
Oct 11 04:26:21 compute-0 podman[81680]: 2025-10-11 04:26:21.587880002 +0000 UTC m=+0.718326950 container died ce6fbb4fb4dea3ced11cb15b59357970582977ed8e7b991e505c0e8495152449 (image=quay.io/ceph/ceph:v18, name=quirky_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:26:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-ca4bbb6fe60f11e40213be637ef4ab2a953da00140dff380df91a2f06c4181d6-merged.mount: Deactivated successfully.
Oct 11 04:26:21 compute-0 sudo[81919]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:26:21 compute-0 sudo[81919]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:21 compute-0 sudo[81919]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:21 compute-0 podman[81680]: 2025-10-11 04:26:21.649855324 +0000 UTC m=+0.780302242 container remove ce6fbb4fb4dea3ced11cb15b59357970582977ed8e7b991e505c0e8495152449 (image=quay.io/ceph/ceph:v18, name=quirky_banzai, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:26:21 compute-0 systemd[1]: libpod-conmon-ce6fbb4fb4dea3ced11cb15b59357970582977ed8e7b991e505c0e8495152449.scope: Deactivated successfully.
Oct 11 04:26:21 compute-0 sudo[81604]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:21 compute-0 sudo[81956]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:26:21 compute-0 sudo[81956]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:21 compute-0 sudo[81956]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:21 compute-0 sudo[81981]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a
Oct 11 04:26:21 compute-0 sudo[81981]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:21 compute-0 sudo[82028]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lafdcginmuttegarrpgektgcbqetzjck ; /usr/bin/python3'
Oct 11 04:26:21 compute-0 sudo[82028]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:26:21 compute-0 ansible-async_wrapper.py[80058]: Done in kid B.
Oct 11 04:26:22 compute-0 python3[82031]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:26:22 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 04:26:22 compute-0 podman[82047]: 2025-10-11 04:26:22.086794598 +0000 UTC m=+0.057011269 container create f2743ba1c78b95cd45803991e531aae084399dee0ad4bfa7f96fe1e9c787f8aa (image=quay.io/ceph/ceph:v18, name=competent_darwin, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 11 04:26:22 compute-0 systemd[1]: Started libpod-conmon-f2743ba1c78b95cd45803991e531aae084399dee0ad4bfa7f96fe1e9c787f8aa.scope.
Oct 11 04:26:22 compute-0 podman[82047]: 2025-10-11 04:26:22.06155235 +0000 UTC m=+0.031769061 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:26:22 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:26:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f28088852f7ecde7861e8397431b1715aaa94d61585e50ab8f1fb9f50f864c46/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f28088852f7ecde7861e8397431b1715aaa94d61585e50ab8f1fb9f50f864c46/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f28088852f7ecde7861e8397431b1715aaa94d61585e50ab8f1fb9f50f864c46/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:22 compute-0 podman[82047]: 2025-10-11 04:26:22.196849103 +0000 UTC m=+0.167065794 container init f2743ba1c78b95cd45803991e531aae084399dee0ad4bfa7f96fe1e9c787f8aa (image=quay.io/ceph/ceph:v18, name=competent_darwin, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 11 04:26:22 compute-0 podman[82047]: 2025-10-11 04:26:22.202628547 +0000 UTC m=+0.172845218 container start f2743ba1c78b95cd45803991e531aae084399dee0ad4bfa7f96fe1e9c787f8aa (image=quay.io/ceph/ceph:v18, name=competent_darwin, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 11 04:26:22 compute-0 podman[82047]: 2025-10-11 04:26:22.205661112 +0000 UTC m=+0.175877773 container attach f2743ba1c78b95cd45803991e531aae084399dee0ad4bfa7f96fe1e9c787f8aa (image=quay.io/ceph/ceph:v18, name=competent_darwin, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:26:22 compute-0 podman[82091]: 2025-10-11 04:26:22.234276324 +0000 UTC m=+0.064318577 container create d11ecaf9d9b51f3aab8be15e55e6ed7c70d1891ab154fa3b4edf866f681f88ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_grothendieck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 11 04:26:22 compute-0 systemd[1]: Started libpod-conmon-d11ecaf9d9b51f3aab8be15e55e6ed7c70d1891ab154fa3b4edf866f681f88ff.scope.
Oct 11 04:26:22 compute-0 podman[82091]: 2025-10-11 04:26:22.208417461 +0000 UTC m=+0.038459754 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:26:22 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:26:22 compute-0 podman[82091]: 2025-10-11 04:26:22.331966415 +0000 UTC m=+0.162008758 container init d11ecaf9d9b51f3aab8be15e55e6ed7c70d1891ab154fa3b4edf866f681f88ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_grothendieck, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 11 04:26:22 compute-0 podman[82091]: 2025-10-11 04:26:22.341323558 +0000 UTC m=+0.171365841 container start d11ecaf9d9b51f3aab8be15e55e6ed7c70d1891ab154fa3b4edf866f681f88ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_grothendieck, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:26:22 compute-0 systemd[1]: libpod-d11ecaf9d9b51f3aab8be15e55e6ed7c70d1891ab154fa3b4edf866f681f88ff.scope: Deactivated successfully.
Oct 11 04:26:22 compute-0 sharp_grothendieck[82109]: 167 167
Oct 11 04:26:22 compute-0 conmon[82109]: conmon d11ecaf9d9b51f3aab8b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d11ecaf9d9b51f3aab8be15e55e6ed7c70d1891ab154fa3b4edf866f681f88ff.scope/container/memory.events
Oct 11 04:26:22 compute-0 podman[82091]: 2025-10-11 04:26:22.35508282 +0000 UTC m=+0.185125163 container attach d11ecaf9d9b51f3aab8be15e55e6ed7c70d1891ab154fa3b4edf866f681f88ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 11 04:26:22 compute-0 podman[82091]: 2025-10-11 04:26:22.356447824 +0000 UTC m=+0.186490097 container died d11ecaf9d9b51f3aab8be15e55e6ed7c70d1891ab154fa3b4edf866f681f88ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_grothendieck, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:26:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-d5135db4e5f399dc601fbec81e366f67e3094cceccb740aa0e233f22b0480508-merged.mount: Deactivated successfully.
Oct 11 04:26:22 compute-0 podman[82091]: 2025-10-11 04:26:22.41854783 +0000 UTC m=+0.248590113 container remove d11ecaf9d9b51f3aab8be15e55e6ed7c70d1891ab154fa3b4edf866f681f88ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_grothendieck, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 11 04:26:22 compute-0 systemd[1]: libpod-conmon-d11ecaf9d9b51f3aab8be15e55e6ed7c70d1891ab154fa3b4edf866f681f88ff.scope: Deactivated successfully.
Oct 11 04:26:22 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:22 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:22 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:22 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct 11 04:26:22 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Oct 11 04:26:22 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:26:22 compute-0 ceph-mon[74243]: Deploying daemon crash.compute-0 on compute-0
Oct 11 04:26:22 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/3177943489' entity='client.admin' 
Oct 11 04:26:22 compute-0 ceph-mon[74243]: pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 04:26:22 compute-0 systemd[1]: Reloading.
Oct 11 04:26:22 compute-0 systemd-sysv-generator[82177]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:26:22 compute-0 systemd-rc-local-generator[82173]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:26:22 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0) v1
Oct 11 04:26:22 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3003977776' entity='client.admin' 
Oct 11 04:26:22 compute-0 podman[82047]: 2025-10-11 04:26:22.747753141 +0000 UTC m=+0.717969872 container died f2743ba1c78b95cd45803991e531aae084399dee0ad4bfa7f96fe1e9c787f8aa (image=quay.io/ceph/ceph:v18, name=competent_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 11 04:26:22 compute-0 systemd[1]: libpod-f2743ba1c78b95cd45803991e531aae084399dee0ad4bfa7f96fe1e9c787f8aa.scope: Deactivated successfully.
Oct 11 04:26:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-f28088852f7ecde7861e8397431b1715aaa94d61585e50ab8f1fb9f50f864c46-merged.mount: Deactivated successfully.
Oct 11 04:26:22 compute-0 podman[82047]: 2025-10-11 04:26:22.852215561 +0000 UTC m=+0.822432232 container remove f2743ba1c78b95cd45803991e531aae084399dee0ad4bfa7f96fe1e9c787f8aa (image=quay.io/ceph/ceph:v18, name=competent_darwin, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:26:22 compute-0 systemd[1]: libpod-conmon-f2743ba1c78b95cd45803991e531aae084399dee0ad4bfa7f96fe1e9c787f8aa.scope: Deactivated successfully.
Oct 11 04:26:22 compute-0 systemd[1]: Reloading.
Oct 11 04:26:22 compute-0 sudo[82028]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:22 compute-0 systemd-rc-local-generator[82230]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:26:22 compute-0 systemd-sysv-generator[82234]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:26:23 compute-0 sudo[82262]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxoiukocrelbpmsyclbqcdoopjovptjy ; /usr/bin/python3'
Oct 11 04:26:23 compute-0 sudo[82262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:26:23 compute-0 systemd[1]: Starting Ceph crash.compute-0 for 166d0489-2ae7-59eb-961c-c1b5cda4b45a...
Oct 11 04:26:23 compute-0 python3[82266]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:26:23 compute-0 podman[82291]: 2025-10-11 04:26:23.395974742 +0000 UTC m=+0.074192497 container create 31d7e726ba8cb71a444998b38802b55902d7f9c23948432ef0cc90478b9dbeab (image=quay.io/ceph/ceph:v18, name=jolly_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 11 04:26:23 compute-0 systemd[1]: Started libpod-conmon-31d7e726ba8cb71a444998b38802b55902d7f9c23948432ef0cc90478b9dbeab.scope.
Oct 11 04:26:23 compute-0 podman[82291]: 2025-10-11 04:26:23.366422977 +0000 UTC m=+0.044640802 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:26:23 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:26:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72122a0da7ba9ed372327675d18da47efa6a22efb097ed3dd1fae63f4e0557d4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72122a0da7ba9ed372327675d18da47efa6a22efb097ed3dd1fae63f4e0557d4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72122a0da7ba9ed372327675d18da47efa6a22efb097ed3dd1fae63f4e0557d4/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:23 compute-0 podman[82291]: 2025-10-11 04:26:23.496098254 +0000 UTC m=+0.174316089 container init 31d7e726ba8cb71a444998b38802b55902d7f9c23948432ef0cc90478b9dbeab (image=quay.io/ceph/ceph:v18, name=jolly_turing, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 11 04:26:23 compute-0 podman[82291]: 2025-10-11 04:26:23.508415551 +0000 UTC m=+0.186633346 container start 31d7e726ba8cb71a444998b38802b55902d7f9c23948432ef0cc90478b9dbeab (image=quay.io/ceph/ceph:v18, name=jolly_turing, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:26:23 compute-0 podman[82291]: 2025-10-11 04:26:23.514375239 +0000 UTC m=+0.192593024 container attach 31d7e726ba8cb71a444998b38802b55902d7f9c23948432ef0cc90478b9dbeab (image=quay.io/ceph/ceph:v18, name=jolly_turing, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 11 04:26:23 compute-0 podman[82328]: 2025-10-11 04:26:23.523460805 +0000 UTC m=+0.075588852 container create 3e54e7d387a60b668723e60ce28c8d603316f199a76a774b97881580b197ee8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-crash-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 11 04:26:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d676347aedc54b6aadeab4b471a7ddd9dc57f0dc05c64b9ae6202128c207fff6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d676347aedc54b6aadeab4b471a7ddd9dc57f0dc05c64b9ae6202128c207fff6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d676347aedc54b6aadeab4b471a7ddd9dc57f0dc05c64b9ae6202128c207fff6/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d676347aedc54b6aadeab4b471a7ddd9dc57f0dc05c64b9ae6202128c207fff6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:23 compute-0 podman[82328]: 2025-10-11 04:26:23.495053398 +0000 UTC m=+0.047181525 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:26:23 compute-0 podman[82328]: 2025-10-11 04:26:23.614843149 +0000 UTC m=+0.166971206 container init 3e54e7d387a60b668723e60ce28c8d603316f199a76a774b97881580b197ee8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-crash-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 11 04:26:23 compute-0 podman[82328]: 2025-10-11 04:26:23.620873109 +0000 UTC m=+0.173001156 container start 3e54e7d387a60b668723e60ce28c8d603316f199a76a774b97881580b197ee8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-crash-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True)
Oct 11 04:26:23 compute-0 bash[82328]: 3e54e7d387a60b668723e60ce28c8d603316f199a76a774b97881580b197ee8a
Oct 11 04:26:23 compute-0 systemd[1]: Started Ceph crash.compute-0 for 166d0489-2ae7-59eb-961c-c1b5cda4b45a.
Oct 11 04:26:23 compute-0 sudo[81981]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:23 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:26:23 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:23 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:26:23 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:23 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Oct 11 04:26:23 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:23 compute-0 ceph-mgr[74542]: [progress INFO root] complete: finished ev 34a22a66-b145-4d6b-9e40-b63b9ee78155 (Updating crash deployment (+1 -> 1))
Oct 11 04:26:23 compute-0 ceph-mgr[74542]: [progress INFO root] Completed event 34a22a66-b145-4d6b-9e40-b63b9ee78155 (Updating crash deployment (+1 -> 1)) in 2 seconds
Oct 11 04:26:23 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Oct 11 04:26:23 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:23 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev f286df0c-5fd5-4272-9977-0ed4f0737cff does not exist
Oct 11 04:26:23 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Oct 11 04:26:23 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/3003977776' entity='client.admin' 
Oct 11 04:26:23 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:23 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:23 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:23 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:23 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:23 compute-0 ceph-mgr[74542]: [progress INFO root] update: starting ev fcdbc63a-e202-4fe5-b084-7dac6e2c8547 (Updating mgr deployment (+1 -> 2))
Oct 11 04:26:23 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.jmfijt", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Oct 11 04:26:23 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.jmfijt", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 11 04:26:23 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.jmfijt", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Oct 11 04:26:23 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct 11 04:26:23 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 11 04:26:23 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:26:23 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:26:23 compute-0 ceph-mgr[74542]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-0.jmfijt on compute-0
Oct 11 04:26:23 compute-0 ceph-mgr[74542]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-0.jmfijt on compute-0
Oct 11 04:26:23 compute-0 sudo[82352]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:26:23 compute-0 sudo[82352]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:23 compute-0 sudo[82352]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:23 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-crash-compute-0[82347]: INFO:ceph-crash:pinging cluster to exercise our key
Oct 11 04:26:23 compute-0 sudo[82377]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:26:23 compute-0 sudo[82377]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:23 compute-0 sudo[82377]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:23 compute-0 sudo[82423]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:26:23 compute-0 sudo[82423]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:23 compute-0 sudo[82423]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:24 compute-0 sudo[82448]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a
Oct 11 04:26:24 compute-0 sudo[82448]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:24 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 04:26:24 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-crash-compute-0[82347]: 2025-10-11T04:26:24.057+0000 7fc2dab5f640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Oct 11 04:26:24 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-crash-compute-0[82347]: 2025-10-11T04:26:24.057+0000 7fc2dab5f640 -1 AuthRegistry(0x7fc2d4067440) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Oct 11 04:26:24 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-crash-compute-0[82347]: 2025-10-11T04:26:24.059+0000 7fc2dab5f640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Oct 11 04:26:24 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-crash-compute-0[82347]: 2025-10-11T04:26:24.059+0000 7fc2dab5f640 -1 AuthRegistry(0x7fc2dab5e000) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Oct 11 04:26:24 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-crash-compute-0[82347]: 2025-10-11T04:26:24.059+0000 7fc2d88d4640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Oct 11 04:26:24 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-crash-compute-0[82347]: 2025-10-11T04:26:24.060+0000 7fc2dab5f640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Oct 11 04:26:24 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-crash-compute-0[82347]: [errno 13] RADOS permission denied (error connecting to the cluster)
Oct 11 04:26:24 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-crash-compute-0[82347]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Oct 11 04:26:24 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0) v1
Oct 11 04:26:24 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1992341715' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Oct 11 04:26:24 compute-0 podman[82525]: 2025-10-11 04:26:24.328832367 +0000 UTC m=+0.065762918 container create beebd2fb28d992635c052c55d528378c9b7b62964dd25d2870ae3f700e23437c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_blackburn, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 11 04:26:24 compute-0 systemd[1]: Started libpod-conmon-beebd2fb28d992635c052c55d528378c9b7b62964dd25d2870ae3f700e23437c.scope.
Oct 11 04:26:24 compute-0 podman[82525]: 2025-10-11 04:26:24.298394809 +0000 UTC m=+0.035325410 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:26:24 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:26:24 compute-0 podman[82525]: 2025-10-11 04:26:24.433569623 +0000 UTC m=+0.170500224 container init beebd2fb28d992635c052c55d528378c9b7b62964dd25d2870ae3f700e23437c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_blackburn, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:26:24 compute-0 podman[82525]: 2025-10-11 04:26:24.444973957 +0000 UTC m=+0.181904498 container start beebd2fb28d992635c052c55d528378c9b7b62964dd25d2870ae3f700e23437c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_blackburn, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default)
Oct 11 04:26:24 compute-0 podman[82525]: 2025-10-11 04:26:24.449004727 +0000 UTC m=+0.185935268 container attach beebd2fb28d992635c052c55d528378c9b7b62964dd25d2870ae3f700e23437c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_blackburn, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:26:24 compute-0 systemd[1]: libpod-beebd2fb28d992635c052c55d528378c9b7b62964dd25d2870ae3f700e23437c.scope: Deactivated successfully.
Oct 11 04:26:24 compute-0 vigilant_blackburn[82541]: 167 167
Oct 11 04:26:24 compute-0 conmon[82541]: conmon beebd2fb28d992635c05 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-beebd2fb28d992635c052c55d528378c9b7b62964dd25d2870ae3f700e23437c.scope/container/memory.events
Oct 11 04:26:24 compute-0 podman[82525]: 2025-10-11 04:26:24.454249588 +0000 UTC m=+0.191180139 container died beebd2fb28d992635c052c55d528378c9b7b62964dd25d2870ae3f700e23437c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_blackburn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 11 04:26:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-5b99b5400f73ae7339657c2e9c6d35a55dde622e80fd3c33d72371e09add8c14-merged.mount: Deactivated successfully.
Oct 11 04:26:24 compute-0 podman[82525]: 2025-10-11 04:26:24.504083748 +0000 UTC m=+0.241014289 container remove beebd2fb28d992635c052c55d528378c9b7b62964dd25d2870ae3f700e23437c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_blackburn, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 11 04:26:24 compute-0 systemd[1]: libpod-conmon-beebd2fb28d992635c052c55d528378c9b7b62964dd25d2870ae3f700e23437c.scope: Deactivated successfully.
Oct 11 04:26:24 compute-0 systemd[1]: Reloading.
Oct 11 04:26:24 compute-0 systemd-rc-local-generator[82586]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:26:24 compute-0 systemd-sysv-generator[82591]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:26:24 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:24 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.jmfijt", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 11 04:26:24 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.jmfijt", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Oct 11 04:26:24 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 11 04:26:24 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:26:24 compute-0 ceph-mon[74243]: Deploying daemon mgr.compute-0.jmfijt on compute-0
Oct 11 04:26:24 compute-0 ceph-mon[74243]: pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 04:26:24 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/1992341715' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Oct 11 04:26:24 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Oct 11 04:26:24 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 11 04:26:24 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1992341715' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Oct 11 04:26:24 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Oct 11 04:26:24 compute-0 jolly_turing[82329]: set require_min_compat_client to mimic
Oct 11 04:26:24 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Oct 11 04:26:24 compute-0 podman[82291]: 2025-10-11 04:26:24.791409998 +0000 UTC m=+1.469627783 container died 31d7e726ba8cb71a444998b38802b55902d7f9c23948432ef0cc90478b9dbeab (image=quay.io/ceph/ceph:v18, name=jolly_turing, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 11 04:26:24 compute-0 systemd[1]: libpod-31d7e726ba8cb71a444998b38802b55902d7f9c23948432ef0cc90478b9dbeab.scope: Deactivated successfully.
Oct 11 04:26:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-72122a0da7ba9ed372327675d18da47efa6a22efb097ed3dd1fae63f4e0557d4-merged.mount: Deactivated successfully.
Oct 11 04:26:24 compute-0 podman[82291]: 2025-10-11 04:26:24.923395153 +0000 UTC m=+1.601612928 container remove 31d7e726ba8cb71a444998b38802b55902d7f9c23948432ef0cc90478b9dbeab (image=quay.io/ceph/ceph:v18, name=jolly_turing, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:26:24 compute-0 systemd[1]: libpod-conmon-31d7e726ba8cb71a444998b38802b55902d7f9c23948432ef0cc90478b9dbeab.scope: Deactivated successfully.
Oct 11 04:26:24 compute-0 systemd[1]: Reloading.
Oct 11 04:26:24 compute-0 sudo[82262]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:25 compute-0 systemd-sysv-generator[82635]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:26:25 compute-0 systemd-rc-local-generator[82631]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:26:25 compute-0 systemd[1]: Starting Ceph mgr.compute-0.jmfijt for 166d0489-2ae7-59eb-961c-c1b5cda4b45a...
Oct 11 04:26:25 compute-0 sudo[82697]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzpfdwgezdjdclgigtjlloqadbhzytdo ; /usr/bin/python3'
Oct 11 04:26:25 compute-0 sudo[82697]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:26:25 compute-0 python3[82700]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:26:25 compute-0 podman[82724]: 2025-10-11 04:26:25.546794806 +0000 UTC m=+0.059616414 container create 0d1befc3c6aae10b572bdeb2f857cf66e9eb16799b7dadb6601afbc37c7c4929 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mgr-compute-0-jmfijt, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 11 04:26:25 compute-0 podman[82737]: 2025-10-11 04:26:25.580406653 +0000 UTC m=+0.041451993 container create 0c2832d2c047faca5f12ff849054d1325f6e08f3327fd803db5259a53aad56c9 (image=quay.io/ceph/ceph:v18, name=wizardly_germain, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:26:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a05be215f95a34dfae846cdf4b31da43a948d8d956ea1ef169577c70c322561b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a05be215f95a34dfae846cdf4b31da43a948d8d956ea1ef169577c70c322561b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a05be215f95a34dfae846cdf4b31da43a948d8d956ea1ef169577c70c322561b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a05be215f95a34dfae846cdf4b31da43a948d8d956ea1ef169577c70c322561b/merged/var/lib/ceph/mgr/ceph-compute-0.jmfijt supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:25 compute-0 podman[82724]: 2025-10-11 04:26:25.60399214 +0000 UTC m=+0.116813768 container init 0d1befc3c6aae10b572bdeb2f857cf66e9eb16799b7dadb6601afbc37c7c4929 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mgr-compute-0-jmfijt, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:26:25 compute-0 systemd[1]: Started libpod-conmon-0c2832d2c047faca5f12ff849054d1325f6e08f3327fd803db5259a53aad56c9.scope.
Oct 11 04:26:25 compute-0 podman[82724]: 2025-10-11 04:26:25.614756318 +0000 UTC m=+0.127577926 container start 0d1befc3c6aae10b572bdeb2f857cf66e9eb16799b7dadb6601afbc37c7c4929 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mgr-compute-0-jmfijt, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 11 04:26:25 compute-0 podman[82724]: 2025-10-11 04:26:25.521441795 +0000 UTC m=+0.034263453 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:26:25 compute-0 bash[82724]: 0d1befc3c6aae10b572bdeb2f857cf66e9eb16799b7dadb6601afbc37c7c4929
Oct 11 04:26:25 compute-0 systemd[1]: Started Ceph mgr.compute-0.jmfijt for 166d0489-2ae7-59eb-961c-c1b5cda4b45a.
Oct 11 04:26:25 compute-0 ceph-mgr[82759]: set uid:gid to 167:167 (ceph:ceph)
Oct 11 04:26:25 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:26:25 compute-0 ceph-mgr[82759]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Oct 11 04:26:25 compute-0 ceph-mgr[82759]: pidfile_write: ignore empty --pid-file
Oct 11 04:26:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c50c88531f46eea6614a341d3c25fc386f54754271cd2d018a43308dcabfda7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c50c88531f46eea6614a341d3c25fc386f54754271cd2d018a43308dcabfda7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c50c88531f46eea6614a341d3c25fc386f54754271cd2d018a43308dcabfda7/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:25 compute-0 podman[82737]: 2025-10-11 04:26:25.567216145 +0000 UTC m=+0.028261505 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:26:25 compute-0 podman[82737]: 2025-10-11 04:26:25.679275143 +0000 UTC m=+0.140320503 container init 0c2832d2c047faca5f12ff849054d1325f6e08f3327fd803db5259a53aad56c9 (image=quay.io/ceph/ceph:v18, name=wizardly_germain, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:26:25 compute-0 sudo[82448]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:25 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:26:25 compute-0 podman[82737]: 2025-10-11 04:26:25.690596625 +0000 UTC m=+0.151642005 container start 0c2832d2c047faca5f12ff849054d1325f6e08f3327fd803db5259a53aad56c9 (image=quay.io/ceph/ceph:v18, name=wizardly_germain, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:26:25 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:25 compute-0 podman[82737]: 2025-10-11 04:26:25.694517302 +0000 UTC m=+0.155562672 container attach 0c2832d2c047faca5f12ff849054d1325f6e08f3327fd803db5259a53aad56c9 (image=quay.io/ceph/ceph:v18, name=wizardly_germain, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:26:25 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:26:25 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:25 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Oct 11 04:26:25 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:25 compute-0 ceph-mgr[74542]: [progress INFO root] complete: finished ev fcdbc63a-e202-4fe5-b084-7dac6e2c8547 (Updating mgr deployment (+1 -> 2))
Oct 11 04:26:25 compute-0 ceph-mgr[74542]: [progress INFO root] Completed event fcdbc63a-e202-4fe5-b084-7dac6e2c8547 (Updating mgr deployment (+1 -> 2)) in 2 seconds
Oct 11 04:26:25 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Oct 11 04:26:25 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:25 compute-0 ceph-mgr[82759]: mgr[py] Loading python module 'alerts'
Oct 11 04:26:25 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/1992341715' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Oct 11 04:26:25 compute-0 ceph-mon[74243]: osdmap e3: 0 total, 0 up, 0 in
Oct 11 04:26:25 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:25 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:25 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:25 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:25 compute-0 sudo[82787]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:26:25 compute-0 sudo[82787]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:25 compute-0 sudo[82787]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:25 compute-0 sudo[82812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 04:26:25 compute-0 sudo[82812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:25 compute-0 sudo[82812]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:25 compute-0 sudo[82837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:26:25 compute-0 sudo[82837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:25 compute-0 sudo[82837]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:25 compute-0 sudo[82862]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:26:25 compute-0 sudo[82862]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:25 compute-0 sudo[82862]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:26 compute-0 sudo[82887]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:26:26 compute-0 sudo[82887]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:26 compute-0 sudo[82887]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:26 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 04:26:26 compute-0 ceph-mgr[82759]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 11 04:26:26 compute-0 ceph-mgr[82759]: mgr[py] Loading python module 'balancer'
Oct 11 04:26:26 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mgr-compute-0-jmfijt[82752]: 2025-10-11T04:26:26.068+0000 7fb229bf4140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 11 04:26:26 compute-0 sudo[82931]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 11 04:26:26 compute-0 sudo[82931]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:26 compute-0 ceph-mgr[74542]: [progress INFO root] Writing back 2 completed events
Oct 11 04:26:26 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct 11 04:26:26 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:26:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:26:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:26:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:26:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:26:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:26:26 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.14186 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:26:26 compute-0 sudo[82971]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:26:26 compute-0 sudo[82971]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:26 compute-0 sudo[82971]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:26 compute-0 ceph-mgr[82759]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 11 04:26:26 compute-0 ceph-mgr[82759]: mgr[py] Loading python module 'cephadm'
Oct 11 04:26:26 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mgr-compute-0-jmfijt[82752]: 2025-10-11T04:26:26.303+0000 7fb229bf4140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 11 04:26:26 compute-0 sudo[83015]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:26:26 compute-0 sudo[83015]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:26 compute-0 sudo[83015]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:26 compute-0 sudo[83053]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:26:26 compute-0 sudo[83053]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:26 compute-0 sudo[83053]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:26 compute-0 sudo[83093]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host --expect-hostname compute-0
Oct 11 04:26:26 compute-0 sudo[83093]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:26 compute-0 podman[83127]: 2025-10-11 04:26:26.584628992 +0000 UTC m=+0.057396399 container exec 9e6c9a4e99dcfe74f2acde1b3251aae6bd833ab8c3a16ed08813556f7d4dbff5 (image=quay.io/ceph/ceph:v18, name=ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:26:26 compute-0 podman[83127]: 2025-10-11 04:26:26.699662745 +0000 UTC m=+0.172430132 container exec_died 9e6c9a4e99dcfe74f2acde1b3251aae6bd833ab8c3a16ed08813556f7d4dbff5 (image=quay.io/ceph/ceph:v18, name=ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:26:26 compute-0 sudo[83093]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:26 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Oct 11 04:26:26 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:26 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Oct 11 04:26:26 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:26 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Oct 11 04:26:26 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:26 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Oct 11 04:26:26 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:26 compute-0 ceph-mgr[74542]: [cephadm INFO root] Added host compute-0
Oct 11 04:26:26 compute-0 ceph-mgr[74542]: log_channel(cephadm) log [INF] : Added host compute-0
Oct 11 04:26:26 compute-0 ceph-mgr[74542]: [cephadm INFO root] Saving service mon spec with placement compute-0
Oct 11 04:26:26 compute-0 ceph-mgr[74542]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0
Oct 11 04:26:26 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Oct 11 04:26:26 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:26 compute-0 ceph-mgr[74542]: [cephadm INFO root] Saving service mgr spec with placement compute-0
Oct 11 04:26:26 compute-0 ceph-mgr[74542]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0
Oct 11 04:26:26 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Oct 11 04:26:26 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:26 compute-0 ceph-mgr[74542]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Oct 11 04:26:26 compute-0 ceph-mgr[74542]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Oct 11 04:26:26 compute-0 ceph-mgr[74542]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0
Oct 11 04:26:26 compute-0 ceph-mgr[74542]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0
Oct 11 04:26:26 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0) v1
Oct 11 04:26:26 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:26 compute-0 wizardly_germain[82758]: Added host 'compute-0' with addr '192.168.122.100'
Oct 11 04:26:26 compute-0 wizardly_germain[82758]: Scheduled mon update...
Oct 11 04:26:26 compute-0 wizardly_germain[82758]: Scheduled mgr update...
Oct 11 04:26:26 compute-0 wizardly_germain[82758]: Scheduled osd.default_drive_group update...
Oct 11 04:26:26 compute-0 systemd[1]: libpod-0c2832d2c047faca5f12ff849054d1325f6e08f3327fd803db5259a53aad56c9.scope: Deactivated successfully.
Oct 11 04:26:26 compute-0 podman[82737]: 2025-10-11 04:26:26.875765797 +0000 UTC m=+1.336811147 container died 0c2832d2c047faca5f12ff849054d1325f6e08f3327fd803db5259a53aad56c9 (image=quay.io/ceph/ceph:v18, name=wizardly_germain, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 11 04:26:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-1c50c88531f46eea6614a341d3c25fc386f54754271cd2d018a43308dcabfda7-merged.mount: Deactivated successfully.
Oct 11 04:26:26 compute-0 podman[82737]: 2025-10-11 04:26:26.945934583 +0000 UTC m=+1.406979933 container remove 0c2832d2c047faca5f12ff849054d1325f6e08f3327fd803db5259a53aad56c9 (image=quay.io/ceph/ceph:v18, name=wizardly_germain, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:26:26 compute-0 systemd[1]: libpod-conmon-0c2832d2c047faca5f12ff849054d1325f6e08f3327fd803db5259a53aad56c9.scope: Deactivated successfully.
Oct 11 04:26:26 compute-0 sudo[82697]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:27 compute-0 sudo[82931]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:27 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:26:27 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:27 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:26:27 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:27 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:26:27 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:27 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:26:27 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:27 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:26:27 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:26:27 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:26:27 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:26:27 compute-0 ceph-mon[74243]: pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 04:26:27 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:27 compute-0 ceph-mon[74243]: from='client.14186 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:26:27 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:27 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:27 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:27 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:27 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:27 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:27 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:27 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:27 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:27 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:27 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:27 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:26:27 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:26:27 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:27 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev a32598c1-5b0b-4534-92fc-673c96123061 does not exist
Oct 11 04:26:27 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Oct 11 04:26:27 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:27 compute-0 ceph-mgr[74542]: [progress INFO root] update: starting ev e69ef396-2501-4059-b0fd-d20436f508fc (Updating mgr deployment (-1 -> 1))
Oct 11 04:26:27 compute-0 ceph-mgr[74542]: [cephadm INFO cephadm.serve] Removing daemon mgr.compute-0.jmfijt from compute-0 -- ports [8765]
Oct 11 04:26:27 compute-0 ceph-mgr[74542]: log_channel(cephadm) log [INF] : Removing daemon mgr.compute-0.jmfijt from compute-0 -- ports [8765]
Oct 11 04:26:27 compute-0 sudo[83244]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:26:27 compute-0 sudo[83244]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:27 compute-0 sudo[83289]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onykrdftetonjmyvymmrvdwimioojkpg ; /usr/bin/python3'
Oct 11 04:26:27 compute-0 sudo[83244]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:27 compute-0 sudo[83289]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:26:27 compute-0 sudo[83294]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:26:27 compute-0 sudo[83294]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:27 compute-0 sudo[83294]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:27 compute-0 python3[83293]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:26:27 compute-0 sudo[83319]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:26:27 compute-0 sudo[83319]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:27 compute-0 sudo[83319]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:27 compute-0 podman[83343]: 2025-10-11 04:26:27.483382028 +0000 UTC m=+0.069497101 container create d3c7f6c6d71fa9a8a08e7965ef0e3a3e3d273039ea914734d109431dbce7f19b (image=quay.io/ceph/ceph:v18, name=amazing_panini, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 11 04:26:27 compute-0 sudo[83352]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 rm-daemon --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a --name mgr.compute-0.jmfijt --force --tcp-ports 8765
Oct 11 04:26:27 compute-0 sudo[83352]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:27 compute-0 systemd[1]: Started libpod-conmon-d3c7f6c6d71fa9a8a08e7965ef0e3a3e3d273039ea914734d109431dbce7f19b.scope.
Oct 11 04:26:27 compute-0 podman[83343]: 2025-10-11 04:26:27.454321325 +0000 UTC m=+0.040436458 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:26:27 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:26:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/851da8ceba86f93d0a0523beb3a1e97d592bd908d9fb14a0c9d24843d2e63d32/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/851da8ceba86f93d0a0523beb3a1e97d592bd908d9fb14a0c9d24843d2e63d32/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/851da8ceba86f93d0a0523beb3a1e97d592bd908d9fb14a0c9d24843d2e63d32/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:27 compute-0 podman[83343]: 2025-10-11 04:26:27.595035687 +0000 UTC m=+0.181150810 container init d3c7f6c6d71fa9a8a08e7965ef0e3a3e3d273039ea914734d109431dbce7f19b (image=quay.io/ceph/ceph:v18, name=amazing_panini, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:26:27 compute-0 podman[83343]: 2025-10-11 04:26:27.603523738 +0000 UTC m=+0.189638821 container start d3c7f6c6d71fa9a8a08e7965ef0e3a3e3d273039ea914734d109431dbce7f19b (image=quay.io/ceph/ceph:v18, name=amazing_panini, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 11 04:26:27 compute-0 podman[83343]: 2025-10-11 04:26:27.607969908 +0000 UTC m=+0.194084991 container attach d3c7f6c6d71fa9a8a08e7965ef0e3a3e3d273039ea914734d109431dbce7f19b (image=quay.io/ceph/ceph:v18, name=amazing_panini, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:26:27 compute-0 systemd[1]: Stopping Ceph mgr.compute-0.jmfijt for 166d0489-2ae7-59eb-961c-c1b5cda4b45a...
Oct 11 04:26:28 compute-0 podman[83471]: 2025-10-11 04:26:28.001226685 +0000 UTC m=+0.076967157 container died 0d1befc3c6aae10b572bdeb2f857cf66e9eb16799b7dadb6601afbc37c7c4929 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mgr-compute-0-jmfijt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:26:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-a05be215f95a34dfae846cdf4b31da43a948d8d956ea1ef169577c70c322561b-merged.mount: Deactivated successfully.
Oct 11 04:26:28 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 04:26:28 compute-0 podman[83471]: 2025-10-11 04:26:28.044262966 +0000 UTC m=+0.120003418 container remove 0d1befc3c6aae10b572bdeb2f857cf66e9eb16799b7dadb6601afbc37c7c4929 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mgr-compute-0-jmfijt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:26:28 compute-0 bash[83471]: ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mgr-compute-0-jmfijt
Oct 11 04:26:28 compute-0 systemd[1]: ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a@mgr.compute-0.jmfijt.service: Main process exited, code=exited, status=143/n/a
Oct 11 04:26:28 compute-0 ceph-mon[74243]: Added host compute-0
Oct 11 04:26:28 compute-0 ceph-mon[74243]: Saving service mon spec with placement compute-0
Oct 11 04:26:28 compute-0 ceph-mon[74243]: Saving service mgr spec with placement compute-0
Oct 11 04:26:28 compute-0 ceph-mon[74243]: Marking host: compute-0 for OSDSpec preview refresh.
Oct 11 04:26:28 compute-0 ceph-mon[74243]: Saving service osd.default_drive_group spec with placement compute-0
Oct 11 04:26:28 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:26:28 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:28 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:28 compute-0 ceph-mon[74243]: Removing daemon mgr.compute-0.jmfijt from compute-0 -- ports [8765]
Oct 11 04:26:28 compute-0 systemd[1]: ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a@mgr.compute-0.jmfijt.service: Failed with result 'exit-code'.
Oct 11 04:26:28 compute-0 systemd[1]: Stopped Ceph mgr.compute-0.jmfijt for 166d0489-2ae7-59eb-961c-c1b5cda4b45a.
Oct 11 04:26:28 compute-0 systemd[1]: ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a@mgr.compute-0.jmfijt.service: Consumed 3.269s CPU time.
Oct 11 04:26:28 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Oct 11 04:26:28 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1339979705' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 11 04:26:28 compute-0 amazing_panini[83387]: 
Oct 11 04:26:28 compute-0 amazing_panini[83387]: {"fsid":"166d0489-2ae7-59eb-961c-c1b5cda4b45a","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":78,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":1,"modified":"2025-10-11T04:25:06.859773+0000","services":{}},"progress_events":{"e69ef396-2501-4059-b0fd-d20436f508fc":{"message":"Updating mgr deployment (-1 -> 1) (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true}}}
Oct 11 04:26:28 compute-0 podman[83343]: 2025-10-11 04:26:28.264620359 +0000 UTC m=+0.850735442 container died d3c7f6c6d71fa9a8a08e7965ef0e3a3e3d273039ea914734d109431dbce7f19b (image=quay.io/ceph/ceph:v18, name=amazing_panini, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:26:28 compute-0 systemd[1]: libpod-d3c7f6c6d71fa9a8a08e7965ef0e3a3e3d273039ea914734d109431dbce7f19b.scope: Deactivated successfully.
Oct 11 04:26:28 compute-0 systemd[1]: Reloading.
Oct 11 04:26:28 compute-0 systemd-rc-local-generator[83588]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:26:28 compute-0 systemd-sysv-generator[83591]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:26:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-851da8ceba86f93d0a0523beb3a1e97d592bd908d9fb14a0c9d24843d2e63d32-merged.mount: Deactivated successfully.
Oct 11 04:26:28 compute-0 sudo[83352]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:28 compute-0 podman[83343]: 2025-10-11 04:26:28.554300318 +0000 UTC m=+1.140415391 container remove d3c7f6c6d71fa9a8a08e7965ef0e3a3e3d273039ea914734d109431dbce7f19b (image=quay.io/ceph/ceph:v18, name=amazing_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:26:28 compute-0 ceph-mgr[74542]: [cephadm INFO cephadm.services.cephadmservice] Removing key for mgr.compute-0.jmfijt
Oct 11 04:26:28 compute-0 ceph-mgr[74542]: log_channel(cephadm) log [INF] : Removing key for mgr.compute-0.jmfijt
Oct 11 04:26:28 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "mgr.compute-0.jmfijt"} v 0) v1
Oct 11 04:26:28 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth rm", "entity": "mgr.compute-0.jmfijt"}]: dispatch
Oct 11 04:26:28 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.jmfijt"}]': finished
Oct 11 04:26:28 compute-0 systemd[1]: libpod-conmon-d3c7f6c6d71fa9a8a08e7965ef0e3a3e3d273039ea914734d109431dbce7f19b.scope: Deactivated successfully.
Oct 11 04:26:28 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Oct 11 04:26:28 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:28 compute-0 ceph-mgr[74542]: [progress INFO root] complete: finished ev e69ef396-2501-4059-b0fd-d20436f508fc (Updating mgr deployment (-1 -> 1))
Oct 11 04:26:28 compute-0 ceph-mgr[74542]: [progress INFO root] Completed event e69ef396-2501-4059-b0fd-d20436f508fc (Updating mgr deployment (-1 -> 1)) in 1 seconds
Oct 11 04:26:28 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Oct 11 04:26:28 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:28 compute-0 sudo[83289]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:28 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev 141c3da1-79cb-4b46-974c-007a2f5d8a37 does not exist
Oct 11 04:26:28 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:26:28 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:26:28 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:26:28 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:26:28 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:26:28 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:26:28 compute-0 sudo[83599]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:26:28 compute-0 sudo[83599]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:28 compute-0 sudo[83599]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:28 compute-0 sudo[83624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:26:28 compute-0 sudo[83624]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:28 compute-0 sudo[83624]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:28 compute-0 sudo[83649]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:26:28 compute-0 sudo[83649]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:28 compute-0 sudo[83649]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:28 compute-0 sudo[83674]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 04:26:28 compute-0 sudo[83674]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:29 compute-0 ceph-mon[74243]: pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 04:26:29 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/1339979705' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 11 04:26:29 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth rm", "entity": "mgr.compute-0.jmfijt"}]: dispatch
Oct 11 04:26:29 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.jmfijt"}]': finished
Oct 11 04:26:29 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:29 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:29 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:26:29 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:26:29 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:26:29 compute-0 podman[83741]: 2025-10-11 04:26:29.292246472 +0000 UTC m=+0.049560284 container create fd8bd81dad8d9f7b3fe19620ed8231b2d57169c8c1a12c97f528aba26852b44a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_galois, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:26:29 compute-0 systemd[1]: Started libpod-conmon-fd8bd81dad8d9f7b3fe19620ed8231b2d57169c8c1a12c97f528aba26852b44a.scope.
Oct 11 04:26:29 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:26:29 compute-0 podman[83741]: 2025-10-11 04:26:29.277473595 +0000 UTC m=+0.034787437 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:26:29 compute-0 podman[83741]: 2025-10-11 04:26:29.379388671 +0000 UTC m=+0.136702573 container init fd8bd81dad8d9f7b3fe19620ed8231b2d57169c8c1a12c97f528aba26852b44a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_galois, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:26:29 compute-0 podman[83741]: 2025-10-11 04:26:29.387654306 +0000 UTC m=+0.144968148 container start fd8bd81dad8d9f7b3fe19620ed8231b2d57169c8c1a12c97f528aba26852b44a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_galois, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 11 04:26:29 compute-0 podman[83741]: 2025-10-11 04:26:29.392346163 +0000 UTC m=+0.149660055 container attach fd8bd81dad8d9f7b3fe19620ed8231b2d57169c8c1a12c97f528aba26852b44a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_galois, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:26:29 compute-0 hardcore_galois[83758]: 167 167
Oct 11 04:26:29 compute-0 systemd[1]: libpod-fd8bd81dad8d9f7b3fe19620ed8231b2d57169c8c1a12c97f528aba26852b44a.scope: Deactivated successfully.
Oct 11 04:26:29 compute-0 podman[83741]: 2025-10-11 04:26:29.397754608 +0000 UTC m=+0.155068460 container died fd8bd81dad8d9f7b3fe19620ed8231b2d57169c8c1a12c97f528aba26852b44a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_galois, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 11 04:26:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-99215b846fc1c94ce5cf81388c777aa1de4bcd16363357ad6aae6428e7d0c3bb-merged.mount: Deactivated successfully.
Oct 11 04:26:29 compute-0 podman[83741]: 2025-10-11 04:26:29.449975637 +0000 UTC m=+0.207289499 container remove fd8bd81dad8d9f7b3fe19620ed8231b2d57169c8c1a12c97f528aba26852b44a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_galois, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 11 04:26:29 compute-0 systemd[1]: libpod-conmon-fd8bd81dad8d9f7b3fe19620ed8231b2d57169c8c1a12c97f528aba26852b44a.scope: Deactivated successfully.
Oct 11 04:26:29 compute-0 podman[83781]: 2025-10-11 04:26:29.672614198 +0000 UTC m=+0.073236064 container create c9ecd342e84add5ae326f3e15e2f627b794e13e28717c90f713718c5393f7d70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_kalam, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:26:29 compute-0 systemd[1]: Started libpod-conmon-c9ecd342e84add5ae326f3e15e2f627b794e13e28717c90f713718c5393f7d70.scope.
Oct 11 04:26:29 compute-0 podman[83781]: 2025-10-11 04:26:29.640611061 +0000 UTC m=+0.041232987 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:26:29 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:26:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff49fd5c19a0ff0eca2e10bd3f6afc6119c566bb22eccf069359aec9137a62a4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff49fd5c19a0ff0eca2e10bd3f6afc6119c566bb22eccf069359aec9137a62a4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff49fd5c19a0ff0eca2e10bd3f6afc6119c566bb22eccf069359aec9137a62a4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff49fd5c19a0ff0eca2e10bd3f6afc6119c566bb22eccf069359aec9137a62a4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff49fd5c19a0ff0eca2e10bd3f6afc6119c566bb22eccf069359aec9137a62a4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:29 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:26:29 compute-0 podman[83781]: 2025-10-11 04:26:29.774555545 +0000 UTC m=+0.175177391 container init c9ecd342e84add5ae326f3e15e2f627b794e13e28717c90f713718c5393f7d70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:26:29 compute-0 podman[83781]: 2025-10-11 04:26:29.78560762 +0000 UTC m=+0.186229466 container start c9ecd342e84add5ae326f3e15e2f627b794e13e28717c90f713718c5393f7d70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_kalam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True)
Oct 11 04:26:29 compute-0 podman[83781]: 2025-10-11 04:26:29.789006264 +0000 UTC m=+0.189628120 container attach c9ecd342e84add5ae326f3e15e2f627b794e13e28717c90f713718c5393f7d70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_kalam, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:26:30 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 04:26:30 compute-0 ceph-mon[74243]: Removing key for mgr.compute-0.jmfijt
Oct 11 04:26:30 compute-0 sad_kalam[83797]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:26:30 compute-0 sad_kalam[83797]: --> relative data size: 1.0
Oct 11 04:26:30 compute-0 sad_kalam[83797]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 11 04:26:31 compute-0 sad_kalam[83797]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 29ed28f5-c2da-4c6f-bb64-dc7391248f4a
Oct 11 04:26:31 compute-0 ceph-mgr[74542]: [progress INFO root] Writing back 3 completed events
Oct 11 04:26:31 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct 11 04:26:31 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:31 compute-0 ceph-mon[74243]: pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 04:26:31 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:31 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "29ed28f5-c2da-4c6f-bb64-dc7391248f4a"} v 0) v1
Oct 11 04:26:31 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1153897764' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "29ed28f5-c2da-4c6f-bb64-dc7391248f4a"}]: dispatch
Oct 11 04:26:31 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Oct 11 04:26:31 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 11 04:26:31 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1153897764' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "29ed28f5-c2da-4c6f-bb64-dc7391248f4a"}]': finished
Oct 11 04:26:31 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Oct 11 04:26:31 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Oct 11 04:26:31 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 11 04:26:31 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 11 04:26:31 compute-0 ceph-mgr[74542]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 11 04:26:31 compute-0 sad_kalam[83797]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 11 04:26:31 compute-0 lvm[83858]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 11 04:26:31 compute-0 lvm[83858]: VG ceph_vg0 finished
Oct 11 04:26:31 compute-0 sad_kalam[83797]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Oct 11 04:26:31 compute-0 sad_kalam[83797]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Oct 11 04:26:31 compute-0 sad_kalam[83797]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct 11 04:26:31 compute-0 sad_kalam[83797]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Oct 11 04:26:31 compute-0 sad_kalam[83797]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Oct 11 04:26:32 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 04:26:32 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Oct 11 04:26:32 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/324298844' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Oct 11 04:26:32 compute-0 sad_kalam[83797]:  stderr: got monmap epoch 1
Oct 11 04:26:32 compute-0 sad_kalam[83797]: --> Creating keyring file for osd.0
Oct 11 04:26:32 compute-0 sad_kalam[83797]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Oct 11 04:26:32 compute-0 sad_kalam[83797]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Oct 11 04:26:32 compute-0 sad_kalam[83797]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid 29ed28f5-c2da-4c6f-bb64-dc7391248f4a --setuser ceph --setgroup ceph
Oct 11 04:26:32 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/1153897764' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "29ed28f5-c2da-4c6f-bb64-dc7391248f4a"}]: dispatch
Oct 11 04:26:32 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/1153897764' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "29ed28f5-c2da-4c6f-bb64-dc7391248f4a"}]': finished
Oct 11 04:26:32 compute-0 ceph-mon[74243]: osdmap e4: 1 total, 0 up, 1 in
Oct 11 04:26:32 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 11 04:26:32 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/324298844' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Oct 11 04:26:32 compute-0 ceph-mon[74243]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Oct 11 04:26:32 compute-0 ceph-mon[74243]: log_channel(cluster) log [INF] : Cluster is now healthy
Oct 11 04:26:33 compute-0 ceph-mon[74243]: pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 04:26:33 compute-0 ceph-mon[74243]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Oct 11 04:26:33 compute-0 ceph-mon[74243]: Cluster is now healthy
Oct 11 04:26:34 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 04:26:34 compute-0 sad_kalam[83797]:  stderr: 2025-10-11T04:26:32.220+0000 7f0687f98740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct 11 04:26:34 compute-0 sad_kalam[83797]:  stderr: 2025-10-11T04:26:32.220+0000 7f0687f98740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct 11 04:26:34 compute-0 sad_kalam[83797]:  stderr: 2025-10-11T04:26:32.220+0000 7f0687f98740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct 11 04:26:34 compute-0 sad_kalam[83797]:  stderr: 2025-10-11T04:26:32.221+0000 7f0687f98740 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Oct 11 04:26:34 compute-0 sad_kalam[83797]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Oct 11 04:26:34 compute-0 sad_kalam[83797]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct 11 04:26:34 compute-0 sad_kalam[83797]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Oct 11 04:26:34 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e4 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:26:34 compute-0 sad_kalam[83797]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Oct 11 04:26:34 compute-0 sad_kalam[83797]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Oct 11 04:26:34 compute-0 sad_kalam[83797]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct 11 04:26:34 compute-0 sad_kalam[83797]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct 11 04:26:34 compute-0 sad_kalam[83797]: --> ceph-volume lvm activate successful for osd ID: 0
Oct 11 04:26:34 compute-0 sad_kalam[83797]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Oct 11 04:26:34 compute-0 sad_kalam[83797]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 11 04:26:34 compute-0 sad_kalam[83797]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 235849fc-4683-43e5-9b6a-a0d6f8d1cee8
Oct 11 04:26:35 compute-0 ceph-mon[74243]: pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 04:26:35 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "235849fc-4683-43e5-9b6a-a0d6f8d1cee8"} v 0) v1
Oct 11 04:26:35 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/358235873' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "235849fc-4683-43e5-9b6a-a0d6f8d1cee8"}]: dispatch
Oct 11 04:26:35 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Oct 11 04:26:35 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 11 04:26:35 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/358235873' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "235849fc-4683-43e5-9b6a-a0d6f8d1cee8"}]': finished
Oct 11 04:26:35 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Oct 11 04:26:35 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Oct 11 04:26:35 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 11 04:26:35 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 11 04:26:35 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 11 04:26:35 compute-0 ceph-mgr[74542]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 11 04:26:35 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 11 04:26:35 compute-0 ceph-mgr[74542]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 11 04:26:35 compute-0 lvm[84793]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Oct 11 04:26:35 compute-0 lvm[84793]: VG ceph_vg1 finished
Oct 11 04:26:35 compute-0 sad_kalam[83797]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 11 04:26:35 compute-0 sad_kalam[83797]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Oct 11 04:26:35 compute-0 sad_kalam[83797]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg1/ceph_lv1
Oct 11 04:26:35 compute-0 sad_kalam[83797]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Oct 11 04:26:35 compute-0 sad_kalam[83797]: Running command: /usr/bin/ln -s /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Oct 11 04:26:35 compute-0 sad_kalam[83797]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Oct 11 04:26:35 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Oct 11 04:26:35 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2027380580' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Oct 11 04:26:35 compute-0 sad_kalam[83797]:  stderr: got monmap epoch 1
Oct 11 04:26:35 compute-0 sad_kalam[83797]: --> Creating keyring file for osd.1
Oct 11 04:26:35 compute-0 sad_kalam[83797]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Oct 11 04:26:35 compute-0 sad_kalam[83797]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Oct 11 04:26:35 compute-0 sad_kalam[83797]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid 235849fc-4683-43e5-9b6a-a0d6f8d1cee8 --setuser ceph --setgroup ceph
Oct 11 04:26:36 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 04:26:36 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/358235873' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "235849fc-4683-43e5-9b6a-a0d6f8d1cee8"}]: dispatch
Oct 11 04:26:36 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/358235873' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "235849fc-4683-43e5-9b6a-a0d6f8d1cee8"}]': finished
Oct 11 04:26:36 compute-0 ceph-mon[74243]: osdmap e5: 2 total, 0 up, 2 in
Oct 11 04:26:36 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 11 04:26:36 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 11 04:26:36 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/2027380580' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Oct 11 04:26:37 compute-0 ceph-mon[74243]: pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 04:26:38 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 04:26:38 compute-0 sad_kalam[83797]:  stderr: 2025-10-11T04:26:35.900+0000 7f1c916e7740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct 11 04:26:38 compute-0 sad_kalam[83797]:  stderr: 2025-10-11T04:26:35.900+0000 7f1c916e7740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct 11 04:26:38 compute-0 sad_kalam[83797]:  stderr: 2025-10-11T04:26:35.900+0000 7f1c916e7740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct 11 04:26:38 compute-0 sad_kalam[83797]:  stderr: 2025-10-11T04:26:35.900+0000 7f1c916e7740 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Oct 11 04:26:38 compute-0 sad_kalam[83797]: --> ceph-volume lvm prepare successful for: ceph_vg1/ceph_lv1
Oct 11 04:26:38 compute-0 sad_kalam[83797]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct 11 04:26:38 compute-0 sad_kalam[83797]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Oct 11 04:26:38 compute-0 sad_kalam[83797]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Oct 11 04:26:38 compute-0 sad_kalam[83797]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Oct 11 04:26:38 compute-0 sad_kalam[83797]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Oct 11 04:26:38 compute-0 sad_kalam[83797]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct 11 04:26:38 compute-0 sad_kalam[83797]: --> ceph-volume lvm activate successful for osd ID: 1
Oct 11 04:26:38 compute-0 sad_kalam[83797]: --> ceph-volume lvm create successful for: ceph_vg1/ceph_lv1
Oct 11 04:26:38 compute-0 sad_kalam[83797]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 11 04:26:38 compute-0 sad_kalam[83797]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 30486846-2cc7-4787-8e36-0fb42ef328c5
Oct 11 04:26:38 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "30486846-2cc7-4787-8e36-0fb42ef328c5"} v 0) v1
Oct 11 04:26:38 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3937759507' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "30486846-2cc7-4787-8e36-0fb42ef328c5"}]: dispatch
Oct 11 04:26:38 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Oct 11 04:26:38 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 11 04:26:38 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3937759507' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "30486846-2cc7-4787-8e36-0fb42ef328c5"}]': finished
Oct 11 04:26:38 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e6 e6: 3 total, 0 up, 3 in
Oct 11 04:26:38 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e6: 3 total, 0 up, 3 in
Oct 11 04:26:38 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 11 04:26:38 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 11 04:26:38 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 11 04:26:38 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 11 04:26:38 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 11 04:26:38 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 11 04:26:38 compute-0 ceph-mgr[74542]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 11 04:26:38 compute-0 ceph-mgr[74542]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 11 04:26:38 compute-0 ceph-mgr[74542]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 11 04:26:38 compute-0 lvm[85728]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Oct 11 04:26:38 compute-0 lvm[85728]: VG ceph_vg2 finished
Oct 11 04:26:38 compute-0 sad_kalam[83797]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 11 04:26:38 compute-0 sad_kalam[83797]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
Oct 11 04:26:38 compute-0 sad_kalam[83797]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg2/ceph_lv2
Oct 11 04:26:38 compute-0 sad_kalam[83797]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Oct 11 04:26:38 compute-0 sad_kalam[83797]: Running command: /usr/bin/ln -s /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Oct 11 04:26:38 compute-0 sad_kalam[83797]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
Oct 11 04:26:39 compute-0 ceph-mon[74243]: pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 04:26:39 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/3937759507' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "30486846-2cc7-4787-8e36-0fb42ef328c5"}]: dispatch
Oct 11 04:26:39 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/3937759507' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "30486846-2cc7-4787-8e36-0fb42ef328c5"}]': finished
Oct 11 04:26:39 compute-0 ceph-mon[74243]: osdmap e6: 3 total, 0 up, 3 in
Oct 11 04:26:39 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 11 04:26:39 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 11 04:26:39 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 11 04:26:39 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Oct 11 04:26:39 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4039152933' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Oct 11 04:26:39 compute-0 sad_kalam[83797]:  stderr: got monmap epoch 1
Oct 11 04:26:39 compute-0 sad_kalam[83797]: --> Creating keyring file for osd.2
Oct 11 04:26:39 compute-0 sad_kalam[83797]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
Oct 11 04:26:39 compute-0 sad_kalam[83797]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
Oct 11 04:26:39 compute-0 sad_kalam[83797]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid 30486846-2cc7-4787-8e36-0fb42ef328c5 --setuser ceph --setgroup ceph
Oct 11 04:26:39 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:26:40 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 04:26:40 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/4039152933' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Oct 11 04:26:41 compute-0 ceph-mon[74243]: pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 04:26:41 compute-0 sad_kalam[83797]:  stderr: 2025-10-11T04:26:39.429+0000 7f1d6534e740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct 11 04:26:41 compute-0 sad_kalam[83797]:  stderr: 2025-10-11T04:26:39.429+0000 7f1d6534e740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct 11 04:26:41 compute-0 sad_kalam[83797]:  stderr: 2025-10-11T04:26:39.430+0000 7f1d6534e740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct 11 04:26:41 compute-0 sad_kalam[83797]:  stderr: 2025-10-11T04:26:39.430+0000 7f1d6534e740 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable uuid
Oct 11 04:26:41 compute-0 sad_kalam[83797]: --> ceph-volume lvm prepare successful for: ceph_vg2/ceph_lv2
Oct 11 04:26:41 compute-0 sad_kalam[83797]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct 11 04:26:41 compute-0 sad_kalam[83797]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Oct 11 04:26:41 compute-0 sad_kalam[83797]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Oct 11 04:26:41 compute-0 sad_kalam[83797]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Oct 11 04:26:41 compute-0 sad_kalam[83797]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Oct 11 04:26:41 compute-0 sad_kalam[83797]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct 11 04:26:41 compute-0 sad_kalam[83797]: --> ceph-volume lvm activate successful for osd ID: 2
Oct 11 04:26:41 compute-0 sad_kalam[83797]: --> ceph-volume lvm create successful for: ceph_vg2/ceph_lv2
Oct 11 04:26:41 compute-0 systemd[1]: libpod-c9ecd342e84add5ae326f3e15e2f627b794e13e28717c90f713718c5393f7d70.scope: Deactivated successfully.
Oct 11 04:26:41 compute-0 systemd[1]: libpod-c9ecd342e84add5ae326f3e15e2f627b794e13e28717c90f713718c5393f7d70.scope: Consumed 6.261s CPU time.
Oct 11 04:26:41 compute-0 podman[83781]: 2025-10-11 04:26:41.826672341 +0000 UTC m=+12.227294207 container died c9ecd342e84add5ae326f3e15e2f627b794e13e28717c90f713718c5393f7d70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_kalam, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 11 04:26:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-ff49fd5c19a0ff0eca2e10bd3f6afc6119c566bb22eccf069359aec9137a62a4-merged.mount: Deactivated successfully.
Oct 11 04:26:41 compute-0 podman[83781]: 2025-10-11 04:26:41.898750048 +0000 UTC m=+12.299371894 container remove c9ecd342e84add5ae326f3e15e2f627b794e13e28717c90f713718c5393f7d70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_kalam, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 11 04:26:41 compute-0 systemd[1]: libpod-conmon-c9ecd342e84add5ae326f3e15e2f627b794e13e28717c90f713718c5393f7d70.scope: Deactivated successfully.
Oct 11 04:26:41 compute-0 sudo[83674]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:42 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 04:26:42 compute-0 sudo[86646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:26:42 compute-0 sudo[86646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:42 compute-0 sudo[86646]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:42 compute-0 sudo[86671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:26:42 compute-0 sudo[86671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:42 compute-0 sudo[86671]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:42 compute-0 sudo[86696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:26:42 compute-0 sudo[86696]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:42 compute-0 sudo[86696]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:42 compute-0 sudo[86721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -- lvm list --format json
Oct 11 04:26:42 compute-0 sudo[86721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:42 compute-0 podman[86786]: 2025-10-11 04:26:42.730544541 +0000 UTC m=+0.065067763 container create be8c0aefa5f5d5d722953e2e3e585f5d9272f0774a305265aae0754259fe7cd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_kapitsa, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Oct 11 04:26:42 compute-0 systemd[1]: Started libpod-conmon-be8c0aefa5f5d5d722953e2e3e585f5d9272f0774a305265aae0754259fe7cd4.scope.
Oct 11 04:26:42 compute-0 podman[86786]: 2025-10-11 04:26:42.704185023 +0000 UTC m=+0.038708335 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:26:42 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:26:42 compute-0 podman[86786]: 2025-10-11 04:26:42.820302098 +0000 UTC m=+0.154825350 container init be8c0aefa5f5d5d722953e2e3e585f5d9272f0774a305265aae0754259fe7cd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_kapitsa, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 11 04:26:42 compute-0 podman[86786]: 2025-10-11 04:26:42.83123762 +0000 UTC m=+0.165760872 container start be8c0aefa5f5d5d722953e2e3e585f5d9272f0774a305265aae0754259fe7cd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_kapitsa, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 11 04:26:42 compute-0 podman[86786]: 2025-10-11 04:26:42.83522551 +0000 UTC m=+0.169748762 container attach be8c0aefa5f5d5d722953e2e3e585f5d9272f0774a305265aae0754259fe7cd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:26:42 compute-0 affectionate_kapitsa[86802]: 167 167
Oct 11 04:26:42 compute-0 systemd[1]: libpod-be8c0aefa5f5d5d722953e2e3e585f5d9272f0774a305265aae0754259fe7cd4.scope: Deactivated successfully.
Oct 11 04:26:42 compute-0 podman[86786]: 2025-10-11 04:26:42.83843663 +0000 UTC m=+0.172959912 container died be8c0aefa5f5d5d722953e2e3e585f5d9272f0774a305265aae0754259fe7cd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_kapitsa, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 11 04:26:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-8e86a2d32bebc8177494fae754af977e399ffdf10cc3561c99533129d6937fdb-merged.mount: Deactivated successfully.
Oct 11 04:26:42 compute-0 podman[86786]: 2025-10-11 04:26:42.884405486 +0000 UTC m=+0.218928738 container remove be8c0aefa5f5d5d722953e2e3e585f5d9272f0774a305265aae0754259fe7cd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_kapitsa, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True)
Oct 11 04:26:42 compute-0 systemd[1]: libpod-conmon-be8c0aefa5f5d5d722953e2e3e585f5d9272f0774a305265aae0754259fe7cd4.scope: Deactivated successfully.
Oct 11 04:26:43 compute-0 podman[86826]: 2025-10-11 04:26:43.093298132 +0000 UTC m=+0.045994937 container create 23da43eb47e5d720218572e7d4ac7fa7290a29e5f4279de49102e44717ace657 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_pike, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 11 04:26:43 compute-0 systemd[1]: Started libpod-conmon-23da43eb47e5d720218572e7d4ac7fa7290a29e5f4279de49102e44717ace657.scope.
Oct 11 04:26:43 compute-0 podman[86826]: 2025-10-11 04:26:43.075734164 +0000 UTC m=+0.028430999 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:26:43 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:26:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acaa750002cbd143c6bad8cfb820ea51ecf57c668d1315abccc235fd1bb41820/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acaa750002cbd143c6bad8cfb820ea51ecf57c668d1315abccc235fd1bb41820/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acaa750002cbd143c6bad8cfb820ea51ecf57c668d1315abccc235fd1bb41820/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acaa750002cbd143c6bad8cfb820ea51ecf57c668d1315abccc235fd1bb41820/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:43 compute-0 podman[86826]: 2025-10-11 04:26:43.193432058 +0000 UTC m=+0.146128893 container init 23da43eb47e5d720218572e7d4ac7fa7290a29e5f4279de49102e44717ace657 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_pike, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Oct 11 04:26:43 compute-0 podman[86826]: 2025-10-11 04:26:43.212164565 +0000 UTC m=+0.164861400 container start 23da43eb47e5d720218572e7d4ac7fa7290a29e5f4279de49102e44717ace657 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_pike, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 11 04:26:43 compute-0 podman[86826]: 2025-10-11 04:26:43.216440771 +0000 UTC m=+0.169137586 container attach 23da43eb47e5d720218572e7d4ac7fa7290a29e5f4279de49102e44717ace657 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_pike, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:26:43 compute-0 ceph-mon[74243]: pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 04:26:43 compute-0 heuristic_pike[86843]: {
Oct 11 04:26:43 compute-0 heuristic_pike[86843]:     "0": [
Oct 11 04:26:43 compute-0 heuristic_pike[86843]:         {
Oct 11 04:26:43 compute-0 heuristic_pike[86843]:             "devices": [
Oct 11 04:26:43 compute-0 heuristic_pike[86843]:                 "/dev/loop3"
Oct 11 04:26:43 compute-0 heuristic_pike[86843]:             ],
Oct 11 04:26:43 compute-0 heuristic_pike[86843]:             "lv_name": "ceph_lv0",
Oct 11 04:26:43 compute-0 heuristic_pike[86843]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:26:43 compute-0 heuristic_pike[86843]:             "lv_size": "21470642176",
Oct 11 04:26:43 compute-0 heuristic_pike[86843]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29ed28f5-c2da-4c6f-bb64-dc7391248f4a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:26:43 compute-0 heuristic_pike[86843]:             "lv_uuid": "SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf",
Oct 11 04:26:43 compute-0 heuristic_pike[86843]:             "name": "ceph_lv0",
Oct 11 04:26:43 compute-0 heuristic_pike[86843]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:26:43 compute-0 heuristic_pike[86843]:             "tags": {
Oct 11 04:26:43 compute-0 heuristic_pike[86843]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:26:43 compute-0 heuristic_pike[86843]:                 "ceph.block_uuid": "SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf",
Oct 11 04:26:43 compute-0 heuristic_pike[86843]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:26:43 compute-0 heuristic_pike[86843]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:26:43 compute-0 heuristic_pike[86843]:                 "ceph.cluster_name": "ceph",
Oct 11 04:26:43 compute-0 heuristic_pike[86843]:                 "ceph.crush_device_class": "",
Oct 11 04:26:43 compute-0 heuristic_pike[86843]:                 "ceph.encrypted": "0",
Oct 11 04:26:43 compute-0 heuristic_pike[86843]:                 "ceph.osd_fsid": "29ed28f5-c2da-4c6f-bb64-dc7391248f4a",
Oct 11 04:26:43 compute-0 heuristic_pike[86843]:                 "ceph.osd_id": "0",
Oct 11 04:26:43 compute-0 heuristic_pike[86843]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:26:43 compute-0 heuristic_pike[86843]:                 "ceph.type": "block",
Oct 11 04:26:43 compute-0 heuristic_pike[86843]:                 "ceph.vdo": "0"
Oct 11 04:26:43 compute-0 heuristic_pike[86843]:             },
Oct 11 04:26:43 compute-0 heuristic_pike[86843]:             "type": "block",
Oct 11 04:26:43 compute-0 heuristic_pike[86843]:             "vg_name": "ceph_vg0"
Oct 11 04:26:43 compute-0 heuristic_pike[86843]:         }
Oct 11 04:26:43 compute-0 heuristic_pike[86843]:     ],
Oct 11 04:26:43 compute-0 heuristic_pike[86843]:     "1": [
Oct 11 04:26:43 compute-0 heuristic_pike[86843]:         {
Oct 11 04:26:43 compute-0 heuristic_pike[86843]:             "devices": [
Oct 11 04:26:43 compute-0 heuristic_pike[86843]:                 "/dev/loop4"
Oct 11 04:26:43 compute-0 heuristic_pike[86843]:             ],
Oct 11 04:26:43 compute-0 heuristic_pike[86843]:             "lv_name": "ceph_lv1",
Oct 11 04:26:43 compute-0 heuristic_pike[86843]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:26:43 compute-0 heuristic_pike[86843]:             "lv_size": "21470642176",
Oct 11 04:26:43 compute-0 heuristic_pike[86843]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=235849fc-4683-43e5-9b6a-a0d6f8d1cee8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:26:43 compute-0 heuristic_pike[86843]:             "lv_uuid": "N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol",
Oct 11 04:26:43 compute-0 heuristic_pike[86843]:             "name": "ceph_lv1",
Oct 11 04:26:43 compute-0 heuristic_pike[86843]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:26:43 compute-0 heuristic_pike[86843]:             "tags": {
Oct 11 04:26:43 compute-0 heuristic_pike[86843]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:26:43 compute-0 heuristic_pike[86843]:                 "ceph.block_uuid": "N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol",
Oct 11 04:26:43 compute-0 heuristic_pike[86843]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:26:43 compute-0 heuristic_pike[86843]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:26:43 compute-0 heuristic_pike[86843]:                 "ceph.cluster_name": "ceph",
Oct 11 04:26:43 compute-0 heuristic_pike[86843]:                 "ceph.crush_device_class": "",
Oct 11 04:26:43 compute-0 heuristic_pike[86843]:                 "ceph.encrypted": "0",
Oct 11 04:26:43 compute-0 heuristic_pike[86843]:                 "ceph.osd_fsid": "235849fc-4683-43e5-9b6a-a0d6f8d1cee8",
Oct 11 04:26:43 compute-0 heuristic_pike[86843]:                 "ceph.osd_id": "1",
Oct 11 04:26:43 compute-0 heuristic_pike[86843]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:26:43 compute-0 heuristic_pike[86843]:                 "ceph.type": "block",
Oct 11 04:26:43 compute-0 heuristic_pike[86843]:                 "ceph.vdo": "0"
Oct 11 04:26:43 compute-0 heuristic_pike[86843]:             },
Oct 11 04:26:43 compute-0 heuristic_pike[86843]:             "type": "block",
Oct 11 04:26:43 compute-0 heuristic_pike[86843]:             "vg_name": "ceph_vg1"
Oct 11 04:26:43 compute-0 heuristic_pike[86843]:         }
Oct 11 04:26:43 compute-0 heuristic_pike[86843]:     ],
Oct 11 04:26:43 compute-0 heuristic_pike[86843]:     "2": [
Oct 11 04:26:43 compute-0 heuristic_pike[86843]:         {
Oct 11 04:26:43 compute-0 heuristic_pike[86843]:             "devices": [
Oct 11 04:26:43 compute-0 heuristic_pike[86843]:                 "/dev/loop5"
Oct 11 04:26:43 compute-0 heuristic_pike[86843]:             ],
Oct 11 04:26:43 compute-0 heuristic_pike[86843]:             "lv_name": "ceph_lv2",
Oct 11 04:26:43 compute-0 heuristic_pike[86843]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:26:43 compute-0 heuristic_pike[86843]:             "lv_size": "21470642176",
Oct 11 04:26:43 compute-0 heuristic_pike[86843]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=30486846-2cc7-4787-8e36-0fb42ef328c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:26:43 compute-0 heuristic_pike[86843]:             "lv_uuid": "HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a",
Oct 11 04:26:43 compute-0 heuristic_pike[86843]:             "name": "ceph_lv2",
Oct 11 04:26:43 compute-0 heuristic_pike[86843]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:26:43 compute-0 heuristic_pike[86843]:             "tags": {
Oct 11 04:26:43 compute-0 heuristic_pike[86843]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:26:43 compute-0 heuristic_pike[86843]:                 "ceph.block_uuid": "HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a",
Oct 11 04:26:43 compute-0 heuristic_pike[86843]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:26:43 compute-0 heuristic_pike[86843]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:26:43 compute-0 heuristic_pike[86843]:                 "ceph.cluster_name": "ceph",
Oct 11 04:26:43 compute-0 heuristic_pike[86843]:                 "ceph.crush_device_class": "",
Oct 11 04:26:43 compute-0 heuristic_pike[86843]:                 "ceph.encrypted": "0",
Oct 11 04:26:43 compute-0 heuristic_pike[86843]:                 "ceph.osd_fsid": "30486846-2cc7-4787-8e36-0fb42ef328c5",
Oct 11 04:26:43 compute-0 heuristic_pike[86843]:                 "ceph.osd_id": "2",
Oct 11 04:26:43 compute-0 heuristic_pike[86843]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:26:43 compute-0 heuristic_pike[86843]:                 "ceph.type": "block",
Oct 11 04:26:43 compute-0 heuristic_pike[86843]:                 "ceph.vdo": "0"
Oct 11 04:26:43 compute-0 heuristic_pike[86843]:             },
Oct 11 04:26:43 compute-0 heuristic_pike[86843]:             "type": "block",
Oct 11 04:26:43 compute-0 heuristic_pike[86843]:             "vg_name": "ceph_vg2"
Oct 11 04:26:43 compute-0 heuristic_pike[86843]:         }
Oct 11 04:26:43 compute-0 heuristic_pike[86843]:     ]
Oct 11 04:26:43 compute-0 heuristic_pike[86843]: }
Oct 11 04:26:44 compute-0 systemd[1]: libpod-23da43eb47e5d720218572e7d4ac7fa7290a29e5f4279de49102e44717ace657.scope: Deactivated successfully.
Oct 11 04:26:44 compute-0 podman[86826]: 2025-10-11 04:26:44.016616966 +0000 UTC m=+0.969313821 container died 23da43eb47e5d720218572e7d4ac7fa7290a29e5f4279de49102e44717ace657 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_pike, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 11 04:26:44 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 04:26:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-acaa750002cbd143c6bad8cfb820ea51ecf57c668d1315abccc235fd1bb41820-merged.mount: Deactivated successfully.
Oct 11 04:26:44 compute-0 podman[86826]: 2025-10-11 04:26:44.096442105 +0000 UTC m=+1.049138940 container remove 23da43eb47e5d720218572e7d4ac7fa7290a29e5f4279de49102e44717ace657 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_pike, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:26:44 compute-0 systemd[1]: libpod-conmon-23da43eb47e5d720218572e7d4ac7fa7290a29e5f4279de49102e44717ace657.scope: Deactivated successfully.
Oct 11 04:26:44 compute-0 sudo[86721]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:44 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0) v1
Oct 11 04:26:44 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Oct 11 04:26:44 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:26:44 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:26:44 compute-0 ceph-mgr[74542]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-0
Oct 11 04:26:44 compute-0 ceph-mgr[74542]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-0
Oct 11 04:26:44 compute-0 sudo[86865]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:26:44 compute-0 sudo[86865]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:44 compute-0 sudo[86865]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:44 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Oct 11 04:26:44 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:26:44 compute-0 sudo[86890]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:26:44 compute-0 sudo[86890]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:44 compute-0 sudo[86890]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:44 compute-0 sudo[86915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:26:44 compute-0 sudo[86915]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:44 compute-0 sudo[86915]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:44 compute-0 sudo[86940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a
Oct 11 04:26:44 compute-0 sudo[86940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:44 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:26:44 compute-0 podman[87006]: 2025-10-11 04:26:44.909591412 +0000 UTC m=+0.062368565 container create 2e7d5bbb50447336a16cf928b0cc1e0842d6b6f10d8ea027ae08f071f0fde8cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_bouman, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:26:44 compute-0 systemd[1]: Started libpod-conmon-2e7d5bbb50447336a16cf928b0cc1e0842d6b6f10d8ea027ae08f071f0fde8cf.scope.
Oct 11 04:26:44 compute-0 podman[87006]: 2025-10-11 04:26:44.880765304 +0000 UTC m=+0.033542507 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:26:44 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:26:45 compute-0 podman[87006]: 2025-10-11 04:26:45.019893251 +0000 UTC m=+0.172670414 container init 2e7d5bbb50447336a16cf928b0cc1e0842d6b6f10d8ea027ae08f071f0fde8cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3)
Oct 11 04:26:45 compute-0 podman[87006]: 2025-10-11 04:26:45.030618269 +0000 UTC m=+0.183395392 container start 2e7d5bbb50447336a16cf928b0cc1e0842d6b6f10d8ea027ae08f071f0fde8cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_bouman, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 04:26:45 compute-0 podman[87006]: 2025-10-11 04:26:45.034497266 +0000 UTC m=+0.187274459 container attach 2e7d5bbb50447336a16cf928b0cc1e0842d6b6f10d8ea027ae08f071f0fde8cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_bouman, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 11 04:26:45 compute-0 romantic_bouman[87022]: 167 167
Oct 11 04:26:45 compute-0 systemd[1]: libpod-2e7d5bbb50447336a16cf928b0cc1e0842d6b6f10d8ea027ae08f071f0fde8cf.scope: Deactivated successfully.
Oct 11 04:26:45 compute-0 podman[87006]: 2025-10-11 04:26:45.040198768 +0000 UTC m=+0.192975911 container died 2e7d5bbb50447336a16cf928b0cc1e0842d6b6f10d8ea027ae08f071f0fde8cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_bouman, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:26:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-f45f0471c6f5ebb01c4f1561d9fde01bc3baf253f407a61613dcecbb435be6f7-merged.mount: Deactivated successfully.
Oct 11 04:26:45 compute-0 podman[87006]: 2025-10-11 04:26:45.092653815 +0000 UTC m=+0.245430958 container remove 2e7d5bbb50447336a16cf928b0cc1e0842d6b6f10d8ea027ae08f071f0fde8cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True)
Oct 11 04:26:45 compute-0 systemd[1]: libpod-conmon-2e7d5bbb50447336a16cf928b0cc1e0842d6b6f10d8ea027ae08f071f0fde8cf.scope: Deactivated successfully.
Oct 11 04:26:45 compute-0 ceph-mon[74243]: pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 04:26:45 compute-0 ceph-mon[74243]: Deploying daemon osd.0 on compute-0
Oct 11 04:26:45 compute-0 podman[87053]: 2025-10-11 04:26:45.51212364 +0000 UTC m=+0.065973435 container create 31be82cbdb07033bb91b49a147abb8413aa0226c4100c66aedbbd858e080e638 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-osd-0-activate-test, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:26:45 compute-0 systemd[1]: Started libpod-conmon-31be82cbdb07033bb91b49a147abb8413aa0226c4100c66aedbbd858e080e638.scope.
Oct 11 04:26:45 compute-0 podman[87053]: 2025-10-11 04:26:45.492229524 +0000 UTC m=+0.046079369 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:26:45 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:26:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8aea2067efae30019cdc2259ab97628e94a42bd6b6d4dcd1c7bb24443c85741/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8aea2067efae30019cdc2259ab97628e94a42bd6b6d4dcd1c7bb24443c85741/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8aea2067efae30019cdc2259ab97628e94a42bd6b6d4dcd1c7bb24443c85741/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8aea2067efae30019cdc2259ab97628e94a42bd6b6d4dcd1c7bb24443c85741/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8aea2067efae30019cdc2259ab97628e94a42bd6b6d4dcd1c7bb24443c85741/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:45 compute-0 podman[87053]: 2025-10-11 04:26:45.618007719 +0000 UTC m=+0.171857594 container init 31be82cbdb07033bb91b49a147abb8413aa0226c4100c66aedbbd858e080e638 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-osd-0-activate-test, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 11 04:26:45 compute-0 podman[87053]: 2025-10-11 04:26:45.637609378 +0000 UTC m=+0.191459203 container start 31be82cbdb07033bb91b49a147abb8413aa0226c4100c66aedbbd858e080e638 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-osd-0-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True)
Oct 11 04:26:45 compute-0 podman[87053]: 2025-10-11 04:26:45.641969017 +0000 UTC m=+0.195818812 container attach 31be82cbdb07033bb91b49a147abb8413aa0226c4100c66aedbbd858e080e638 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-osd-0-activate-test, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 04:26:46 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 04:26:46 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-osd-0-activate-test[87069]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Oct 11 04:26:46 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-osd-0-activate-test[87069]:                             [--no-systemd] [--no-tmpfs]
Oct 11 04:26:46 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-osd-0-activate-test[87069]: ceph-volume activate: error: unrecognized arguments: --bad-option
Oct 11 04:26:46 compute-0 systemd[1]: libpod-31be82cbdb07033bb91b49a147abb8413aa0226c4100c66aedbbd858e080e638.scope: Deactivated successfully.
Oct 11 04:26:46 compute-0 podman[87053]: 2025-10-11 04:26:46.312297645 +0000 UTC m=+0.866147510 container died 31be82cbdb07033bb91b49a147abb8413aa0226c4100c66aedbbd858e080e638 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-osd-0-activate-test, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:26:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-f8aea2067efae30019cdc2259ab97628e94a42bd6b6d4dcd1c7bb24443c85741-merged.mount: Deactivated successfully.
Oct 11 04:26:46 compute-0 podman[87053]: 2025-10-11 04:26:46.368605558 +0000 UTC m=+0.922455353 container remove 31be82cbdb07033bb91b49a147abb8413aa0226c4100c66aedbbd858e080e638 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-osd-0-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 11 04:26:46 compute-0 systemd[1]: libpod-conmon-31be82cbdb07033bb91b49a147abb8413aa0226c4100c66aedbbd858e080e638.scope: Deactivated successfully.
Oct 11 04:26:46 compute-0 systemd[1]: Reloading.
Oct 11 04:26:46 compute-0 systemd-rc-local-generator[87134]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:26:46 compute-0 systemd-sysv-generator[87138]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:26:46 compute-0 systemd[1]: Reloading.
Oct 11 04:26:47 compute-0 systemd-rc-local-generator[87175]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:26:47 compute-0 systemd-sysv-generator[87179]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:26:47 compute-0 systemd[1]: Starting Ceph osd.0 for 166d0489-2ae7-59eb-961c-c1b5cda4b45a...
Oct 11 04:26:47 compute-0 ceph-mon[74243]: pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 04:26:47 compute-0 podman[87234]: 2025-10-11 04:26:47.556894596 +0000 UTC m=+0.070450537 container create daac01efaf80aebba80b85287b8bf777bfbdb5066e510b95c9fbb0959f15b25e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-osd-0-activate, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:26:47 compute-0 podman[87234]: 2025-10-11 04:26:47.529177225 +0000 UTC m=+0.042733206 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:26:47 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:26:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a12cd260f35861969efa6513c517794692d6f81c97adc43571e0064258753c6d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a12cd260f35861969efa6513c517794692d6f81c97adc43571e0064258753c6d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a12cd260f35861969efa6513c517794692d6f81c97adc43571e0064258753c6d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a12cd260f35861969efa6513c517794692d6f81c97adc43571e0064258753c6d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a12cd260f35861969efa6513c517794692d6f81c97adc43571e0064258753c6d/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:47 compute-0 podman[87234]: 2025-10-11 04:26:47.649148456 +0000 UTC m=+0.162704457 container init daac01efaf80aebba80b85287b8bf777bfbdb5066e510b95c9fbb0959f15b25e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-osd-0-activate, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:26:47 compute-0 podman[87234]: 2025-10-11 04:26:47.665796971 +0000 UTC m=+0.179352922 container start daac01efaf80aebba80b85287b8bf777bfbdb5066e510b95c9fbb0959f15b25e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-osd-0-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:26:47 compute-0 podman[87234]: 2025-10-11 04:26:47.67098918 +0000 UTC m=+0.184545121 container attach daac01efaf80aebba80b85287b8bf777bfbdb5066e510b95c9fbb0959f15b25e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-osd-0-activate, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 11 04:26:48 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 04:26:48 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-osd-0-activate[87249]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct 11 04:26:48 compute-0 bash[87234]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct 11 04:26:48 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-osd-0-activate[87249]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Oct 11 04:26:48 compute-0 bash[87234]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Oct 11 04:26:48 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-osd-0-activate[87249]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Oct 11 04:26:48 compute-0 bash[87234]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Oct 11 04:26:48 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-osd-0-activate[87249]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct 11 04:26:48 compute-0 bash[87234]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct 11 04:26:48 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-osd-0-activate[87249]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Oct 11 04:26:48 compute-0 bash[87234]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Oct 11 04:26:48 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-osd-0-activate[87249]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct 11 04:26:48 compute-0 bash[87234]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct 11 04:26:48 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-osd-0-activate[87249]: --> ceph-volume raw activate successful for osd ID: 0
Oct 11 04:26:48 compute-0 bash[87234]: --> ceph-volume raw activate successful for osd ID: 0
Oct 11 04:26:48 compute-0 systemd[1]: libpod-daac01efaf80aebba80b85287b8bf777bfbdb5066e510b95c9fbb0959f15b25e.scope: Deactivated successfully.
Oct 11 04:26:48 compute-0 podman[87234]: 2025-10-11 04:26:48.929157949 +0000 UTC m=+1.442713900 container died daac01efaf80aebba80b85287b8bf777bfbdb5066e510b95c9fbb0959f15b25e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-osd-0-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:26:48 compute-0 systemd[1]: libpod-daac01efaf80aebba80b85287b8bf777bfbdb5066e510b95c9fbb0959f15b25e.scope: Consumed 1.285s CPU time.
Oct 11 04:26:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-a12cd260f35861969efa6513c517794692d6f81c97adc43571e0064258753c6d-merged.mount: Deactivated successfully.
Oct 11 04:26:49 compute-0 podman[87234]: 2025-10-11 04:26:49.008618099 +0000 UTC m=+1.522174010 container remove daac01efaf80aebba80b85287b8bf777bfbdb5066e510b95c9fbb0959f15b25e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-osd-0-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 11 04:26:49 compute-0 podman[87439]: 2025-10-11 04:26:49.281295036 +0000 UTC m=+0.045212048 container create b9a5ba75457686df663c3ff7704fe3024df1b2fffa59e0eeb952383d6ccc8061 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-osd-0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 11 04:26:49 compute-0 ceph-mon[74243]: pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 04:26:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc9e8d6a3a78dbc13c8ab8ba4001883c03570522aad07607c4517d435170d747/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc9e8d6a3a78dbc13c8ab8ba4001883c03570522aad07607c4517d435170d747/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc9e8d6a3a78dbc13c8ab8ba4001883c03570522aad07607c4517d435170d747/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc9e8d6a3a78dbc13c8ab8ba4001883c03570522aad07607c4517d435170d747/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc9e8d6a3a78dbc13c8ab8ba4001883c03570522aad07607c4517d435170d747/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:49 compute-0 podman[87439]: 2025-10-11 04:26:49.345302861 +0000 UTC m=+0.109219883 container init b9a5ba75457686df663c3ff7704fe3024df1b2fffa59e0eeb952383d6ccc8061 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-osd-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 11 04:26:49 compute-0 podman[87439]: 2025-10-11 04:26:49.356421208 +0000 UTC m=+0.120338220 container start b9a5ba75457686df663c3ff7704fe3024df1b2fffa59e0eeb952383d6ccc8061 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-osd-0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3)
Oct 11 04:26:49 compute-0 podman[87439]: 2025-10-11 04:26:49.263100562 +0000 UTC m=+0.027017614 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:26:49 compute-0 bash[87439]: b9a5ba75457686df663c3ff7704fe3024df1b2fffa59e0eeb952383d6ccc8061
Oct 11 04:26:49 compute-0 systemd[1]: Started Ceph osd.0 for 166d0489-2ae7-59eb-961c-c1b5cda4b45a.
Oct 11 04:26:49 compute-0 ceph-osd[87458]: set uid:gid to 167:167 (ceph:ceph)
Oct 11 04:26:49 compute-0 ceph-osd[87458]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Oct 11 04:26:49 compute-0 ceph-osd[87458]: pidfile_write: ignore empty --pid-file
Oct 11 04:26:49 compute-0 ceph-osd[87458]: bdev(0x55660e2fb800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 11 04:26:49 compute-0 ceph-osd[87458]: bdev(0x55660e2fb800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 11 04:26:49 compute-0 ceph-osd[87458]: bdev(0x55660e2fb800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 11 04:26:49 compute-0 ceph-osd[87458]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 11 04:26:49 compute-0 ceph-osd[87458]: bdev(0x55660f13d800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 11 04:26:49 compute-0 ceph-osd[87458]: bdev(0x55660f13d800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 11 04:26:49 compute-0 ceph-osd[87458]: bdev(0x55660f13d800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 11 04:26:49 compute-0 ceph-osd[87458]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Oct 11 04:26:49 compute-0 ceph-osd[87458]: bdev(0x55660f13d800 /var/lib/ceph/osd/ceph-0/block) close
Oct 11 04:26:49 compute-0 sudo[86940]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:49 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:26:49 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:49 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:26:49 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:49 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0) v1
Oct 11 04:26:49 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Oct 11 04:26:49 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:26:49 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:26:49 compute-0 ceph-mgr[74542]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-0
Oct 11 04:26:49 compute-0 ceph-mgr[74542]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-0
Oct 11 04:26:49 compute-0 sudo[87471]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:26:49 compute-0 sudo[87471]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:49 compute-0 sudo[87471]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:49 compute-0 sudo[87496]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:26:49 compute-0 sudo[87496]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:49 compute-0 sudo[87496]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:49 compute-0 sudo[87521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:26:49 compute-0 sudo[87521]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:49 compute-0 sudo[87521]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:49 compute-0 ceph-osd[87458]: bdev(0x55660e2fb800 /var/lib/ceph/osd/ceph-0/block) close
Oct 11 04:26:49 compute-0 sudo[87546]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a
Oct 11 04:26:49 compute-0 sudo[87546]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:49 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:26:49 compute-0 ceph-osd[87458]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Oct 11 04:26:49 compute-0 ceph-osd[87458]: load: jerasure load: lrc 
Oct 11 04:26:49 compute-0 ceph-osd[87458]: bdev(0x55660f1bec00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 11 04:26:49 compute-0 ceph-osd[87458]: bdev(0x55660f1bec00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 11 04:26:49 compute-0 ceph-osd[87458]: bdev(0x55660f1bec00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 11 04:26:49 compute-0 ceph-osd[87458]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 11 04:26:49 compute-0 ceph-osd[87458]: bdev(0x55660f1bec00 /var/lib/ceph/osd/ceph-0/block) close
Oct 11 04:26:50 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 04:26:50 compute-0 podman[87618]: 2025-10-11 04:26:50.063509233 +0000 UTC m=+0.046764017 container create b9eef48e0981e6e9a2742eada447f05107701a1f9bfd007e991c0e94a302b6fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:26:50 compute-0 systemd[1]: Started libpod-conmon-b9eef48e0981e6e9a2742eada447f05107701a1f9bfd007e991c0e94a302b6fc.scope.
Oct 11 04:26:50 compute-0 podman[87618]: 2025-10-11 04:26:50.043933005 +0000 UTC m=+0.027187769 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:26:50 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:26:50 compute-0 podman[87618]: 2025-10-11 04:26:50.173502684 +0000 UTC m=+0.156757508 container init b9eef48e0981e6e9a2742eada447f05107701a1f9bfd007e991c0e94a302b6fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_hellman, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:26:50 compute-0 podman[87618]: 2025-10-11 04:26:50.185214446 +0000 UTC m=+0.168469190 container start b9eef48e0981e6e9a2742eada447f05107701a1f9bfd007e991c0e94a302b6fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_hellman, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:26:50 compute-0 podman[87618]: 2025-10-11 04:26:50.188826546 +0000 UTC m=+0.172081330 container attach b9eef48e0981e6e9a2742eada447f05107701a1f9bfd007e991c0e94a302b6fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_hellman, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:26:50 compute-0 laughing_hellman[87634]: 167 167
Oct 11 04:26:50 compute-0 systemd[1]: libpod-b9eef48e0981e6e9a2742eada447f05107701a1f9bfd007e991c0e94a302b6fc.scope: Deactivated successfully.
Oct 11 04:26:50 compute-0 podman[87618]: 2025-10-11 04:26:50.193842271 +0000 UTC m=+0.177097015 container died b9eef48e0981e6e9a2742eada447f05107701a1f9bfd007e991c0e94a302b6fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_hellman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:26:50 compute-0 ceph-osd[87458]: bdev(0x55660f1bec00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 11 04:26:50 compute-0 ceph-osd[87458]: bdev(0x55660f1bec00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 11 04:26:50 compute-0 ceph-osd[87458]: bdev(0x55660f1bec00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 11 04:26:50 compute-0 ceph-osd[87458]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 11 04:26:50 compute-0 ceph-osd[87458]: bdev(0x55660f1bec00 /var/lib/ceph/osd/ceph-0/block) close
Oct 11 04:26:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-d64da09fbc4fdd73fbb49dcb2cad4c9f24f5f8c0b9209dfcb6917f421ca7b8b4-merged.mount: Deactivated successfully.
Oct 11 04:26:50 compute-0 podman[87618]: 2025-10-11 04:26:50.241845698 +0000 UTC m=+0.225100482 container remove b9eef48e0981e6e9a2742eada447f05107701a1f9bfd007e991c0e94a302b6fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_hellman, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3)
Oct 11 04:26:50 compute-0 systemd[1]: libpod-conmon-b9eef48e0981e6e9a2742eada447f05107701a1f9bfd007e991c0e94a302b6fc.scope: Deactivated successfully.
Oct 11 04:26:50 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:50 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:50 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Oct 11 04:26:50 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:26:50 compute-0 ceph-mon[74243]: Deploying daemon osd.1 on compute-0
Oct 11 04:26:50 compute-0 ceph-mon[74243]: pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 04:26:50 compute-0 ceph-osd[87458]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Oct 11 04:26:50 compute-0 ceph-osd[87458]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Oct 11 04:26:50 compute-0 ceph-osd[87458]: bdev(0x55660f1bec00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 11 04:26:50 compute-0 ceph-osd[87458]: bdev(0x55660f1bec00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 11 04:26:50 compute-0 ceph-osd[87458]: bdev(0x55660f1bec00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 11 04:26:50 compute-0 ceph-osd[87458]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 11 04:26:50 compute-0 ceph-osd[87458]: bdev(0x55660f1bf400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 11 04:26:50 compute-0 ceph-osd[87458]: bdev(0x55660f1bf400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 11 04:26:50 compute-0 ceph-osd[87458]: bdev(0x55660f1bf400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 11 04:26:50 compute-0 ceph-osd[87458]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Oct 11 04:26:50 compute-0 ceph-osd[87458]: bluefs mount
Oct 11 04:26:50 compute-0 ceph-osd[87458]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: bluefs mount shared_bdev_used = 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: RocksDB version: 7.9.2
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Git sha 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Compile date 2025-05-06 23:30:25
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: DB SUMMARY
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: DB Session ID:  SDUJAC92PGWNQRZS2176
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: CURRENT file:  CURRENT
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: IDENTITY file:  IDENTITY
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                         Options.error_if_exists: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                       Options.create_if_missing: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                         Options.paranoid_checks: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                                     Options.env: 0x55660f18fc70
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                                Options.info_log: 0x55660e3828a0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.max_file_opening_threads: 16
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                              Options.statistics: (nil)
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                               Options.use_fsync: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                       Options.max_log_file_size: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                         Options.allow_fallocate: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                        Options.use_direct_reads: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.create_missing_column_families: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                              Options.db_log_dir: 
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                                 Options.wal_dir: db.wal
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                   Options.advise_random_on_open: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                    Options.write_buffer_manager: 0x55660f298460
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                            Options.rate_limiter: (nil)
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.unordered_write: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                               Options.row_cache: None
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                              Options.wal_filter: None
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:             Options.allow_ingest_behind: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:             Options.two_write_queues: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:             Options.manual_wal_flush: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:             Options.wal_compression: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:             Options.atomic_flush: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                 Options.log_readahead_size: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:             Options.allow_data_in_errors: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:             Options.db_host_id: __hostname__
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:             Options.max_background_jobs: 4
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:             Options.max_background_compactions: -1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:             Options.max_subcompactions: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                          Options.max_open_files: -1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                          Options.bytes_per_sync: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.max_background_flushes: -1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Compression algorithms supported:
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         kZSTD supported: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         kXpressCompression supported: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         kBZip2Compression supported: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         kZSTDNotFinalCompression supported: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         kLZ4Compression supported: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         kZlibCompression supported: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         kLZ4HCCompression supported: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         kSnappyCompression supported: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:        Options.compaction_filter: None
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55660e3822c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55660e36f1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.compression: LZ4
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:             Options.num_levels: 7
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:           Options.merge_operator: None
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:        Options.compaction_filter: None
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55660e3822c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55660e36f1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.compression: LZ4
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:             Options.num_levels: 7
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:           Options.merge_operator: None
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:        Options.compaction_filter: None
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55660e3822c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55660e36f1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.compression: LZ4
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:             Options.num_levels: 7
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:           Options.merge_operator: None
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:        Options.compaction_filter: None
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55660e3822c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55660e36f1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.compression: LZ4
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:             Options.num_levels: 7
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:           Options.merge_operator: None
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:        Options.compaction_filter: None
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55660e3822c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55660e36f1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.compression: LZ4
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:             Options.num_levels: 7
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:           Options.merge_operator: None
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:        Options.compaction_filter: None
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55660e3822c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55660e36f1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.compression: LZ4
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:             Options.num_levels: 7
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:           Options.merge_operator: None
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:        Options.compaction_filter: None
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55660e3822c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55660e36f1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.compression: LZ4
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:             Options.num_levels: 7
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:           Options.merge_operator: None
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:        Options.compaction_filter: None
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55660e382240)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55660e36f090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.compression: LZ4
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:             Options.num_levels: 7
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:           Options.merge_operator: None
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:        Options.compaction_filter: None
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55660e382240)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55660e36f090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.compression: LZ4
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:             Options.num_levels: 7
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:           Options.merge_operator: None
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:        Options.compaction_filter: None
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55660e382240)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55660e36f090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.compression: LZ4
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:             Options.num_levels: 7
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: a5814e94-8e66-467a-bcd3-995f1afe2e84
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760156810546806, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760156810547016, "job": 1, "event": "recovery_finished"}
Oct 11 04:26:50 compute-0 ceph-osd[87458]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Oct 11 04:26:50 compute-0 ceph-osd[87458]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Oct 11 04:26:50 compute-0 ceph-osd[87458]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Oct 11 04:26:50 compute-0 ceph-osd[87458]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: freelist init
Oct 11 04:26:50 compute-0 ceph-osd[87458]: freelist _read_cfg
Oct 11 04:26:50 compute-0 ceph-osd[87458]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Oct 11 04:26:50 compute-0 ceph-osd[87458]: bluefs umount
Oct 11 04:26:50 compute-0 ceph-osd[87458]: bdev(0x55660f1bf400 /var/lib/ceph/osd/ceph-0/block) close
Oct 11 04:26:50 compute-0 podman[87864]: 2025-10-11 04:26:50.614292061 +0000 UTC m=+0.051634408 container create 408fbafde4e2be351b1d8b975287d0ce6c81dcd315c425d26710b45396f2b0dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-osd-1-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:26:50 compute-0 systemd[1]: Started libpod-conmon-408fbafde4e2be351b1d8b975287d0ce6c81dcd315c425d26710b45396f2b0dd.scope.
Oct 11 04:26:50 compute-0 podman[87864]: 2025-10-11 04:26:50.58939118 +0000 UTC m=+0.026733607 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:26:50 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:26:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0740abe1367ad1713c152fc515f7a68b6c4ac9568950575791c88eb79459f22b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0740abe1367ad1713c152fc515f7a68b6c4ac9568950575791c88eb79459f22b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0740abe1367ad1713c152fc515f7a68b6c4ac9568950575791c88eb79459f22b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0740abe1367ad1713c152fc515f7a68b6c4ac9568950575791c88eb79459f22b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0740abe1367ad1713c152fc515f7a68b6c4ac9568950575791c88eb79459f22b/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:50 compute-0 podman[87864]: 2025-10-11 04:26:50.721561405 +0000 UTC m=+0.158903842 container init 408fbafde4e2be351b1d8b975287d0ce6c81dcd315c425d26710b45396f2b0dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-osd-1-activate-test, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 11 04:26:50 compute-0 podman[87864]: 2025-10-11 04:26:50.736437305 +0000 UTC m=+0.173779682 container start 408fbafde4e2be351b1d8b975287d0ce6c81dcd315c425d26710b45396f2b0dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-osd-1-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:26:50 compute-0 podman[87864]: 2025-10-11 04:26:50.740863476 +0000 UTC m=+0.178205853 container attach 408fbafde4e2be351b1d8b975287d0ce6c81dcd315c425d26710b45396f2b0dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-osd-1-activate-test, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 11 04:26:50 compute-0 ceph-osd[87458]: bdev(0x55660f1bf400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 11 04:26:50 compute-0 ceph-osd[87458]: bdev(0x55660f1bf400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 11 04:26:50 compute-0 ceph-osd[87458]: bdev(0x55660f1bf400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 11 04:26:50 compute-0 ceph-osd[87458]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Oct 11 04:26:50 compute-0 ceph-osd[87458]: bluefs mount
Oct 11 04:26:50 compute-0 ceph-osd[87458]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: bluefs mount shared_bdev_used = 4718592
Oct 11 04:26:50 compute-0 ceph-osd[87458]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: RocksDB version: 7.9.2
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Git sha 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Compile date 2025-05-06 23:30:25
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: DB SUMMARY
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: DB Session ID:  SDUJAC92PGWNQRZS2177
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: CURRENT file:  CURRENT
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: IDENTITY file:  IDENTITY
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                         Options.error_if_exists: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                       Options.create_if_missing: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                         Options.paranoid_checks: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                                     Options.env: 0x55660f340ee0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                                Options.info_log: 0x55660f18b680
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.max_file_opening_threads: 16
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                              Options.statistics: (nil)
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                               Options.use_fsync: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                       Options.max_log_file_size: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                         Options.allow_fallocate: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                        Options.use_direct_reads: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.create_missing_column_families: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                              Options.db_log_dir: 
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                                 Options.wal_dir: db.wal
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                   Options.advise_random_on_open: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                    Options.write_buffer_manager: 0x55660f2986e0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                            Options.rate_limiter: (nil)
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.unordered_write: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                               Options.row_cache: None
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                              Options.wal_filter: None
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:             Options.allow_ingest_behind: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:             Options.two_write_queues: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:             Options.manual_wal_flush: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:             Options.wal_compression: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:             Options.atomic_flush: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                 Options.log_readahead_size: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:             Options.allow_data_in_errors: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:             Options.db_host_id: __hostname__
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:             Options.max_background_jobs: 4
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:             Options.max_background_compactions: -1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:             Options.max_subcompactions: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                          Options.max_open_files: -1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                          Options.bytes_per_sync: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.max_background_flushes: -1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Compression algorithms supported:
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         kZSTD supported: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         kXpressCompression supported: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         kBZip2Compression supported: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         kZSTDNotFinalCompression supported: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         kLZ4Compression supported: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         kZlibCompression supported: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         kLZ4HCCompression supported: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         kSnappyCompression supported: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:        Options.compaction_filter: None
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55660e379060)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55660e36f4b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.compression: LZ4
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:             Options.num_levels: 7
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:           Options.merge_operator: None
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:        Options.compaction_filter: None
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55660e379060)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55660e36f4b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.compression: LZ4
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:             Options.num_levels: 7
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:           Options.merge_operator: None
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:        Options.compaction_filter: None
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55660e379060)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55660e36f4b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.compression: LZ4
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:             Options.num_levels: 7
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:           Options.merge_operator: None
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:        Options.compaction_filter: None
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55660e379060)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55660e36f4b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.compression: LZ4
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:             Options.num_levels: 7
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:           Options.merge_operator: None
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:        Options.compaction_filter: None
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55660e379060)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55660e36f4b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.compression: LZ4
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:             Options.num_levels: 7
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:           Options.merge_operator: None
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:        Options.compaction_filter: None
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55660e379060)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55660e36f4b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.compression: LZ4
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:             Options.num_levels: 7
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:           Options.merge_operator: None
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:        Options.compaction_filter: None
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55660e379060)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55660e36f4b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.compression: LZ4
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:             Options.num_levels: 7
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:           Options.merge_operator: None
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:        Options.compaction_filter: None
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55660f18b460)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55660e36f350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.compression: LZ4
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:             Options.num_levels: 7
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:           Options.merge_operator: None
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:        Options.compaction_filter: None
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55660f18b460)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55660e36f350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.compression: LZ4
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:             Options.num_levels: 7
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:           Options.merge_operator: None
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:        Options.compaction_filter: None
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55660f18b460)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55660e36f350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.compression: LZ4
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:             Options.num_levels: 7
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: a5814e94-8e66-467a-bcd3-995f1afe2e84
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760156810802986, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760156810809674, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760156810, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a5814e94-8e66-467a-bcd3-995f1afe2e84", "db_session_id": "SDUJAC92PGWNQRZS2177", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760156810813248, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760156810, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a5814e94-8e66-467a-bcd3-995f1afe2e84", "db_session_id": "SDUJAC92PGWNQRZS2177", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760156810816766, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760156810, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a5814e94-8e66-467a-bcd3-995f1afe2e84", "db_session_id": "SDUJAC92PGWNQRZS2177", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760156810818544, "job": 1, "event": "recovery_finished"}
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55660f34dc00
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: DB pointer 0x55660f281a00
Oct 11 04:26:50 compute-0 ceph-osd[87458]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct 11 04:26:50 compute-0 ceph-osd[87458]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Oct 11 04:26:50 compute-0 ceph-osd[87458]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 04:26:50 compute-0 ceph-osd[87458]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55660e36f4b0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55660e36f4b0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55660e36f4b0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55660e36f4b0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55660e36f4b0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55660e36f4b0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55660e36f4b0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55660e36f350#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55660e36f350#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55660e36f350#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55660e36f4b0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55660e36f4b0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
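[annotation] The stats dump above is RocksDB's standard "DUMPING STATS" report emitted at DB open, repeated once per column family (default, m-*, p-*, O-*, L, P are BlueStore's sharded column families). The identical BinnedLRUCache lines recur because the families share the same block caches, and the tiny "portion" percentages are just entry size over cache capacity. A quick check of that arithmetic, assuming the rounded "0.33 KB" FilterBlock total is about 336 bytes and that RocksDB's "460.80 MB" means MiB (the exact byte counts are not logged):

```python
# Sanity-check the FilterBlock "portion" from the block-cache entry stats above.
# Assumptions (not logged verbatim): ~336 bytes across the three filter blocks,
# capacity printed in MiB.
capacity_bytes = 460.80 * 2**20   # block cache capacity, 460.80 MiB
filter_bytes = 336                # three filter blocks, ~0.33 KB total

portion_pct = filter_bytes / capacity_bytes * 100
print(f"{portion_pct:.4e} %")     # ~6.954e-05 %, consistent with the logged 6.95388e-05%
```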
Oct 11 04:26:50 compute-0 ceph-osd[87458]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Oct 11 04:26:50 compute-0 ceph-osd[87458]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Oct 11 04:26:50 compute-0 ceph-osd[87458]: _get_class not permitted to load lua
Oct 11 04:26:50 compute-0 ceph-osd[87458]: _get_class not permitted to load sdk
Oct 11 04:26:50 compute-0 ceph-osd[87458]: _get_class not permitted to load test_remote_reads
Oct 11 04:26:50 compute-0 ceph-osd[87458]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Oct 11 04:26:50 compute-0 ceph-osd[87458]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Oct 11 04:26:50 compute-0 ceph-osd[87458]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Oct 11 04:26:50 compute-0 ceph-osd[87458]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Oct 11 04:26:50 compute-0 ceph-osd[87458]: osd.0 0 load_pgs
Oct 11 04:26:50 compute-0 ceph-osd[87458]: osd.0 0 load_pgs opened 0 pgs
Oct 11 04:26:50 compute-0 ceph-osd[87458]: osd.0 0 log_to_monitors true
Oct 11 04:26:50 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-osd-0[87454]: 2025-10-11T04:26:50.860+0000 7f42c54f8740 -1 osd.0 0 log_to_monitors true
Oct 11 04:26:50 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0) v1
Oct 11 04:26:50 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3630572390,v1:192.168.122.100:6803/3630572390]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
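[annotation] The audit line above shows the raw mon_command JSON the OSD sends at startup to register its device class; the same payload is roughly what the CLI `ceph osd crush set-device-class hdd 0` would submit. A small sketch, for illustration only, that rebuilds the payload exactly as logged:

```python
import json

# Reconstruct the mon_command payload from the audit log line above. In a live
# cluster the rados Python binding's mon_command() could submit it, but building
# the JSON here is only to make the logged structure explicit.
cmd = {"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}
print(json.dumps(cmd))
```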
Oct 11 04:26:51 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-osd-1-activate-test[87880]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Oct 11 04:26:51 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-osd-1-activate-test[87880]:                             [--no-systemd] [--no-tmpfs]
Oct 11 04:26:51 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-osd-1-activate-test[87880]: ceph-volume activate: error: unrecognized arguments: --bad-option
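[annotation] The `*-osd-1-activate-test` container above deliberately passes an invalid flag to `ceph-volume activate`; the argparse usage text plus "unrecognized arguments: --bad-option" indicates the subcommand exists in the image. Presumably this is a capability probe used to choose the activation path (an assumption, not something the log states). A rough illustration of that kind of stderr-based probe, not cephadm's actual code:

```python
# Hypothetical capability probe: distinguish "subcommand exists but rejected the
# flag" (argparse 'unrecognized arguments') from "subcommand unknown"
# ('invalid choice'). Input string copied from the log line above.
def subcommand_supported(stderr: str) -> bool:
    return "unrecognized arguments" in stderr and "invalid choice" not in stderr

print(subcommand_supported(
    "ceph-volume activate: error: unrecognized arguments: --bad-option"
))  # True -> the bare `activate` subcommand is available
```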
Oct 11 04:26:51 compute-0 systemd[1]: libpod-408fbafde4e2be351b1d8b975287d0ce6c81dcd315c425d26710b45396f2b0dd.scope: Deactivated successfully.
Oct 11 04:26:51 compute-0 podman[87864]: 2025-10-11 04:26:51.350618913 +0000 UTC m=+0.787961290 container died 408fbafde4e2be351b1d8b975287d0ce6c81dcd315c425d26710b45396f2b0dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-osd-1-activate-test, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:26:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-0740abe1367ad1713c152fc515f7a68b6c4ac9568950575791c88eb79459f22b-merged.mount: Deactivated successfully.
Oct 11 04:26:51 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Oct 11 04:26:51 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 11 04:26:51 compute-0 podman[87864]: 2025-10-11 04:26:51.432524384 +0000 UTC m=+0.869866721 container remove 408fbafde4e2be351b1d8b975287d0ce6c81dcd315c425d26710b45396f2b0dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-osd-1-activate-test, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 11 04:26:51 compute-0 ceph-mon[74243]: from='osd.0 [v2:192.168.122.100:6802/3630572390,v1:192.168.122.100:6803/3630572390]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Oct 11 04:26:51 compute-0 systemd[1]: libpod-conmon-408fbafde4e2be351b1d8b975287d0ce6c81dcd315c425d26710b45396f2b0dd.scope: Deactivated successfully.
Oct 11 04:26:51 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3630572390,v1:192.168.122.100:6803/3630572390]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Oct 11 04:26:51 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e7 e7: 3 total, 0 up, 3 in
Oct 11 04:26:51 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e7: 3 total, 0 up, 3 in
Oct 11 04:26:51 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 11 04:26:51 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 11 04:26:51 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Oct 11 04:26:51 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3630572390,v1:192.168.122.100:6803/3630572390]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct 11 04:26:51 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e7 create-or-move crush item name 'osd.0' initial_weight 0.0195 at location {host=compute-0,root=default}
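[annotation] CRUSH item weights are conventionally the device capacity in TiB, so the `initial_weight 0.0195` reported above corresponds to roughly a 20 GiB OSD (plausible for a small test device; the actual size is not in the log). A back-of-the-envelope check:

```python
# CRUSH weight ~ capacity in TiB, so convert the logged initial_weight to GiB.
weight_tib = 0.0195
size_gib = weight_tib * 1024
print(f"{size_gib:.1f} GiB")   # ~20.0 GiB
```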
Oct 11 04:26:51 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 11 04:26:51 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 11 04:26:51 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 11 04:26:51 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 11 04:26:51 compute-0 ceph-mgr[74542]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 11 04:26:51 compute-0 ceph-mgr[74542]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 11 04:26:51 compute-0 ceph-mgr[74542]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 11 04:26:51 compute-0 systemd[1]: Reloading.
Oct 11 04:26:51 compute-0 systemd-rc-local-generator[88153]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:26:51 compute-0 systemd-sysv-generator[88160]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:26:51 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Oct 11 04:26:51 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Oct 11 04:26:52 compute-0 systemd[1]: Reloading.
Oct 11 04:26:52 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 04:26:52 compute-0 systemd-sysv-generator[88199]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:26:52 compute-0 systemd-rc-local-generator[88196]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:26:52 compute-0 systemd[1]: Starting Ceph osd.1 for 166d0489-2ae7-59eb-961c-c1b5cda4b45a...
Oct 11 04:26:52 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Oct 11 04:26:52 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 11 04:26:52 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3630572390,v1:192.168.122.100:6803/3630572390]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Oct 11 04:26:52 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e8 e8: 3 total, 0 up, 3 in
Oct 11 04:26:52 compute-0 ceph-osd[87458]: osd.0 0 done with init, starting boot process
Oct 11 04:26:52 compute-0 ceph-osd[87458]: osd.0 0 start_boot
Oct 11 04:26:52 compute-0 ceph-osd[87458]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Oct 11 04:26:52 compute-0 ceph-osd[87458]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Oct 11 04:26:52 compute-0 ceph-osd[87458]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Oct 11 04:26:52 compute-0 ceph-osd[87458]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Oct 11 04:26:52 compute-0 ceph-osd[87458]: osd.0 0  bench count 12288000 bsize 4 KiB
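[annotation] Right after the mClock option overrides, the "bench count 12288000 bsize 4 KiB" line appears to be the OSD's startup self-benchmark, whose result feeds the mClock IOPS capacity estimate. With the logged numbers that benchmark issues 3000 writes:

```python
# The startup bench writes `count` bytes in `bsize`-sized blocks (numbers taken
# from the log line above), i.e. 3000 individual 4 KiB writes.
count_bytes = 12_288_000
bsize_bytes = 4 * 1024
print(count_bytes // bsize_bytes)   # 3000 I/Os
```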
Oct 11 04:26:52 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e8: 3 total, 0 up, 3 in
Oct 11 04:26:52 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 11 04:26:52 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 11 04:26:52 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 11 04:26:52 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 11 04:26:52 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 11 04:26:52 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 11 04:26:52 compute-0 ceph-mgr[74542]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 11 04:26:52 compute-0 ceph-mgr[74542]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 11 04:26:52 compute-0 ceph-mgr[74542]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 11 04:26:52 compute-0 ceph-mon[74243]: from='osd.0 [v2:192.168.122.100:6802/3630572390,v1:192.168.122.100:6803/3630572390]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Oct 11 04:26:52 compute-0 ceph-mon[74243]: osdmap e7: 3 total, 0 up, 3 in
Oct 11 04:26:52 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 11 04:26:52 compute-0 ceph-mon[74243]: from='osd.0 [v2:192.168.122.100:6802/3630572390,v1:192.168.122.100:6803/3630572390]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct 11 04:26:52 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 11 04:26:52 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 11 04:26:52 compute-0 ceph-mon[74243]: pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 04:26:52 compute-0 ceph-mgr[74542]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3630572390; not ready for session (expect reconnect)
Oct 11 04:26:52 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 11 04:26:52 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 11 04:26:52 compute-0 ceph-mgr[74542]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 11 04:26:52 compute-0 podman[88256]: 2025-10-11 04:26:52.649042846 +0000 UTC m=+0.051916205 container create 220075492b5c8dd6f282111bfeba329d7eba32a5df18ada01be3c0612f55e5d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-osd-1-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 11 04:26:52 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:26:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0df78c3cdd5701ccd8bcd6a8684f67d11e98e464c57c05aa63b844eeb7e0b065/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:52 compute-0 podman[88256]: 2025-10-11 04:26:52.619760816 +0000 UTC m=+0.022634285 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:26:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0df78c3cdd5701ccd8bcd6a8684f67d11e98e464c57c05aa63b844eeb7e0b065/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0df78c3cdd5701ccd8bcd6a8684f67d11e98e464c57c05aa63b844eeb7e0b065/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0df78c3cdd5701ccd8bcd6a8684f67d11e98e464c57c05aa63b844eeb7e0b065/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0df78c3cdd5701ccd8bcd6a8684f67d11e98e464c57c05aa63b844eeb7e0b065/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:52 compute-0 podman[88256]: 2025-10-11 04:26:52.746990567 +0000 UTC m=+0.149863936 container init 220075492b5c8dd6f282111bfeba329d7eba32a5df18ada01be3c0612f55e5d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-osd-1-activate, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:26:52 compute-0 podman[88256]: 2025-10-11 04:26:52.758391802 +0000 UTC m=+0.161265171 container start 220075492b5c8dd6f282111bfeba329d7eba32a5df18ada01be3c0612f55e5d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-osd-1-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 11 04:26:52 compute-0 podman[88256]: 2025-10-11 04:26:52.766881163 +0000 UTC m=+0.169754542 container attach 220075492b5c8dd6f282111bfeba329d7eba32a5df18ada01be3c0612f55e5d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-osd-1-activate, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 11 04:26:53 compute-0 ceph-mgr[74542]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3630572390; not ready for session (expect reconnect)
Oct 11 04:26:53 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 11 04:26:53 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 11 04:26:53 compute-0 ceph-mgr[74542]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 11 04:26:53 compute-0 ceph-mon[74243]: from='osd.0 [v2:192.168.122.100:6802/3630572390,v1:192.168.122.100:6803/3630572390]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Oct 11 04:26:53 compute-0 ceph-mon[74243]: osdmap e8: 3 total, 0 up, 3 in
Oct 11 04:26:53 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 11 04:26:53 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 11 04:26:53 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 11 04:26:53 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 11 04:26:53 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-osd-1-activate[88271]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct 11 04:26:53 compute-0 bash[88256]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct 11 04:26:53 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-osd-1-activate[88271]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1
Oct 11 04:26:53 compute-0 bash[88256]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1
Oct 11 04:26:53 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-osd-1-activate[88271]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1
Oct 11 04:26:53 compute-0 bash[88256]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1
Oct 11 04:26:53 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-osd-1-activate[88271]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Oct 11 04:26:53 compute-0 bash[88256]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Oct 11 04:26:53 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-osd-1-activate[88271]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Oct 11 04:26:53 compute-0 bash[88256]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Oct 11 04:26:53 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-osd-1-activate[88271]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct 11 04:26:53 compute-0 bash[88256]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct 11 04:26:53 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-osd-1-activate[88271]: --> ceph-volume raw activate successful for osd ID: 1
Oct 11 04:26:53 compute-0 bash[88256]: --> ceph-volume raw activate successful for osd ID: 1
Oct 11 04:26:53 compute-0 systemd[1]: libpod-220075492b5c8dd6f282111bfeba329d7eba32a5df18ada01be3c0612f55e5d1.scope: Deactivated successfully.
Oct 11 04:26:53 compute-0 systemd[1]: libpod-220075492b5c8dd6f282111bfeba329d7eba32a5df18ada01be3c0612f55e5d1.scope: Consumed 1.155s CPU time.
Oct 11 04:26:53 compute-0 podman[88256]: 2025-10-11 04:26:53.901447002 +0000 UTC m=+1.304320401 container died 220075492b5c8dd6f282111bfeba329d7eba32a5df18ada01be3c0612f55e5d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-osd-1-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:26:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-0df78c3cdd5701ccd8bcd6a8684f67d11e98e464c57c05aa63b844eeb7e0b065-merged.mount: Deactivated successfully.
Oct 11 04:26:54 compute-0 podman[88256]: 2025-10-11 04:26:54.006446829 +0000 UTC m=+1.409320198 container remove 220075492b5c8dd6f282111bfeba329d7eba32a5df18ada01be3c0612f55e5d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-osd-1-activate, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 11 04:26:54 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 04:26:54 compute-0 podman[88448]: 2025-10-11 04:26:54.307160745 +0000 UTC m=+0.062361516 container create f8a086cc51d42a10b91c624485bb78f6a25eba9fc3f76bfd6e0166cb4162ee1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-osd-1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 11 04:26:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba4e5c8673078c41048a3903808c500ad28fc91ccaaacaaa46ca699e518a3de5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba4e5c8673078c41048a3903808c500ad28fc91ccaaacaaa46ca699e518a3de5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba4e5c8673078c41048a3903808c500ad28fc91ccaaacaaa46ca699e518a3de5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba4e5c8673078c41048a3903808c500ad28fc91ccaaacaaa46ca699e518a3de5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba4e5c8673078c41048a3903808c500ad28fc91ccaaacaaa46ca699e518a3de5/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:54 compute-0 podman[88448]: 2025-10-11 04:26:54.273399233 +0000 UTC m=+0.028600004 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:26:54 compute-0 podman[88448]: 2025-10-11 04:26:54.389004065 +0000 UTC m=+0.144204886 container init f8a086cc51d42a10b91c624485bb78f6a25eba9fc3f76bfd6e0166cb4162ee1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-osd-1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:26:54 compute-0 podman[88448]: 2025-10-11 04:26:54.399292231 +0000 UTC m=+0.154493002 container start f8a086cc51d42a10b91c624485bb78f6a25eba9fc3f76bfd6e0166cb4162ee1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-osd-1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:26:54 compute-0 bash[88448]: f8a086cc51d42a10b91c624485bb78f6a25eba9fc3f76bfd6e0166cb4162ee1e
Oct 11 04:26:54 compute-0 systemd[1]: Started Ceph osd.1 for 166d0489-2ae7-59eb-961c-c1b5cda4b45a.
Oct 11 04:26:54 compute-0 ceph-osd[88467]: set uid:gid to 167:167 (ceph:ceph)
Oct 11 04:26:54 compute-0 ceph-osd[88467]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Oct 11 04:26:54 compute-0 ceph-osd[88467]: pidfile_write: ignore empty --pid-file
Oct 11 04:26:54 compute-0 ceph-osd[88467]: bdev(0x564464023800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 11 04:26:54 compute-0 ceph-osd[88467]: bdev(0x564464023800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 11 04:26:54 compute-0 ceph-osd[88467]: bdev(0x564464023800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 11 04:26:54 compute-0 ceph-osd[88467]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 11 04:26:54 compute-0 ceph-osd[88467]: bdev(0x564464e5b800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 11 04:26:54 compute-0 ceph-osd[88467]: bdev(0x564464e5b800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 11 04:26:54 compute-0 ceph-osd[88467]: bdev(0x564464e5b800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 11 04:26:54 compute-0 ceph-osd[88467]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Oct 11 04:26:54 compute-0 ceph-osd[88467]: bdev(0x564464e5b800 /var/lib/ceph/osd/ceph-1/block) close
Oct 11 04:26:54 compute-0 ceph-mgr[74542]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3630572390; not ready for session (expect reconnect)
Oct 11 04:26:54 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 11 04:26:54 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 11 04:26:54 compute-0 ceph-mgr[74542]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 11 04:26:54 compute-0 sudo[87546]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:54 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:26:54 compute-0 ceph-mon[74243]: purged_snaps scrub starts
Oct 11 04:26:54 compute-0 ceph-mon[74243]: purged_snaps scrub ok
Oct 11 04:26:54 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 11 04:26:54 compute-0 ceph-mon[74243]: pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 04:26:54 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 11 04:26:54 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:54 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:26:54 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:54 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0) v1
Oct 11 04:26:54 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Oct 11 04:26:54 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:26:54 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:26:54 compute-0 ceph-mgr[74542]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-0
Oct 11 04:26:54 compute-0 ceph-mgr[74542]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-0
Oct 11 04:26:54 compute-0 sudo[88480]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:26:54 compute-0 sudo[88480]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:54 compute-0 sudo[88480]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:54 compute-0 sudo[88505]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:26:54 compute-0 sudo[88505]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:54 compute-0 sudo[88505]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:54 compute-0 ceph-osd[88467]: bdev(0x564464023800 /var/lib/ceph/osd/ceph-1/block) close
Oct 11 04:26:54 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e8 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:26:54 compute-0 sudo[88530]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:26:54 compute-0 sudo[88530]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:54 compute-0 sudo[88530]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:54 compute-0 sudo[88557]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a
Oct 11 04:26:54 compute-0 sudo[88557]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:54 compute-0 ceph-osd[88467]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Oct 11 04:26:54 compute-0 ceph-osd[88467]: load: jerasure load: lrc 
Oct 11 04:26:54 compute-0 ceph-osd[88467]: bdev(0x564464edcc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 11 04:26:54 compute-0 ceph-osd[88467]: bdev(0x564464edcc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 11 04:26:54 compute-0 ceph-osd[88467]: bdev(0x564464edcc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 11 04:26:54 compute-0 ceph-osd[88467]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 11 04:26:54 compute-0 ceph-osd[88467]: bdev(0x564464edcc00 /var/lib/ceph/osd/ceph-1/block) close
Oct 11 04:26:55 compute-0 podman[88626]: 2025-10-11 04:26:55.19086771 +0000 UTC m=+0.037826224 container create b66005d04f9455517917bec7d23711dc22822f9198caa342c2d6448075ee9557 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_northcutt, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 11 04:26:55 compute-0 systemd[1]: Started libpod-conmon-b66005d04f9455517917bec7d23711dc22822f9198caa342c2d6448075ee9557.scope.
Oct 11 04:26:55 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:26:55 compute-0 ceph-osd[88467]: bdev(0x564464edcc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 11 04:26:55 compute-0 ceph-osd[88467]: bdev(0x564464edcc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 11 04:26:55 compute-0 ceph-osd[88467]: bdev(0x564464edcc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 11 04:26:55 compute-0 ceph-osd[88467]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 11 04:26:55 compute-0 ceph-osd[88467]: bdev(0x564464edcc00 /var/lib/ceph/osd/ceph-1/block) close
Oct 11 04:26:55 compute-0 podman[88626]: 2025-10-11 04:26:55.173653471 +0000 UTC m=+0.020612015 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:26:55 compute-0 podman[88626]: 2025-10-11 04:26:55.280831753 +0000 UTC m=+0.127790277 container init b66005d04f9455517917bec7d23711dc22822f9198caa342c2d6448075ee9557 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_northcutt, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 11 04:26:55 compute-0 podman[88626]: 2025-10-11 04:26:55.287345275 +0000 UTC m=+0.134303779 container start b66005d04f9455517917bec7d23711dc22822f9198caa342c2d6448075ee9557 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_northcutt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:26:55 compute-0 podman[88626]: 2025-10-11 04:26:55.292402201 +0000 UTC m=+0.139360705 container attach b66005d04f9455517917bec7d23711dc22822f9198caa342c2d6448075ee9557 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_northcutt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:26:55 compute-0 boring_northcutt[88642]: 167 167
Oct 11 04:26:55 compute-0 podman[88626]: 2025-10-11 04:26:55.293771945 +0000 UTC m=+0.140730449 container died b66005d04f9455517917bec7d23711dc22822f9198caa342c2d6448075ee9557 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_northcutt, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:26:55 compute-0 systemd[1]: libpod-b66005d04f9455517917bec7d23711dc22822f9198caa342c2d6448075ee9557.scope: Deactivated successfully.
Oct 11 04:26:55 compute-0 ceph-osd[87458]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 32.904 iops: 8423.300 elapsed_sec: 0.356
Oct 11 04:26:55 compute-0 ceph-osd[87458]: log_channel(cluster) log [WRN] : OSD bench result of 8423.299601 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct 11 04:26:55 compute-0 ceph-osd[87458]: osd.0 0 waiting for initial osdmap
Oct 11 04:26:55 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-osd-0[87454]: 2025-10-11T04:26:55.294+0000 7f42c1478640 -1 osd.0 0 waiting for initial osdmap
Oct 11 04:26:55 compute-0 ceph-osd[87458]: osd.0 8 crush map has features 288514050185494528, adjusting msgr requires for clients
Oct 11 04:26:55 compute-0 ceph-osd[87458]: osd.0 8 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Oct 11 04:26:55 compute-0 ceph-osd[87458]: osd.0 8 crush map has features 3314932999778484224, adjusting msgr requires for osds
Oct 11 04:26:55 compute-0 ceph-osd[87458]: osd.0 8 check_osdmap_features require_osd_release unknown -> reef
Oct 11 04:26:55 compute-0 ceph-osd[87458]: osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct 11 04:26:55 compute-0 ceph-osd[87458]: osd.0 8 set_numa_affinity not setting numa affinity
Oct 11 04:26:55 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-osd-0[87454]: 2025-10-11T04:26:55.317+0000 7f42bcaa0640 -1 osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct 11 04:26:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-bf06a12694843b52cd9a87b96f65796b7dda5149d9559c18ad8c7fbecfbb1ea6-merged.mount: Deactivated successfully.
Oct 11 04:26:55 compute-0 ceph-osd[87458]: osd.0 8 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial
Oct 11 04:26:55 compute-0 podman[88626]: 2025-10-11 04:26:55.336823438 +0000 UTC m=+0.183781942 container remove b66005d04f9455517917bec7d23711dc22822f9198caa342c2d6448075ee9557 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_northcutt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:26:55 compute-0 systemd[1]: libpod-conmon-b66005d04f9455517917bec7d23711dc22822f9198caa342c2d6448075ee9557.scope: Deactivated successfully.
Oct 11 04:26:55 compute-0 ceph-mgr[74542]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3630572390; not ready for session (expect reconnect)
Oct 11 04:26:55 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 11 04:26:55 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 11 04:26:55 compute-0 ceph-mgr[74542]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 11 04:26:55 compute-0 ceph-osd[88467]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Oct 11 04:26:55 compute-0 ceph-osd[88467]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Oct 11 04:26:55 compute-0 ceph-osd[88467]: bdev(0x564464edcc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 11 04:26:55 compute-0 ceph-osd[88467]: bdev(0x564464edcc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 11 04:26:55 compute-0 ceph-osd[88467]: bdev(0x564464edcc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 11 04:26:55 compute-0 ceph-osd[88467]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 11 04:26:55 compute-0 ceph-osd[88467]: bdev(0x564464edd400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 11 04:26:55 compute-0 ceph-osd[88467]: bdev(0x564464edd400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 11 04:26:55 compute-0 ceph-osd[88467]: bdev(0x564464edd400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 11 04:26:55 compute-0 ceph-osd[88467]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Oct 11 04:26:55 compute-0 ceph-osd[88467]: bluefs mount
Oct 11 04:26:55 compute-0 ceph-osd[88467]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: bluefs mount shared_bdev_used = 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Oct 11 04:26:55 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:55 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:55 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Oct 11 04:26:55 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:26:55 compute-0 ceph-mon[74243]: Deploying daemon osd.2 on compute-0
Oct 11 04:26:55 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: RocksDB version: 7.9.2
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Git sha 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Compile date 2025-05-06 23:30:25
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: DB SUMMARY
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: DB Session ID:  FP8HWSKEE2V0B36VL1I2
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: CURRENT file:  CURRENT
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: IDENTITY file:  IDENTITY
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                         Options.error_if_exists: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                       Options.create_if_missing: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                         Options.paranoid_checks: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                                     Options.env: 0x564464eadc70
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                                Options.info_log: 0x5644640aa8a0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.max_file_opening_threads: 16
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                              Options.statistics: (nil)
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                               Options.use_fsync: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                       Options.max_log_file_size: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                         Options.allow_fallocate: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                        Options.use_direct_reads: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.create_missing_column_families: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                              Options.db_log_dir: 
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                                 Options.wal_dir: db.wal
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                   Options.advise_random_on_open: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                    Options.write_buffer_manager: 0x564464fb6460
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                            Options.rate_limiter: (nil)
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.unordered_write: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                               Options.row_cache: None
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                              Options.wal_filter: None
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:             Options.allow_ingest_behind: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:             Options.two_write_queues: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:             Options.manual_wal_flush: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:             Options.wal_compression: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:             Options.atomic_flush: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                 Options.log_readahead_size: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:             Options.allow_data_in_errors: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:             Options.db_host_id: __hostname__
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:             Options.max_background_jobs: 4
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:             Options.max_background_compactions: -1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:             Options.max_subcompactions: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                          Options.max_open_files: -1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                          Options.bytes_per_sync: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.max_background_flushes: -1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Compression algorithms supported:
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         kZSTD supported: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         kXpressCompression supported: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         kBZip2Compression supported: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         kZSTDNotFinalCompression supported: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         kLZ4Compression supported: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         kZlibCompression supported: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         kLZ4HCCompression supported: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         kSnappyCompression supported: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:        Options.compaction_filter: None
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5644640aa2c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5644640971f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.compression: LZ4
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:             Options.num_levels: 7
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:           Options.merge_operator: None
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:        Options.compaction_filter: None
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5644640aa2c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5644640971f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.compression: LZ4
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:             Options.num_levels: 7
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:           Options.merge_operator: None
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:        Options.compaction_filter: None
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5644640aa2c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5644640971f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.compression: LZ4
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:             Options.num_levels: 7
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:           Options.merge_operator: None
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:        Options.compaction_filter: None
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5644640aa2c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5644640971f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.compression: LZ4
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:             Options.num_levels: 7
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:           Options.merge_operator: None
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:        Options.compaction_filter: None
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5644640aa2c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5644640971f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.compression: LZ4
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:             Options.num_levels: 7
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:           Options.merge_operator: None
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:        Options.compaction_filter: None
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5644640aa2c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5644640971f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.compression: LZ4
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:             Options.num_levels: 7
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:           Options.merge_operator: None
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:        Options.compaction_filter: None
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5644640aa2c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5644640971f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:26:55 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Oct 11 04:26:55 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.compression: LZ4
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:             Options.num_levels: 7
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:           Options.merge_operator: None
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:        Options.compaction_filter: None
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5644640aa240)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x564464097090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.compression: LZ4
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:             Options.num_levels: 7
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:           Options.merge_operator: None
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:        Options.compaction_filter: None
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5644640aa240)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x564464097090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.compression: LZ4
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:             Options.num_levels: 7
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:           Options.merge_operator: None
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:        Options.compaction_filter: None
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5644640aa240)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x564464097090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.compression: LZ4
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:             Options.num_levels: 7
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:26:55 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e9 e9: 3 total, 1 up, 3 in
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: a747590a-f839-416d-a8db-984c70f7eca4
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760156815569550, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760156815569793, "job": 1, "event": "recovery_finished"}
Oct 11 04:26:55 compute-0 ceph-osd[88467]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Oct 11 04:26:55 compute-0 ceph-osd[88467]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Oct 11 04:26:55 compute-0 ceph-osd[88467]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Oct 11 04:26:55 compute-0 ceph-osd[88467]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: freelist init
Oct 11 04:26:55 compute-0 ceph-osd[88467]: freelist _read_cfg
Oct 11 04:26:55 compute-0 ceph-osd[88467]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Oct 11 04:26:55 compute-0 ceph-osd[88467]: bluefs umount
Oct 11 04:26:55 compute-0 ceph-osd[88467]: bdev(0x564464edd400 /var/lib/ceph/osd/ceph-1/block) close
Oct 11 04:26:55 compute-0 ceph-mon[74243]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.100:6802/3630572390,v1:192.168.122.100:6803/3630572390] boot
Oct 11 04:26:55 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e9: 3 total, 1 up, 3 in
Oct 11 04:26:55 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 11 04:26:55 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 11 04:26:55 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 11 04:26:55 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 11 04:26:55 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 11 04:26:55 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 11 04:26:55 compute-0 ceph-mgr[74542]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 11 04:26:55 compute-0 ceph-mgr[74542]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 11 04:26:55 compute-0 ceph-osd[87458]: osd.0 9 state: booting -> active
Oct 11 04:26:55 compute-0 podman[88691]: 2025-10-11 04:26:55.611794422 +0000 UTC m=+0.049095475 container create fcd69362de0ae7d578bb734cb60decfe4203ea32d688f32e816f96aa57dca88e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-osd-2-activate-test, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:26:55 compute-0 systemd[1]: Started libpod-conmon-fcd69362de0ae7d578bb734cb60decfe4203ea32d688f32e816f96aa57dca88e.scope.
Oct 11 04:26:55 compute-0 podman[88691]: 2025-10-11 04:26:55.58564309 +0000 UTC m=+0.022944173 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:26:55 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:26:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3428eb02bdd0e6d0d7b81e65025ac3de781a6f6fdaa5a6c3a28bde6502dce9a8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3428eb02bdd0e6d0d7b81e65025ac3de781a6f6fdaa5a6c3a28bde6502dce9a8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3428eb02bdd0e6d0d7b81e65025ac3de781a6f6fdaa5a6c3a28bde6502dce9a8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3428eb02bdd0e6d0d7b81e65025ac3de781a6f6fdaa5a6c3a28bde6502dce9a8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3428eb02bdd0e6d0d7b81e65025ac3de781a6f6fdaa5a6c3a28bde6502dce9a8/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:55 compute-0 podman[88691]: 2025-10-11 04:26:55.710767179 +0000 UTC m=+0.148068252 container init fcd69362de0ae7d578bb734cb60decfe4203ea32d688f32e816f96aa57dca88e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-osd-2-activate-test, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 11 04:26:55 compute-0 podman[88691]: 2025-10-11 04:26:55.722906521 +0000 UTC m=+0.160207574 container start fcd69362de0ae7d578bb734cb60decfe4203ea32d688f32e816f96aa57dca88e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-osd-2-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:26:55 compute-0 podman[88691]: 2025-10-11 04:26:55.727176648 +0000 UTC m=+0.164477701 container attach fcd69362de0ae7d578bb734cb60decfe4203ea32d688f32e816f96aa57dca88e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-osd-2-activate-test, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:26:55 compute-0 ceph-osd[88467]: bdev(0x564464edd400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 11 04:26:55 compute-0 ceph-osd[88467]: bdev(0x564464edd400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 11 04:26:55 compute-0 ceph-osd[88467]: bdev(0x564464edd400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 11 04:26:55 compute-0 ceph-osd[88467]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Oct 11 04:26:55 compute-0 ceph-osd[88467]: bluefs mount
Oct 11 04:26:55 compute-0 ceph-osd[88467]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: bluefs mount shared_bdev_used = 4718592
Oct 11 04:26:55 compute-0 ceph-osd[88467]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: RocksDB version: 7.9.2
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Git sha 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Compile date 2025-05-06 23:30:25
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: DB SUMMARY
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: DB Session ID:  FP8HWSKEE2V0B36VL1I3
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: CURRENT file:  CURRENT
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: IDENTITY file:  IDENTITY
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                         Options.error_if_exists: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                       Options.create_if_missing: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                         Options.paranoid_checks: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                                     Options.env: 0x56446505e460
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                                Options.info_log: 0x5644640aa300
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.max_file_opening_threads: 16
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                              Options.statistics: (nil)
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                               Options.use_fsync: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                       Options.max_log_file_size: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                         Options.allow_fallocate: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                        Options.use_direct_reads: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.create_missing_column_families: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                              Options.db_log_dir: 
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                                 Options.wal_dir: db.wal
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                   Options.advise_random_on_open: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                    Options.write_buffer_manager: 0x564464fb6460
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                            Options.rate_limiter: (nil)
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.unordered_write: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                               Options.row_cache: None
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                              Options.wal_filter: None
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:             Options.allow_ingest_behind: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:             Options.two_write_queues: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:             Options.manual_wal_flush: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:             Options.wal_compression: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:             Options.atomic_flush: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                 Options.log_readahead_size: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:             Options.allow_data_in_errors: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:             Options.db_host_id: __hostname__
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:             Options.max_background_jobs: 4
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:             Options.max_background_compactions: -1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:             Options.max_subcompactions: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                          Options.max_open_files: -1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                          Options.bytes_per_sync: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.max_background_flushes: -1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Compression algorithms supported:
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         kZSTD supported: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         kXpressCompression supported: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         kBZip2Compression supported: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         kZSTDNotFinalCompression supported: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         kLZ4Compression supported: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         kZlibCompression supported: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         kLZ4HCCompression supported: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         kSnappyCompression supported: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:        Options.compaction_filter: None
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5644640aaa20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5644640971f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.compression: LZ4
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:             Options.num_levels: 7
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:           Options.merge_operator: None
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:        Options.compaction_filter: None
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5644640aaa20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5644640971f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.compression: LZ4
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:             Options.num_levels: 7
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:           Options.merge_operator: None
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:        Options.compaction_filter: None
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5644640aaa20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5644640971f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.compression: LZ4
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:             Options.num_levels: 7
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:           Options.merge_operator: None
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:        Options.compaction_filter: None
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5644640aaa20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5644640971f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.compression: LZ4
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:             Options.num_levels: 7
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:           Options.merge_operator: None
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:        Options.compaction_filter: None
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5644640aaa20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5644640971f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.compression: LZ4
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:             Options.num_levels: 7
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:           Options.merge_operator: None
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:        Options.compaction_filter: None
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5644640aaa20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5644640971f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.compression: LZ4
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:             Options.num_levels: 7
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:           Options.merge_operator: None
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:        Options.compaction_filter: None
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5644640aaa20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5644640971f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.compression: LZ4
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:             Options.num_levels: 7
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:           Options.merge_operator: None
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:        Options.compaction_filter: None
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5644640aa380)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x564464097090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.compression: LZ4
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:             Options.num_levels: 7
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:           Options.merge_operator: None
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:        Options.compaction_filter: None
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5644640aa380)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x564464097090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.compression: LZ4
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:             Options.num_levels: 7
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:           Options.merge_operator: None
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:        Options.compaction_filter: None
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5644640aa380)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x564464097090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.compression: LZ4
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:             Options.num_levels: 7
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: a747590a-f839-416d-a8db-984c70f7eca4
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760156815857650, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760156815863652, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760156815, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a747590a-f839-416d-a8db-984c70f7eca4", "db_session_id": "FP8HWSKEE2V0B36VL1I3", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760156815869397, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760156815, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a747590a-f839-416d-a8db-984c70f7eca4", "db_session_id": "FP8HWSKEE2V0B36VL1I3", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760156815872300, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760156815, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a747590a-f839-416d-a8db-984c70f7eca4", "db_session_id": "FP8HWSKEE2V0B36VL1I3", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760156815874163, "job": 1, "event": "recovery_finished"}
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x564464204000
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: DB pointer 0x564464f9fa00
Oct 11 04:26:55 compute-0 ceph-osd[88467]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct 11 04:26:55 compute-0 ceph-osd[88467]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Oct 11 04:26:55 compute-0 ceph-osd[88467]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 04:26:55 compute-0 ceph-osd[88467]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5644640971f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5644640971f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5644640971f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5644640971f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5644640971f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5644640971f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5644640971f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564464097090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 1.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564464097090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 1.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564464097090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 1.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5644640971f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5644640971f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 11 04:26:55 compute-0 ceph-osd[88467]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Oct 11 04:26:55 compute-0 ceph-osd[88467]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Oct 11 04:26:55 compute-0 ceph-osd[88467]: _get_class not permitted to load lua
Oct 11 04:26:55 compute-0 ceph-osd[88467]: _get_class not permitted to load sdk
Oct 11 04:26:55 compute-0 ceph-osd[88467]: _get_class not permitted to load test_remote_reads
Oct 11 04:26:55 compute-0 ceph-osd[88467]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Oct 11 04:26:55 compute-0 ceph-osd[88467]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Oct 11 04:26:55 compute-0 ceph-osd[88467]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Oct 11 04:26:55 compute-0 ceph-osd[88467]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Oct 11 04:26:55 compute-0 ceph-osd[88467]: osd.1 0 load_pgs
Oct 11 04:26:55 compute-0 ceph-osd[88467]: osd.1 0 load_pgs opened 0 pgs
Oct 11 04:26:55 compute-0 ceph-osd[88467]: osd.1 0 log_to_monitors true
Oct 11 04:26:55 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-osd-1[88463]: 2025-10-11T04:26:55.911+0000 7fa5bf053740 -1 osd.1 0 log_to_monitors true
Oct 11 04:26:55 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0) v1
Oct 11 04:26:55 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/1912859281,v1:192.168.122.100:6807/1912859281]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Oct 11 04:26:56 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 04:26:56 compute-0 ceph-mgr[74542]: [balancer INFO root] Optimize plan auto_2025-10-11_04:26:56
Oct 11 04:26:56 compute-0 ceph-mgr[74542]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:26:56 compute-0 ceph-mgr[74542]: [balancer INFO root] do_upmap
Oct 11 04:26:56 compute-0 ceph-mgr[74542]: [balancer INFO root] No pools available
Oct 11 04:26:56 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:26:56 compute-0 ceph-mgr[74542]: [devicehealth INFO root] creating mgr pool
Oct 11 04:26:56 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0) v1
Oct 11 04:26:56 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Oct 11 04:26:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:26:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:26:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:26:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:26:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:26:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:26:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:26:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:26:56 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-osd-2-activate-test[88887]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Oct 11 04:26:56 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-osd-2-activate-test[88887]:                             [--no-systemd] [--no-tmpfs]
Oct 11 04:26:56 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-osd-2-activate-test[88887]: ceph-volume activate: error: unrecognized arguments: --bad-option
Oct 11 04:26:56 compute-0 systemd[1]: libpod-fcd69362de0ae7d578bb734cb60decfe4203ea32d688f32e816f96aa57dca88e.scope: Deactivated successfully.
Oct 11 04:26:56 compute-0 podman[88691]: 2025-10-11 04:26:56.410883649 +0000 UTC m=+0.848184742 container died fcd69362de0ae7d578bb734cb60decfe4203ea32d688f32e816f96aa57dca88e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-osd-2-activate-test, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:26:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-3428eb02bdd0e6d0d7b81e65025ac3de781a6f6fdaa5a6c3a28bde6502dce9a8-merged.mount: Deactivated successfully.
Oct 11 04:26:56 compute-0 podman[88691]: 2025-10-11 04:26:56.490240777 +0000 UTC m=+0.927541840 container remove fcd69362de0ae7d578bb734cb60decfe4203ea32d688f32e816f96aa57dca88e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-osd-2-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 11 04:26:56 compute-0 systemd[1]: libpod-conmon-fcd69362de0ae7d578bb734cb60decfe4203ea32d688f32e816f96aa57dca88e.scope: Deactivated successfully.
Oct 11 04:26:56 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Oct 11 04:26:56 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 11 04:26:56 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/1912859281,v1:192.168.122.100:6807/1912859281]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Oct 11 04:26:56 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Oct 11 04:26:56 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e10 e10: 3 total, 1 up, 3 in
Oct 11 04:26:56 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e10 crush map has features 3314933000852226048, adjusting msgr requires
Oct 11 04:26:56 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Oct 11 04:26:56 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Oct 11 04:26:56 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Oct 11 04:26:56 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e10: 3 total, 1 up, 3 in
Oct 11 04:26:56 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Oct 11 04:26:56 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/1912859281,v1:192.168.122.100:6807/1912859281]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct 11 04:26:56 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e10 create-or-move crush item name 'osd.1' initial_weight 0.0195 at location {host=compute-0,root=default}
Oct 11 04:26:56 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 11 04:26:56 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 11 04:26:56 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 11 04:26:56 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 11 04:26:56 compute-0 ceph-osd[87458]: osd.0 10 crush map has features 288514051259236352, adjusting msgr requires for clients
Oct 11 04:26:56 compute-0 ceph-osd[87458]: osd.0 10 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Oct 11 04:26:56 compute-0 ceph-osd[87458]: osd.0 10 crush map has features 3314933000852226048, adjusting msgr requires for osds
Oct 11 04:26:56 compute-0 ceph-mgr[74542]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 11 04:26:56 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0) v1
Oct 11 04:26:56 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Oct 11 04:26:56 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 10 pg[1.0( empty local-lis/les=0/0 n=0 ec=10/10 lis/c=0/0 les/c/f=0/0/0 sis=10) [0] r=0 lpr=10 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:26:56 compute-0 ceph-mgr[74542]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 11 04:26:56 compute-0 ceph-mon[74243]: OSD bench result of 8423.299601 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct 11 04:26:56 compute-0 ceph-mon[74243]: osd.0 [v2:192.168.122.100:6802/3630572390,v1:192.168.122.100:6803/3630572390] boot
Oct 11 04:26:56 compute-0 ceph-mon[74243]: osdmap e9: 3 total, 1 up, 3 in
Oct 11 04:26:56 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 11 04:26:56 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 11 04:26:56 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 11 04:26:56 compute-0 ceph-mon[74243]: from='osd.1 [v2:192.168.122.100:6806/1912859281,v1:192.168.122.100:6807/1912859281]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Oct 11 04:26:56 compute-0 ceph-mon[74243]: pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 04:26:56 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Oct 11 04:26:56 compute-0 systemd[1]: Reloading.
Oct 11 04:26:56 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Oct 11 04:26:56 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Oct 11 04:26:56 compute-0 systemd-rc-local-generator[89166]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:26:56 compute-0 systemd-sysv-generator[89170]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:26:57 compute-0 systemd[1]: Reloading.
Oct 11 04:26:57 compute-0 systemd-sysv-generator[89212]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:26:57 compute-0 systemd-rc-local-generator[89209]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:26:57 compute-0 systemd[1]: Starting Ceph osd.2 for 166d0489-2ae7-59eb-961c-c1b5cda4b45a...
Oct 11 04:26:57 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Oct 11 04:26:57 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/1912859281,v1:192.168.122.100:6807/1912859281]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Oct 11 04:26:57 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Oct 11 04:26:57 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e11 e11: 3 total, 1 up, 3 in
Oct 11 04:26:57 compute-0 ceph-osd[88467]: osd.1 0 done with init, starting boot process
Oct 11 04:26:57 compute-0 ceph-osd[88467]: osd.1 0 start_boot
Oct 11 04:26:57 compute-0 ceph-osd[88467]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Oct 11 04:26:57 compute-0 ceph-osd[88467]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Oct 11 04:26:57 compute-0 ceph-osd[88467]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Oct 11 04:26:57 compute-0 ceph-osd[88467]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Oct 11 04:26:57 compute-0 ceph-osd[88467]: osd.1 0  bench count 12288000 bsize 4 KiB
Oct 11 04:26:57 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e11: 3 total, 1 up, 3 in
Oct 11 04:26:57 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 11 pg[1.0( empty local-lis/les=0/0 n=0 ec=10/10 lis/c=0/0 les/c/f=0/0/0 sis=11) [] r=-1 lpr=11 pi=[10,11)/0 crt=0'0 mlcod 0'0 unknown mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:26:57 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 11 pg[1.0( empty local-lis/les=0/0 n=0 ec=10/10 lis/c=0/0 les/c/f=0/0/0 sis=11) [] r=-1 lpr=11 pi=[10,11)/0 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 04:26:57 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 11 04:26:57 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 11 04:26:57 compute-0 ceph-mgr[74542]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 11 04:26:57 compute-0 ceph-mgr[74542]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 11 04:26:57 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 11 04:26:57 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 11 04:26:57 compute-0 ceph-mon[74243]: from='osd.1 [v2:192.168.122.100:6806/1912859281,v1:192.168.122.100:6807/1912859281]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Oct 11 04:26:57 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Oct 11 04:26:57 compute-0 ceph-mon[74243]: osdmap e10: 3 total, 1 up, 3 in
Oct 11 04:26:57 compute-0 ceph-mon[74243]: from='osd.1 [v2:192.168.122.100:6806/1912859281,v1:192.168.122.100:6807/1912859281]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct 11 04:26:57 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 11 04:26:57 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 11 04:26:57 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Oct 11 04:26:57 compute-0 ceph-mgr[74542]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1912859281; not ready for session (expect reconnect)
Oct 11 04:26:57 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 11 04:26:57 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 11 04:26:57 compute-0 ceph-mgr[74542]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 11 04:26:57 compute-0 podman[89264]: 2025-10-11 04:26:57.856152262 +0000 UTC m=+0.057370301 container create 7ad900265c2e9c5c9611ba8ba015653eec5ce0c71e09976d0b148e0a3369158e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-osd-2-activate, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 11 04:26:57 compute-0 podman[89264]: 2025-10-11 04:26:57.827483678 +0000 UTC m=+0.028701747 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:26:57 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:26:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b65b3208d2e7c3fbf4ce614481354463aa0d7026bcbfd4d675a247763f08600e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b65b3208d2e7c3fbf4ce614481354463aa0d7026bcbfd4d675a247763f08600e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b65b3208d2e7c3fbf4ce614481354463aa0d7026bcbfd4d675a247763f08600e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b65b3208d2e7c3fbf4ce614481354463aa0d7026bcbfd4d675a247763f08600e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b65b3208d2e7c3fbf4ce614481354463aa0d7026bcbfd4d675a247763f08600e/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:57 compute-0 podman[89264]: 2025-10-11 04:26:57.983555048 +0000 UTC m=+0.184773107 container init 7ad900265c2e9c5c9611ba8ba015653eec5ce0c71e09976d0b148e0a3369158e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-osd-2-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 11 04:26:58 compute-0 podman[89264]: 2025-10-11 04:26:57.999982717 +0000 UTC m=+0.201200796 container start 7ad900265c2e9c5c9611ba8ba015653eec5ce0c71e09976d0b148e0a3369158e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-osd-2-activate, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 11 04:26:58 compute-0 podman[89264]: 2025-10-11 04:26:58.008148101 +0000 UTC m=+0.209366140 container attach 7ad900265c2e9c5c9611ba8ba015653eec5ce0c71e09976d0b148e0a3369158e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-osd-2-activate, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 11 04:26:58 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v33: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Oct 11 04:26:58 compute-0 ceph-mgr[74542]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1912859281; not ready for session (expect reconnect)
Oct 11 04:26:58 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 11 04:26:58 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 11 04:26:58 compute-0 ceph-mgr[74542]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 11 04:26:58 compute-0 ceph-mon[74243]: from='osd.1 [v2:192.168.122.100:6806/1912859281,v1:192.168.122.100:6807/1912859281]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Oct 11 04:26:58 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Oct 11 04:26:58 compute-0 ceph-mon[74243]: osdmap e11: 3 total, 1 up, 3 in
Oct 11 04:26:58 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 11 04:26:58 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 11 04:26:58 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 11 04:26:58 compute-0 ceph-mon[74243]: pgmap v33: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Oct 11 04:26:58 compute-0 sudo[89310]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvxnurtzuxiywkxamyogxgiyhujpjmsy ; /usr/bin/python3'
Oct 11 04:26:58 compute-0 sudo[89310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:26:58 compute-0 python3[89314]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:26:58 compute-0 podman[89323]: 2025-10-11 04:26:58.941245637 +0000 UTC m=+0.057777061 container create df800f57134dad93681dc64493fdc3a321d8b3a89d3c19b5429131dab03e8ad6 (image=quay.io/ceph/ceph:v18, name=fervent_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:26:58 compute-0 systemd[1]: Started libpod-conmon-df800f57134dad93681dc64493fdc3a321d8b3a89d3c19b5429131dab03e8ad6.scope.
Oct 11 04:26:59 compute-0 podman[89323]: 2025-10-11 04:26:58.914426529 +0000 UTC m=+0.030957993 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:26:59 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:26:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29582efc3c59f99bf40e305c73d0324069f06d2918b2f05906eedf4bd4bc4e89/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29582efc3c59f99bf40e305c73d0324069f06d2918b2f05906eedf4bd4bc4e89/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29582efc3c59f99bf40e305c73d0324069f06d2918b2f05906eedf4bd4bc4e89/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:59 compute-0 podman[89323]: 2025-10-11 04:26:59.045936767 +0000 UTC m=+0.162468201 container init df800f57134dad93681dc64493fdc3a321d8b3a89d3c19b5429131dab03e8ad6 (image=quay.io/ceph/ceph:v18, name=fervent_beaver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 11 04:26:59 compute-0 podman[89323]: 2025-10-11 04:26:59.054958072 +0000 UTC m=+0.171489526 container start df800f57134dad93681dc64493fdc3a321d8b3a89d3c19b5429131dab03e8ad6 (image=quay.io/ceph/ceph:v18, name=fervent_beaver, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 11 04:26:59 compute-0 podman[89323]: 2025-10-11 04:26:59.064830088 +0000 UTC m=+0.181361572 container attach df800f57134dad93681dc64493fdc3a321d8b3a89d3c19b5429131dab03e8ad6 (image=quay.io/ceph/ceph:v18, name=fervent_beaver, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 11 04:26:59 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-osd-2-activate[89280]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct 11 04:26:59 compute-0 bash[89264]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct 11 04:26:59 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-osd-2-activate[89280]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg2-ceph_lv2
Oct 11 04:26:59 compute-0 bash[89264]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg2-ceph_lv2
Oct 11 04:26:59 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-osd-2-activate[89280]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg2-ceph_lv2
Oct 11 04:26:59 compute-0 bash[89264]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg2-ceph_lv2
Oct 11 04:26:59 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-osd-2-activate[89280]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Oct 11 04:26:59 compute-0 bash[89264]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Oct 11 04:26:59 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-osd-2-activate[89280]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg2-ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Oct 11 04:26:59 compute-0 bash[89264]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg2-ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Oct 11 04:26:59 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-osd-2-activate[89280]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct 11 04:26:59 compute-0 bash[89264]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct 11 04:26:59 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-osd-2-activate[89280]: --> ceph-volume raw activate successful for osd ID: 2
Oct 11 04:26:59 compute-0 bash[89264]: --> ceph-volume raw activate successful for osd ID: 2
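The two lines above close out the ceph-volume raw activation of osd.2. A minimal sketch of the equivalent manual sequence, using the same OSD id and LV that the log shows; cephadm's activate container normally performs these steps, so this is illustrative only:

    # Mirrors the commands logged by "ceph-volume raw activate" for osd.2 above.
    OSD_DIR=/var/lib/ceph/osd/ceph-2
    OSD_DEV=/dev/mapper/ceph_vg2-ceph_lv2
    chown -R ceph:ceph "$OSD_DIR"
    ceph-bluestore-tool prime-osd-dir --path "$OSD_DIR" --no-mon-config --dev "$OSD_DEV"
    chown -h ceph:ceph "$OSD_DEV"
    chown -R ceph:ceph "$(readlink -f "$OSD_DEV")"   # resolves to /dev/dm-2 on this host
    ln -s "$OSD_DEV" "$OSD_DIR/block"
    chown -R ceph:ceph "$OSD_DIR"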
Oct 11 04:26:59 compute-0 systemd[1]: libpod-7ad900265c2e9c5c9611ba8ba015653eec5ce0c71e09976d0b148e0a3369158e.scope: Deactivated successfully.
Oct 11 04:26:59 compute-0 systemd[1]: libpod-7ad900265c2e9c5c9611ba8ba015653eec5ce0c71e09976d0b148e0a3369158e.scope: Consumed 1.283s CPU time.
Oct 11 04:26:59 compute-0 podman[89454]: 2025-10-11 04:26:59.33172663 +0000 UTC m=+0.037020014 container died 7ad900265c2e9c5c9611ba8ba015653eec5ce0c71e09976d0b148e0a3369158e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-osd-2-activate, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:26:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-b65b3208d2e7c3fbf4ce614481354463aa0d7026bcbfd4d675a247763f08600e-merged.mount: Deactivated successfully.
Oct 11 04:26:59 compute-0 podman[89454]: 2025-10-11 04:26:59.431117557 +0000 UTC m=+0.136410921 container remove 7ad900265c2e9c5c9611ba8ba015653eec5ce0c71e09976d0b148e0a3369158e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-osd-2-activate, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 04:26:59 compute-0 ceph-mgr[74542]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1912859281; not ready for session (expect reconnect)
Oct 11 04:26:59 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 11 04:26:59 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 11 04:26:59 compute-0 ceph-mgr[74542]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 11 04:26:59 compute-0 ceph-mon[74243]: purged_snaps scrub starts
Oct 11 04:26:59 compute-0 ceph-mon[74243]: purged_snaps scrub ok
Oct 11 04:26:59 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 11 04:26:59 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 11 04:26:59 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Oct 11 04:26:59 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2659303875' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 11 04:26:59 compute-0 fervent_beaver[89350]: 
Oct 11 04:26:59 compute-0 fervent_beaver[89350]: {"fsid":"166d0489-2ae7-59eb-961c-c1b5cda4b45a","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":109,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":11,"num_osds":3,"num_up_osds":1,"osd_up_since":1760156815,"num_in_osds":3,"osd_in_since":1760156798,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"unknown","count":1}],"num_pgs":1,"num_pools":1,"num_objects":0,"data_bytes":0,"bytes_used":446984192,"bytes_avail":21023657984,"bytes_total":21470642176,"unknown_pgs_ratio":1},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-10-11T04:26:58.031384+0000","services":{"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
Oct 11 04:26:59 compute-0 podman[89533]: 2025-10-11 04:26:59.680218666 +0000 UTC m=+0.052607682 container create 59da2b9b4ac8997fcfaa062268771a95c1c634cfdcf1217d23118cd3f5a02614 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-osd-2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:26:59 compute-0 systemd[1]: libpod-df800f57134dad93681dc64493fdc3a321d8b3a89d3c19b5429131dab03e8ad6.scope: Deactivated successfully.
Oct 11 04:26:59 compute-0 podman[89323]: 2025-10-11 04:26:59.706626585 +0000 UTC m=+0.823158029 container died df800f57134dad93681dc64493fdc3a321d8b3a89d3c19b5429131dab03e8ad6 (image=quay.io/ceph/ceph:v18, name=fervent_beaver, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 11 04:26:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/160017c1cc7d82cfb321804a9d48b9dd3e48446ed3c5c8f5edbce730bfdebf8d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/160017c1cc7d82cfb321804a9d48b9dd3e48446ed3c5c8f5edbce730bfdebf8d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/160017c1cc7d82cfb321804a9d48b9dd3e48446ed3c5c8f5edbce730bfdebf8d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/160017c1cc7d82cfb321804a9d48b9dd3e48446ed3c5c8f5edbce730bfdebf8d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/160017c1cc7d82cfb321804a9d48b9dd3e48446ed3c5c8f5edbce730bfdebf8d/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:59 compute-0 podman[89533]: 2025-10-11 04:26:59.655008408 +0000 UTC m=+0.027397474 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:26:59 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e11 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:26:59 compute-0 podman[89533]: 2025-10-11 04:26:59.779948392 +0000 UTC m=+0.152337428 container init 59da2b9b4ac8997fcfaa062268771a95c1c634cfdcf1217d23118cd3f5a02614 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-osd-2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 11 04:26:59 compute-0 podman[89533]: 2025-10-11 04:26:59.789517021 +0000 UTC m=+0.161906037 container start 59da2b9b4ac8997fcfaa062268771a95c1c634cfdcf1217d23118cd3f5a02614 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-osd-2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:26:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-29582efc3c59f99bf40e305c73d0324069f06d2918b2f05906eedf4bd4bc4e89-merged.mount: Deactivated successfully.
Oct 11 04:26:59 compute-0 bash[89533]: 59da2b9b4ac8997fcfaa062268771a95c1c634cfdcf1217d23118cd3f5a02614
Oct 11 04:26:59 compute-0 ceph-osd[89565]: set uid:gid to 167:167 (ceph:ceph)
Oct 11 04:26:59 compute-0 ceph-osd[89565]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Oct 11 04:26:59 compute-0 ceph-osd[89565]: pidfile_write: ignore empty --pid-file
Oct 11 04:26:59 compute-0 systemd[1]: Started Ceph osd.2 for 166d0489-2ae7-59eb-961c-c1b5cda4b45a.
Oct 11 04:26:59 compute-0 ceph-osd[89565]: bdev(0x56328d12b800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct 11 04:26:59 compute-0 ceph-osd[89565]: bdev(0x56328d12b800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct 11 04:26:59 compute-0 ceph-osd[89565]: bdev(0x56328d12b800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 11 04:26:59 compute-0 ceph-osd[89565]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 11 04:26:59 compute-0 ceph-osd[89565]: bdev(0x56328df63800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct 11 04:26:59 compute-0 ceph-osd[89565]: bdev(0x56328df63800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct 11 04:26:59 compute-0 ceph-osd[89565]: bdev(0x56328df63800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 11 04:26:59 compute-0 ceph-osd[89565]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Oct 11 04:26:59 compute-0 ceph-osd[89565]: bdev(0x56328df63800 /var/lib/ceph/osd/ceph-2/block) close
Oct 11 04:26:59 compute-0 sudo[88557]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:59 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:26:59 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:59 compute-0 podman[89323]: 2025-10-11 04:26:59.918882125 +0000 UTC m=+1.035413579 container remove df800f57134dad93681dc64493fdc3a321d8b3a89d3c19b5429131dab03e8ad6 (image=quay.io/ceph/ceph:v18, name=fervent_beaver, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:26:59 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:26:59 compute-0 systemd[1]: libpod-conmon-df800f57134dad93681dc64493fdc3a321d8b3a89d3c19b5429131dab03e8ad6.scope: Deactivated successfully.
Oct 11 04:26:59 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:26:59 compute-0 sudo[89310]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:00 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v34: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Oct 11 04:27:00 compute-0 sudo[89578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:27:00 compute-0 sudo[89578]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:00 compute-0 sudo[89578]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:00 compute-0 ceph-osd[89565]: bdev(0x56328d12b800 /var/lib/ceph/osd/ceph-2/block) close
Oct 11 04:27:00 compute-0 sudo[89603]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:27:00 compute-0 sudo[89603]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:00 compute-0 sudo[89603]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:00 compute-0 sudo[89671]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wktvczloygfrznexnkicysdemuggdemn ; /usr/bin/python3'
Oct 11 04:27:00 compute-0 sudo[89671]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:27:00 compute-0 sudo[89636]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:27:00 compute-0 sudo[89636]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:00 compute-0 sudo[89636]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:00 compute-0 sudo[89681]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -- raw list --format json
Oct 11 04:27:00 compute-0 sudo[89681]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:00 compute-0 ceph-osd[89565]: starting osd.2 osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal
Oct 11 04:27:00 compute-0 ceph-osd[89565]: load: jerasure load: lrc 
Oct 11 04:27:00 compute-0 ceph-osd[89565]: bdev(0x56328dff6c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct 11 04:27:00 compute-0 ceph-osd[89565]: bdev(0x56328dff6c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct 11 04:27:00 compute-0 ceph-osd[89565]: bdev(0x56328dff6c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 11 04:27:00 compute-0 ceph-osd[89565]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 11 04:27:00 compute-0 ceph-osd[89565]: bdev(0x56328dff6c00 /var/lib/ceph/osd/ceph-2/block) close
Oct 11 04:27:00 compute-0 python3[89678]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
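This task creates the vms pool with the replicated rule and PG autoscaling enabled, wrapped in the quay.io/ceph/ceph:v18 container as before. The bare equivalent, plus an optional follow-up check that is not part of the captured playbook (a sketch):

    ceph osd pool create vms replicated_rule --autoscale-mode on
    ceph osd pool autoscale-status    # optional: confirm the pg_autoscaler is tracking the new pool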
Oct 11 04:27:00 compute-0 podman[89711]: 2025-10-11 04:27:00.508505631 +0000 UTC m=+0.058985011 container create 4c480e75549218cc0e15f2618ed901e9ec52e01ec822aa9ea4df5af2a9ff56f8 (image=quay.io/ceph/ceph:v18, name=laughing_ishizaka, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 11 04:27:00 compute-0 systemd[1]: Started libpod-conmon-4c480e75549218cc0e15f2618ed901e9ec52e01ec822aa9ea4df5af2a9ff56f8.scope.
Oct 11 04:27:00 compute-0 ceph-osd[88467]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 30.444 iops: 7793.542 elapsed_sec: 0.385
Oct 11 04:27:00 compute-0 ceph-osd[88467]: log_channel(cluster) log [WRN] : OSD bench result of 7793.541723 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
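The bench result of roughly 7793 IOPS falls outside the 50-500 IOPS sanity window, so osd.1 keeps the default mClock capacity of 315 IOPS. Per the warning, the capacity can be measured with an external tool such as fio and then pinned explicitly; a sketch, with 315 standing in as a placeholder for the measured value:

    # Placeholder value -- substitute the IOPS figure measured with fio for this device.
    ceph config set osd.1 osd_mclock_max_capacity_iops_hdd 315
    ceph config get osd.1 osd_mclock_max_capacity_iops_hdd   # verify the override took effect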
Oct 11 04:27:00 compute-0 ceph-osd[88467]: osd.1 0 waiting for initial osdmap
Oct 11 04:27:00 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-osd-1[88463]: 2025-10-11T04:27:00.544+0000 7fa5bafd3640 -1 osd.1 0 waiting for initial osdmap
Oct 11 04:27:00 compute-0 ceph-osd[88467]: osd.1 11 crush map has features 288514051259236352, adjusting msgr requires for clients
Oct 11 04:27:00 compute-0 ceph-osd[88467]: osd.1 11 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Oct 11 04:27:00 compute-0 ceph-osd[88467]: osd.1 11 crush map has features 3314933000852226048, adjusting msgr requires for osds
Oct 11 04:27:00 compute-0 ceph-osd[88467]: osd.1 11 check_osdmap_features require_osd_release unknown -> reef
Oct 11 04:27:00 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:27:00 compute-0 ceph-osd[88467]: osd.1 11 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct 11 04:27:00 compute-0 ceph-osd[88467]: osd.1 11 set_numa_affinity not setting numa affinity
Oct 11 04:27:00 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-osd-1[88463]: 2025-10-11T04:27:00.571+0000 7fa5b65fb640 -1 osd.1 11 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct 11 04:27:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f2f433ef5fef84ec0cb27b04ed538b74a135265bc888fa722e0744ad237db20/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f2f433ef5fef84ec0cb27b04ed538b74a135265bc888fa722e0744ad237db20/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:00 compute-0 ceph-osd[88467]: osd.1 11 _collect_metadata loop4:  no unique device id for loop4: fallback method has no model nor serial
Oct 11 04:27:00 compute-0 podman[89711]: 2025-10-11 04:27:00.485320114 +0000 UTC m=+0.035799524 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:27:00 compute-0 podman[89711]: 2025-10-11 04:27:00.591213773 +0000 UTC m=+0.141693173 container init 4c480e75549218cc0e15f2618ed901e9ec52e01ec822aa9ea4df5af2a9ff56f8 (image=quay.io/ceph/ceph:v18, name=laughing_ishizaka, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Oct 11 04:27:00 compute-0 podman[89711]: 2025-10-11 04:27:00.606126125 +0000 UTC m=+0.156605505 container start 4c480e75549218cc0e15f2618ed901e9ec52e01ec822aa9ea4df5af2a9ff56f8 (image=quay.io/ceph/ceph:v18, name=laughing_ishizaka, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 11 04:27:00 compute-0 ceph-mgr[74542]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1912859281; not ready for session (expect reconnect)
Oct 11 04:27:00 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 11 04:27:00 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 11 04:27:00 compute-0 ceph-mgr[74542]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 11 04:27:00 compute-0 podman[89711]: 2025-10-11 04:27:00.615137059 +0000 UTC m=+0.165616449 container attach 4c480e75549218cc0e15f2618ed901e9ec52e01ec822aa9ea4df5af2a9ff56f8 (image=quay.io/ceph/ceph:v18, name=laughing_ishizaka, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:27:00 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/2659303875' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 11 04:27:00 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:00 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:00 compute-0 ceph-mon[74243]: pgmap v34: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Oct 11 04:27:00 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 11 04:27:00 compute-0 ceph-osd[89565]: bdev(0x56328dff6c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct 11 04:27:00 compute-0 ceph-osd[89565]: bdev(0x56328dff6c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct 11 04:27:00 compute-0 ceph-osd[89565]: bdev(0x56328dff6c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 11 04:27:00 compute-0 ceph-osd[89565]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 11 04:27:00 compute-0 ceph-osd[89565]: bdev(0x56328dff6c00 /var/lib/ceph/osd/ceph-2/block) close
Oct 11 04:27:00 compute-0 podman[89774]: 2025-10-11 04:27:00.760978724 +0000 UTC m=+0.050167521 container create f59fa346d1157749c5a4e65315f0620e24b86932319473ef959ce44c54d9e4a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_knuth, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:27:00 compute-0 systemd[1]: Started libpod-conmon-f59fa346d1157749c5a4e65315f0620e24b86932319473ef959ce44c54d9e4a3.scope.
Oct 11 04:27:00 compute-0 podman[89774]: 2025-10-11 04:27:00.739774066 +0000 UTC m=+0.028962853 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:27:00 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:27:00 compute-0 podman[89774]: 2025-10-11 04:27:00.854181857 +0000 UTC m=+0.143370654 container init f59fa346d1157749c5a4e65315f0620e24b86932319473ef959ce44c54d9e4a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_knuth, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:27:00 compute-0 podman[89774]: 2025-10-11 04:27:00.859157481 +0000 UTC m=+0.148346258 container start f59fa346d1157749c5a4e65315f0620e24b86932319473ef959ce44c54d9e4a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_knuth, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:27:00 compute-0 podman[89774]: 2025-10-11 04:27:00.863028098 +0000 UTC m=+0.152216885 container attach f59fa346d1157749c5a4e65315f0620e24b86932319473ef959ce44c54d9e4a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_knuth, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 11 04:27:00 compute-0 objective_knuth[89790]: 167 167
Oct 11 04:27:00 compute-0 systemd[1]: libpod-f59fa346d1157749c5a4e65315f0620e24b86932319473ef959ce44c54d9e4a3.scope: Deactivated successfully.
Oct 11 04:27:00 compute-0 podman[89774]: 2025-10-11 04:27:00.867219292 +0000 UTC m=+0.156408099 container died f59fa346d1157749c5a4e65315f0620e24b86932319473ef959ce44c54d9e4a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_knuth, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:27:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-08a0ecb383d1c2f5b1cc7a69ee6100dbae2c3fe6ca60be8c0d6d869697d29ce7-merged.mount: Deactivated successfully.
Oct 11 04:27:00 compute-0 podman[89774]: 2025-10-11 04:27:00.921432364 +0000 UTC m=+0.210621171 container remove f59fa346d1157749c5a4e65315f0620e24b86932319473ef959ce44c54d9e4a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_knuth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 11 04:27:00 compute-0 systemd[1]: libpod-conmon-f59fa346d1157749c5a4e65315f0620e24b86932319473ef959ce44c54d9e4a3.scope: Deactivated successfully.
Oct 11 04:27:00 compute-0 ceph-osd[89565]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Oct 11 04:27:00 compute-0 ceph-osd[89565]: osd.2:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Oct 11 04:27:00 compute-0 ceph-osd[89565]: bdev(0x56328dff6c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct 11 04:27:00 compute-0 ceph-osd[89565]: bdev(0x56328dff6c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct 11 04:27:00 compute-0 ceph-osd[89565]: bdev(0x56328dff6c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 11 04:27:00 compute-0 ceph-osd[89565]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 11 04:27:00 compute-0 ceph-osd[89565]: bdev(0x56328dff7400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct 11 04:27:00 compute-0 ceph-osd[89565]: bdev(0x56328dff7400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct 11 04:27:00 compute-0 ceph-osd[89565]: bdev(0x56328dff7400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 11 04:27:00 compute-0 ceph-osd[89565]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Oct 11 04:27:00 compute-0 ceph-osd[89565]: bluefs mount
Oct 11 04:27:00 compute-0 ceph-osd[89565]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct 11 04:27:00 compute-0 ceph-osd[89565]: bluefs mount shared_bdev_used = 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: RocksDB version: 7.9.2
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Git sha 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Compile date 2025-05-06 23:30:25
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: DB SUMMARY
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: DB Session ID:  037VRKM435FQKLAMFR8S
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: CURRENT file:  CURRENT
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: IDENTITY file:  IDENTITY
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                         Options.error_if_exists: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                       Options.create_if_missing: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                         Options.paranoid_checks: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                                     Options.env: 0x56328dfb5c70
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                                Options.info_log: 0x56328d1b28a0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                Options.max_file_opening_threads: 16
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                              Options.statistics: (nil)
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                               Options.use_fsync: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                       Options.max_log_file_size: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                         Options.allow_fallocate: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                        Options.use_direct_reads: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:          Options.create_missing_column_families: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                              Options.db_log_dir: 
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                                 Options.wal_dir: db.wal
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                   Options.advise_random_on_open: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                    Options.write_buffer_manager: 0x56328e0d0460
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                            Options.rate_limiter: (nil)
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                  Options.unordered_write: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                               Options.row_cache: None
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                              Options.wal_filter: None
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:             Options.allow_ingest_behind: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:             Options.two_write_queues: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:             Options.manual_wal_flush: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:             Options.wal_compression: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:             Options.atomic_flush: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                 Options.log_readahead_size: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:             Options.allow_data_in_errors: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:             Options.db_host_id: __hostname__
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:             Options.max_background_jobs: 4
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:             Options.max_background_compactions: -1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:             Options.max_subcompactions: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                          Options.max_open_files: -1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                          Options.bytes_per_sync: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                  Options.max_background_flushes: -1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Compression algorithms supported:
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         kZSTD supported: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         kXpressCompression supported: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         kBZip2Compression supported: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         kZSTDNotFinalCompression supported: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         kLZ4Compression supported: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         kZlibCompression supported: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         kLZ4HCCompression supported: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         kSnappyCompression supported: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:        Options.compaction_filter: None
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56328d1b22c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56328d19f1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:          Options.compression: LZ4
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:             Options.num_levels: 7
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:           Options.merge_operator: None
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:        Options.compaction_filter: None
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56328d1b22c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56328d19f1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:          Options.compression: LZ4
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:             Options.num_levels: 7
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:           Options.merge_operator: None
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:        Options.compaction_filter: None
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56328d1b22c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56328d19f1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:          Options.compression: LZ4
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:             Options.num_levels: 7
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:           Options.merge_operator: None
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:        Options.compaction_filter: None
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56328d1b22c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56328d19f1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:          Options.compression: LZ4
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:             Options.num_levels: 7
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:           Options.merge_operator: None
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:        Options.compaction_filter: None
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56328d1b22c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56328d19f1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:          Options.compression: LZ4
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:             Options.num_levels: 7
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:27:00 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:           Options.merge_operator: None
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:        Options.compaction_filter: None
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56328d1b22c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56328d19f1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:          Options.compression: LZ4
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:             Options.num_levels: 7
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:27:00 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e12 e12: 3 total, 2 up, 3 in
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:           Options.merge_operator: None
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:        Options.compaction_filter: None
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56328d1b22c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56328d19f1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:          Options.compression: LZ4
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:             Options.num_levels: 7
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:           Options.merge_operator: None
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:        Options.compaction_filter: None
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56328d1b2240)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56328d19f090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:          Options.compression: LZ4
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:             Options.num_levels: 7
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:27:00 compute-0 ceph-mon[74243]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.100:6806/1912859281,v1:192.168.122.100:6807/1912859281] boot
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:27:00 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e12: 3 total, 2 up, 3 in
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:           Options.merge_operator: None
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:        Options.compaction_filter: None
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56328d1b2240)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56328d19f090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:          Options.compression: LZ4
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:             Options.num_levels: 7
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:27:00 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:27:00 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 11 04:27:00 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 11 04:27:00 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 11 04:27:00 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 11 04:27:00 compute-0 ceph-mgr[74542]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 11 04:27:00 compute-0 ceph-osd[88467]: osd.1 12 state: booting -> active
Oct 11 04:27:00 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 12 pg[1.0( empty local-lis/les=0/0 n=0 ec=10/10 lis/c=0/0 les/c/f=0/0/0 sis=12) [1] r=-1 lpr=12 pi=[10,12)/0 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [1], acting [] -> [1], acting_primary ? -> 1, up_primary ? -> 1, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:27:00 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 12 pg[1.0( empty local-lis/les=0/0 n=0 ec=10/10 lis/c=0/0 les/c/f=0/0/0 sis=12) [1] r=-1 lpr=12 pi=[10,12)/0 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 04:27:00 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 12 pg[1.0( empty local-lis/les=0/0 n=0 ec=10/10 lis/c=0/0 les/c/f=0/0/0 sis=12) [1] r=0 lpr=12 pi=[10,12)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:           Options.merge_operator: None
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:        Options.compaction_filter: None
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56328d1b2240)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56328d19f090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:          Options.compression: LZ4
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:             Options.num_levels: 7
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 2f251b93-319a-4782-8f94-4f50ae69b400
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760156820976922, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760156820977094, "job": 1, "event": "recovery_finished"}
Oct 11 04:27:01 compute-0 ceph-osd[89565]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old nid_max 1025
Oct 11 04:27:01 compute-0 ceph-osd[89565]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old blobid_max 10240
Oct 11 04:27:01 compute-0 ceph-osd[89565]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Oct 11 04:27:01 compute-0 ceph-osd[89565]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta min_alloc_size 0x1000
Oct 11 04:27:01 compute-0 ceph-osd[89565]: freelist init
Oct 11 04:27:01 compute-0 ceph-osd[89565]: freelist _read_cfg
Oct 11 04:27:01 compute-0 ceph-osd[89565]: bluestore(/var/lib/ceph/osd/ceph-2) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Oct 11 04:27:01 compute-0 ceph-osd[89565]: bluefs umount
Oct 11 04:27:01 compute-0 ceph-osd[89565]: bdev(0x56328dff7400 /var/lib/ceph/osd/ceph-2/block) close
Oct 11 04:27:01 compute-0 podman[90026]: 2025-10-11 04:27:01.158710618 +0000 UTC m=+0.059834803 container create 5c143c6b4e6b730537c01ba1e9dd0865515a4566ed7b1d6de1db284e82a91ae2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_payne, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:27:01 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Oct 11 04:27:01 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2569748566' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 11 04:27:01 compute-0 podman[90026]: 2025-10-11 04:27:01.125659854 +0000 UTC m=+0.026784089 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: bdev(0x56328dff7400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct 11 04:27:01 compute-0 ceph-osd[89565]: bdev(0x56328dff7400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct 11 04:27:01 compute-0 ceph-osd[89565]: bdev(0x56328dff7400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 11 04:27:01 compute-0 ceph-osd[89565]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Oct 11 04:27:01 compute-0 ceph-osd[89565]: bluefs mount
Oct 11 04:27:01 compute-0 ceph-osd[89565]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct 11 04:27:01 compute-0 systemd[1]: Started libpod-conmon-5c143c6b4e6b730537c01ba1e9dd0865515a4566ed7b1d6de1db284e82a91ae2.scope.
Oct 11 04:27:01 compute-0 ceph-osd[89565]: bluefs mount shared_bdev_used = 4718592
Oct 11 04:27:01 compute-0 ceph-osd[89565]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: RocksDB version: 7.9.2
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Git sha 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Compile date 2025-05-06 23:30:25
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: DB SUMMARY
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: DB Session ID:  037VRKM435FQKLAMFR8T
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: CURRENT file:  CURRENT
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: IDENTITY file:  IDENTITY
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                         Options.error_if_exists: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                       Options.create_if_missing: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                         Options.paranoid_checks: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                                     Options.env: 0x56328e178700
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                                Options.info_log: 0x56328dfb16e0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                Options.max_file_opening_threads: 16
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                              Options.statistics: (nil)
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                               Options.use_fsync: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                       Options.max_log_file_size: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                         Options.allow_fallocate: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                        Options.use_direct_reads: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:          Options.create_missing_column_families: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                              Options.db_log_dir: 
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                                 Options.wal_dir: db.wal
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                   Options.advise_random_on_open: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                    Options.write_buffer_manager: 0x56328e0d0820
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                            Options.rate_limiter: (nil)
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                  Options.unordered_write: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                               Options.row_cache: None
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                              Options.wal_filter: None
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:             Options.allow_ingest_behind: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:             Options.two_write_queues: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:             Options.manual_wal_flush: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:             Options.wal_compression: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:             Options.atomic_flush: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                 Options.log_readahead_size: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:             Options.allow_data_in_errors: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:             Options.db_host_id: __hostname__
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:             Options.max_background_jobs: 4
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:             Options.max_background_compactions: -1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:             Options.max_subcompactions: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                          Options.max_open_files: -1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                          Options.bytes_per_sync: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                  Options.max_background_flushes: -1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Compression algorithms supported:
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         kZSTD supported: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         kXpressCompression supported: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         kBZip2Compression supported: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         kZSTDNotFinalCompression supported: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         kLZ4Compression supported: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         kZlibCompression supported: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         kLZ4HCCompression supported: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         kSnappyCompression supported: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:        Options.compaction_filter: None
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56328d1a8dc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56328d19f1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:          Options.compression: LZ4
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:             Options.num_levels: 7
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:           Options.merge_operator: None
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:        Options.compaction_filter: None
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56328d1a8dc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56328d19f1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:          Options.compression: LZ4
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:             Options.num_levels: 7
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:           Options.merge_operator: None
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:        Options.compaction_filter: None
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56328d1a8dc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56328d19f1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:          Options.compression: LZ4
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:             Options.num_levels: 7
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:           Options.merge_operator: None
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:        Options.compaction_filter: None
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56328d1a8dc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56328d19f1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:          Options.compression: LZ4
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:             Options.num_levels: 7
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:           Options.merge_operator: None
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:        Options.compaction_filter: None
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56328d1a8dc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56328d19f1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:          Options.compression: LZ4
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:             Options.num_levels: 7
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:27:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/254862f5839b842a02fd528cd697ad8f53d043cf711e23af4998352f059bf313/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/254862f5839b842a02fd528cd697ad8f53d043cf711e23af4998352f059bf313/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:27:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/254862f5839b842a02fd528cd697ad8f53d043cf711e23af4998352f059bf313/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:27:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/254862f5839b842a02fd528cd697ad8f53d043cf711e23af4998352f059bf313/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:           Options.merge_operator: None
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:        Options.compaction_filter: None
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56328d1a8dc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56328d19f1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:          Options.compression: LZ4
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:             Options.num_levels: 7
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:           Options.merge_operator: None
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:        Options.compaction_filter: None
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56328d1a8dc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56328d19f1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:          Options.compression: LZ4
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:             Options.num_levels: 7
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:           Options.merge_operator: None
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:        Options.compaction_filter: None
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56328d1a8d80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56328d19f090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:          Options.compression: LZ4
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:             Options.num_levels: 7
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:27:01 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:           Options.merge_operator: None
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:        Options.compaction_filter: None
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56328d1a8d80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56328d19f090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:          Options.compression: LZ4
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:             Options.num_levels: 7
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:27:01 compute-0 podman[90026]: 2025-10-11 04:27:01.290294227 +0000 UTC m=+0.191418412 container init 5c143c6b4e6b730537c01ba1e9dd0865515a4566ed7b1d6de1db284e82a91ae2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:           Options.merge_operator: None
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:        Options.compaction_filter: None
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56328d1a8d80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56328d19f090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:          Options.compression: LZ4
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:             Options.num_levels: 7
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 2f251b93-319a-4782-8f94-4f50ae69b400
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760156821246871, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760156821251916, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760156821, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2f251b93-319a-4782-8f94-4f50ae69b400", "db_session_id": "037VRKM435FQKLAMFR8T", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760156821254929, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760156821, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2f251b93-319a-4782-8f94-4f50ae69b400", "db_session_id": "037VRKM435FQKLAMFR8T", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760156821261699, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760156821, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2f251b93-319a-4782-8f94-4f50ae69b400", "db_session_id": "037VRKM435FQKLAMFR8T", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760156821270837, "job": 1, "event": "recovery_finished"}
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x56328d30dc00
Oct 11 04:27:01 compute-0 podman[90026]: 2025-10-11 04:27:01.30124617 +0000 UTC m=+0.202370355 container start 5c143c6b4e6b730537c01ba1e9dd0865515a4566ed7b1d6de1db284e82a91ae2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_payne, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: DB pointer 0x56328e0b9a00
Oct 11 04:27:01 compute-0 ceph-osd[89565]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct 11 04:27:01 compute-0 ceph-osd[89565]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super from 4, latest 4
Oct 11 04:27:01 compute-0 ceph-osd[89565]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super done
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 04:27:01 compute-0 ceph-osd[89565]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56328d19f1f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56328d19f1f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56328d19f1f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56328d19f1f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56328d19f1f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56328d19f1f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56328d19f1f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56328d19f090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56328d19f090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56328d19f090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.009       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.009       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.009       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.009       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56328d19f1f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56328d19f1f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 11 04:27:01 compute-0 ceph-osd[89565]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Oct 11 04:27:01 compute-0 ceph-osd[89565]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Oct 11 04:27:01 compute-0 podman[90026]: 2025-10-11 04:27:01.305966478 +0000 UTC m=+0.207090643 container attach 5c143c6b4e6b730537c01ba1e9dd0865515a4566ed7b1d6de1db284e82a91ae2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_payne, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 11 04:27:01 compute-0 ceph-osd[89565]: _get_class not permitted to load lua
Oct 11 04:27:01 compute-0 ceph-osd[89565]: _get_class not permitted to load sdk
Oct 11 04:27:01 compute-0 ceph-osd[89565]: _get_class not permitted to load test_remote_reads
Oct 11 04:27:01 compute-0 ceph-osd[89565]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Oct 11 04:27:01 compute-0 ceph-osd[89565]: osd.2 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Oct 11 04:27:01 compute-0 ceph-osd[89565]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Oct 11 04:27:01 compute-0 ceph-osd[89565]: osd.2 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Oct 11 04:27:01 compute-0 ceph-osd[89565]: osd.2 0 load_pgs
Oct 11 04:27:01 compute-0 ceph-osd[89565]: osd.2 0 load_pgs opened 0 pgs
Oct 11 04:27:01 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-osd-2[89560]: 2025-10-11T04:27:01.311+0000 7f2f596b4740 -1 osd.2 0 log_to_monitors true
Oct 11 04:27:01 compute-0 ceph-osd[89565]: osd.2 0 log_to_monitors true
Oct 11 04:27:01 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0) v1
Oct 11 04:27:01 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/1037659463,v1:192.168.122.100:6811/1037659463]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Oct 11 04:27:01 compute-0 ceph-mon[74243]: OSD bench result of 7793.541723 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
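The mclock scheduler above rejected osd.1's self-benchmark (7793 IOPS falls outside the 50-500 IOPS sanity window for an hdd-class device) and kept the default capacity of 315 IOPS. A minimal sketch of the follow-up the message itself suggests, assuming the device behind osd.1 is /dev/mapper/ceph_vg1-ceph_lv1 (as listed later in this log) and that a short read-only fio run stands in for "other benchmark tools"; the fio parameters and the 1200 IOPS figure are illustrative placeholders, not measured values:

    # Read-only fio probe of the LV backing osd.1 (safe to run against a live device).
    fio --name=osd1-iops --filename=/dev/mapper/ceph_vg1-ceph_lv1 --readonly \
        --ioengine=libaio --direct=1 --rw=randread --bs=4k \
        --iodepth=16 --runtime=60 --time_based --group_reporting
    # Override the mclock capacity with the measured IOPS (1200 is a placeholder).
    ceph config set osd.1 osd_mclock_max_capacity_iops_hdd 1200
    # Verify the override took effect.
    ceph config get osd.1 osd_mclock_max_capacity_iops_hdd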
Oct 11 04:27:01 compute-0 ceph-mon[74243]: osd.1 [v2:192.168.122.100:6806/1912859281,v1:192.168.122.100:6807/1912859281] boot
Oct 11 04:27:01 compute-0 ceph-mon[74243]: osdmap e12: 3 total, 2 up, 3 in
Oct 11 04:27:01 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 11 04:27:01 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 11 04:27:01 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/2569748566' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 11 04:27:01 compute-0 ceph-mon[74243]: from='osd.2 [v2:192.168.122.100:6810/1037659463,v1:192.168.122.100:6811/1037659463]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Oct 11 04:27:01 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Oct 11 04:27:02 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2569748566' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 11 04:27:02 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/1037659463,v1:192.168.122.100:6811/1037659463]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Oct 11 04:27:02 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e13 e13: 3 total, 2 up, 3 in
Oct 11 04:27:02 compute-0 laughing_ishizaka[89738]: pool 'vms' created
Oct 11 04:27:02 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e13: 3 total, 2 up, 3 in
Oct 11 04:27:02 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Oct 11 04:27:02 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/1037659463,v1:192.168.122.100:6811/1037659463]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct 11 04:27:02 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e13 create-or-move crush item name 'osd.2' initial_weight 0.0195 at location {host=compute-0,root=default}
Oct 11 04:27:02 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 11 04:27:02 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 11 04:27:02 compute-0 ceph-mgr[74542]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 11 04:27:02 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 13 pg[1.0( empty local-lis/les=12/13 n=0 ec=10/10 lis/c=0/0 les/c/f=0/0/0 sis=12) [1] r=0 lpr=12 pi=[10,12)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:27:02 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v37: 2 pgs: 1 unknown, 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Oct 11 04:27:02 compute-0 systemd[1]: libpod-4c480e75549218cc0e15f2618ed901e9ec52e01ec822aa9ea4df5af2a9ff56f8.scope: Deactivated successfully.
Oct 11 04:27:02 compute-0 podman[89711]: 2025-10-11 04:27:02.036725822 +0000 UTC m=+1.587205232 container died 4c480e75549218cc0e15f2618ed901e9ec52e01ec822aa9ea4df5af2a9ff56f8 (image=quay.io/ceph/ceph:v18, name=laughing_ishizaka, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 11 04:27:02 compute-0 ceph-mgr[74542]: [devicehealth INFO root] creating main.db for devicehealth
Oct 11 04:27:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-6f2f433ef5fef84ec0cb27b04ed538b74a135265bc888fa722e0744ad237db20-merged.mount: Deactivated successfully.
Oct 11 04:27:02 compute-0 podman[89711]: 2025-10-11 04:27:02.091601799 +0000 UTC m=+1.642081179 container remove 4c480e75549218cc0e15f2618ed901e9ec52e01ec822aa9ea4df5af2a9ff56f8 (image=quay.io/ceph/ceph:v18, name=laughing_ishizaka, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 11 04:27:02 compute-0 systemd[1]: libpod-conmon-4c480e75549218cc0e15f2618ed901e9ec52e01ec822aa9ea4df5af2a9ff56f8.scope: Deactivated successfully.
Oct 11 04:27:02 compute-0 ceph-mgr[74542]: [devicehealth INFO root] Check health
Oct 11 04:27:02 compute-0 sudo[89671]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:02 compute-0 ceph-mgr[74542]: [devicehealth ERROR root] Fail to parse JSON result from daemon osd.2 ()
Oct 11 04:27:02 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Oct 11 04:27:02 compute-0 sudo[90306]:     ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vda
Oct 11 04:27:02 compute-0 sudo[90306]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 11 04:27:02 compute-0 sudo[90306]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167)
Oct 11 04:27:02 compute-0 sudo[90306]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:02 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
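The "Fail to parse JSON result from daemon osd.2 ()" error a few lines above, together with the smartctl invocation in the audit trail here, suggests the SMART probe came back empty; on a virtio disk such as /dev/vda that is the likely outcome, since paravirtual block devices generally expose no SMART data. Re-running the exact command from the log is a quick way to confirm (the command and flags are copied verbatim from the sudo entry above; only the interpretation is an assumption):

    sudo /usr/sbin/smartctl -x --json=o /dev/vda
    # On a virtio-blk device this typically reports that the device type cannot be
    # detected or that SMART is unsupported, leaving the mgr devicehealth module
    # with nothing to parse.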
Oct 11 04:27:02 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Oct 11 04:27:02 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 11 04:27:02 compute-0 sudo[90344]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqdvsphhjmpvznlaeyynjptxrwjiwkjr ; /usr/bin/python3'
Oct 11 04:27:02 compute-0 sudo[90344]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:27:02 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Oct 11 04:27:02 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Oct 11 04:27:02 compute-0 eager_payne[90093]: {
Oct 11 04:27:02 compute-0 eager_payne[90093]:     "235849fc-4683-43e5-9b6a-a0d6f8d1cee8": {
Oct 11 04:27:02 compute-0 eager_payne[90093]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:27:02 compute-0 eager_payne[90093]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:27:02 compute-0 eager_payne[90093]:         "osd_id": 1,
Oct 11 04:27:02 compute-0 eager_payne[90093]:         "osd_uuid": "235849fc-4683-43e5-9b6a-a0d6f8d1cee8",
Oct 11 04:27:02 compute-0 eager_payne[90093]:         "type": "bluestore"
Oct 11 04:27:02 compute-0 eager_payne[90093]:     },
Oct 11 04:27:02 compute-0 eager_payne[90093]:     "29ed28f5-c2da-4c6f-bb64-dc7391248f4a": {
Oct 11 04:27:02 compute-0 eager_payne[90093]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:27:02 compute-0 eager_payne[90093]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:27:02 compute-0 eager_payne[90093]:         "osd_id": 0,
Oct 11 04:27:02 compute-0 eager_payne[90093]:         "osd_uuid": "29ed28f5-c2da-4c6f-bb64-dc7391248f4a",
Oct 11 04:27:02 compute-0 eager_payne[90093]:         "type": "bluestore"
Oct 11 04:27:02 compute-0 eager_payne[90093]:     },
Oct 11 04:27:02 compute-0 eager_payne[90093]:     "30486846-2cc7-4787-8e36-0fb42ef328c5": {
Oct 11 04:27:02 compute-0 eager_payne[90093]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:27:02 compute-0 eager_payne[90093]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:27:02 compute-0 eager_payne[90093]:         "osd_id": 2,
Oct 11 04:27:02 compute-0 eager_payne[90093]:         "osd_uuid": "30486846-2cc7-4787-8e36-0fb42ef328c5",
Oct 11 04:27:02 compute-0 eager_payne[90093]:         "type": "bluestore"
Oct 11 04:27:02 compute-0 eager_payne[90093]:     }
Oct 11 04:27:02 compute-0 eager_payne[90093]: }
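The JSON block printed by the eager_payne container maps each OSD's UUID to its ceph_fsid, backing device, osd_id, and objectstore type. A small sketch for pulling out the osd_id-to-device pairs with jq, assuming the output has been saved to a file (osd_list.json is a hypothetical name; the structure is taken verbatim from the listing above):

    jq -r 'to_entries[] | "osd.\(.value.osd_id) -> \(.value.device)"' osd_list.json
    # osd.1 -> /dev/mapper/ceph_vg1-ceph_lv1
    # osd.0 -> /dev/mapper/ceph_vg0-ceph_lv0
    # osd.2 -> /dev/mapper/ceph_vg2-ceph_lv2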
Oct 11 04:27:02 compute-0 systemd[1]: libpod-5c143c6b4e6b730537c01ba1e9dd0865515a4566ed7b1d6de1db284e82a91ae2.scope: Deactivated successfully.
Oct 11 04:27:02 compute-0 systemd[1]: libpod-5c143c6b4e6b730537c01ba1e9dd0865515a4566ed7b1d6de1db284e82a91ae2.scope: Consumed 1.073s CPU time.
Oct 11 04:27:02 compute-0 podman[90026]: 2025-10-11 04:27:02.369216639 +0000 UTC m=+1.270340854 container died 5c143c6b4e6b730537c01ba1e9dd0865515a4566ed7b1d6de1db284e82a91ae2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_payne, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:27:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-254862f5839b842a02fd528cd697ad8f53d043cf711e23af4998352f059bf313-merged.mount: Deactivated successfully.
Oct 11 04:27:02 compute-0 podman[90026]: 2025-10-11 04:27:02.459009707 +0000 UTC m=+1.360133862 container remove 5c143c6b4e6b730537c01ba1e9dd0865515a4566ed7b1d6de1db284e82a91ae2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 11 04:27:02 compute-0 systemd[1]: libpod-conmon-5c143c6b4e6b730537c01ba1e9dd0865515a4566ed7b1d6de1db284e82a91ae2.scope: Deactivated successfully.
Oct 11 04:27:02 compute-0 python3[90348]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:27:02 compute-0 sudo[89681]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:02 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:27:02 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:02 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:27:02 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:02 compute-0 podman[90362]: 2025-10-11 04:27:02.596866423 +0000 UTC m=+0.063946605 container create 7423e49ec3f3091640c46f18c84171a62e9d5bf16447fc2d1ea433095c6559cc (image=quay.io/ceph/ceph:v18, name=dazzling_noyce, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:27:02 compute-0 systemd[1]: Started libpod-conmon-7423e49ec3f3091640c46f18c84171a62e9d5bf16447fc2d1ea433095c6559cc.scope.
Oct 11 04:27:02 compute-0 sudo[90372]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:27:02 compute-0 sudo[90372]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:02 compute-0 sudo[90372]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:02 compute-0 podman[90362]: 2025-10-11 04:27:02.574141976 +0000 UTC m=+0.041222168 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:27:02 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:27:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75b03ffceb390d6a08a7e6d22732fc110b101548cedfff2d7e241be13d9ff4fc/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75b03ffceb390d6a08a7e6d22732fc110b101548cedfff2d7e241be13d9ff4fc/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:02 compute-0 podman[90362]: 2025-10-11 04:27:02.696134947 +0000 UTC m=+0.163215129 container init 7423e49ec3f3091640c46f18c84171a62e9d5bf16447fc2d1ea433095c6559cc (image=quay.io/ceph/ceph:v18, name=dazzling_noyce, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 11 04:27:02 compute-0 podman[90362]: 2025-10-11 04:27:02.709316916 +0000 UTC m=+0.176397098 container start 7423e49ec3f3091640c46f18c84171a62e9d5bf16447fc2d1ea433095c6559cc (image=quay.io/ceph/ceph:v18, name=dazzling_noyce, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Oct 11 04:27:02 compute-0 podman[90362]: 2025-10-11 04:27:02.71391765 +0000 UTC m=+0.180997832 container attach 7423e49ec3f3091640c46f18c84171a62e9d5bf16447fc2d1ea433095c6559cc (image=quay.io/ceph/ceph:v18, name=dazzling_noyce, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 11 04:27:02 compute-0 sudo[90405]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 04:27:02 compute-0 sudo[90405]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:02 compute-0 sudo[90405]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:02 compute-0 sudo[90431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:27:02 compute-0 sudo[90431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:02 compute-0 sudo[90431]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:02 compute-0 sudo[90456]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:27:02 compute-0 sudo[90456]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:02 compute-0 sudo[90456]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:02 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 13 pg[2.0( empty local-lis/les=0/0 n=0 ec=13/13 lis/c=0/0 les/c/f=0/0/0 sis=13) [0] r=0 lpr=13 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:27:03 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Oct 11 04:27:03 compute-0 ceph-mon[74243]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
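The POOL_APP_NOT_ENABLED health check fires because the freshly created 'vms' pool carries no application tag yet. The usual way to clear it is to tag the pool; a minimal sketch, assuming this OpenStack pool is intended for RBD (the application name is an assumption, not stated anywhere in this log, and the same command applies to any pool the check flags later):

    ceph osd pool application enable vms rbd
    # 'ceph health detail' should drop the POOL_APP_NOT_ENABLED warning once
    # every flagged pool carries an application tag.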
Oct 11 04:27:03 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/1037659463,v1:192.168.122.100:6811/1037659463]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Oct 11 04:27:03 compute-0 ceph-osd[89565]: osd.2 0 done with init, starting boot process
Oct 11 04:27:03 compute-0 ceph-osd[89565]: osd.2 0 start_boot
Oct 11 04:27:03 compute-0 ceph-osd[89565]: osd.2 0 maybe_override_options_for_qos osd_max_backfills set to 1
Oct 11 04:27:03 compute-0 ceph-osd[89565]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Oct 11 04:27:03 compute-0 ceph-osd[89565]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Oct 11 04:27:03 compute-0 ceph-osd[89565]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Oct 11 04:27:03 compute-0 ceph-osd[89565]: osd.2 0  bench count 12288000 bsize 4 KiB
Oct 11 04:27:03 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e14 e14: 3 total, 2 up, 3 in
Oct 11 04:27:03 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e14: 3 total, 2 up, 3 in
Oct 11 04:27:03 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 11 04:27:03 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 11 04:27:03 compute-0 ceph-mgr[74542]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 11 04:27:03 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 14 pg[2.0( empty local-lis/les=0/0 n=0 ec=13/13 lis/c=0/0 les/c/f=0/0/0 sis=14) [] r=-1 lpr=14 pi=[13,14)/0 crt=0'0 mlcod 0'0 unknown mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:27:03 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 14 pg[2.0( empty local-lis/les=0/0 n=0 ec=13/13 lis/c=0/0 les/c/f=0/0/0 sis=14) [] r=-1 lpr=14 pi=[13,14)/0 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 04:27:03 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/2569748566' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 11 04:27:03 compute-0 ceph-mon[74243]: from='osd.2 [v2:192.168.122.100:6810/1037659463,v1:192.168.122.100:6811/1037659463]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Oct 11 04:27:03 compute-0 ceph-mon[74243]: osdmap e13: 3 total, 2 up, 3 in
Oct 11 04:27:03 compute-0 ceph-mon[74243]: from='osd.2 [v2:192.168.122.100:6810/1037659463,v1:192.168.122.100:6811/1037659463]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct 11 04:27:03 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 11 04:27:03 compute-0 ceph-mon[74243]: pgmap v37: 2 pgs: 1 unknown, 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Oct 11 04:27:03 compute-0 ceph-mon[74243]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Oct 11 04:27:03 compute-0 ceph-mon[74243]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Oct 11 04:27:03 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 11 04:27:03 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:03 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:03 compute-0 ceph-mgr[74542]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/1037659463; not ready for session (expect reconnect)
Oct 11 04:27:03 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 11 04:27:03 compute-0 ceph-mgr[74542]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 11 04:27:03 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 11 04:27:03 compute-0 sudo[90481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:27:03 compute-0 sudo[90481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:03 compute-0 sudo[90481]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:03 compute-0 sudo[90525]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 11 04:27:03 compute-0 sudo[90525]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:03 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Oct 11 04:27:03 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2307161255' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 11 04:27:03 compute-0 podman[90623]: 2025-10-11 04:27:03.709176637 +0000 UTC m=+0.086774984 container exec 9e6c9a4e99dcfe74f2acde1b3251aae6bd833ab8c3a16ed08813556f7d4dbff5 (image=quay.io/ceph/ceph:v18, name=ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mon-compute-0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:27:03 compute-0 podman[90623]: 2025-10-11 04:27:03.84369975 +0000 UTC m=+0.221298067 container exec_died 9e6c9a4e99dcfe74f2acde1b3251aae6bd833ab8c3a16ed08813556f7d4dbff5 (image=quay.io/ceph/ceph:v18, name=ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mon-compute-0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:27:04 compute-0 ceph-mgr[74542]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/1037659463; not ready for session (expect reconnect)
Oct 11 04:27:04 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Oct 11 04:27:04 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 11 04:27:04 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 11 04:27:04 compute-0 ceph-mgr[74542]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 11 04:27:04 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v39: 2 pgs: 1 unknown, 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Oct 11 04:27:04 compute-0 ceph-mon[74243]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 11 04:27:04 compute-0 ceph-mon[74243]: from='osd.2 [v2:192.168.122.100:6810/1037659463,v1:192.168.122.100:6811/1037659463]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Oct 11 04:27:04 compute-0 ceph-mon[74243]: osdmap e14: 3 total, 2 up, 3 in
Oct 11 04:27:04 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 11 04:27:04 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 11 04:27:04 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/2307161255' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 11 04:27:04 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2307161255' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 11 04:27:04 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e15 e15: 3 total, 2 up, 3 in
Oct 11 04:27:04 compute-0 dazzling_noyce[90401]: pool 'volumes' created
Oct 11 04:27:04 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e15: 3 total, 2 up, 3 in
Oct 11 04:27:04 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 11 04:27:04 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 11 04:27:04 compute-0 ceph-mgr[74542]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 11 04:27:04 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 15 pg[3.0( empty local-lis/les=0/0 n=0 ec=15/15 lis/c=0/0 les/c/f=0/0/0 sis=15) [1] r=0 lpr=15 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:27:04 compute-0 podman[90362]: 2025-10-11 04:27:04.099638499 +0000 UTC m=+1.566718651 container died 7423e49ec3f3091640c46f18c84171a62e9d5bf16447fc2d1ea433095c6559cc (image=quay.io/ceph/ceph:v18, name=dazzling_noyce, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 11 04:27:04 compute-0 systemd[1]: libpod-7423e49ec3f3091640c46f18c84171a62e9d5bf16447fc2d1ea433095c6559cc.scope: Deactivated successfully.
Oct 11 04:27:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-75b03ffceb390d6a08a7e6d22732fc110b101548cedfff2d7e241be13d9ff4fc-merged.mount: Deactivated successfully.
Oct 11 04:27:04 compute-0 podman[90362]: 2025-10-11 04:27:04.177473169 +0000 UTC m=+1.644553321 container remove 7423e49ec3f3091640c46f18c84171a62e9d5bf16447fc2d1ea433095c6559cc (image=quay.io/ceph/ceph:v18, name=dazzling_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:27:04 compute-0 systemd[1]: libpod-conmon-7423e49ec3f3091640c46f18c84171a62e9d5bf16447fc2d1ea433095c6559cc.scope: Deactivated successfully.
Oct 11 04:27:04 compute-0 sudo[90344]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:04 compute-0 sudo[90762]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-weotkwkfokqovywjkmdjchotcovcpkvl ; /usr/bin/python3'
Oct 11 04:27:04 compute-0 sudo[90762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:27:04 compute-0 sudo[90525]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:04 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:27:04 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:04 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:27:04 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:04 compute-0 python3[90766]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:27:04 compute-0 sudo[90779]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:27:04 compute-0 sudo[90779]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:04 compute-0 sudo[90779]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:04 compute-0 podman[90797]: 2025-10-11 04:27:04.717646233 +0000 UTC m=+0.071198445 container create 8c61cce99ea927495aedfd42b59006c1512f1d23db6b2424dee8d3de57b99547 (image=quay.io/ceph/ceph:v18, name=tender_shirley, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 11 04:27:04 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e15 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:27:04 compute-0 systemd[1]: Started libpod-conmon-8c61cce99ea927495aedfd42b59006c1512f1d23db6b2424dee8d3de57b99547.scope.
Oct 11 04:27:04 compute-0 sudo[90815]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:27:04 compute-0 sudo[90815]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:04 compute-0 podman[90797]: 2025-10-11 04:27:04.688919767 +0000 UTC m=+0.042472029 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:27:04 compute-0 sudo[90815]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:04 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:27:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5fd9643c556fed5c157f535a8a145f2a961caba77ed0832bf049be7ed0557b9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5fd9643c556fed5c157f535a8a145f2a961caba77ed0832bf049be7ed0557b9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:04 compute-0 podman[90797]: 2025-10-11 04:27:04.832024844 +0000 UTC m=+0.185577146 container init 8c61cce99ea927495aedfd42b59006c1512f1d23db6b2424dee8d3de57b99547 (image=quay.io/ceph/ceph:v18, name=tender_shirley, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:27:04 compute-0 podman[90797]: 2025-10-11 04:27:04.839972022 +0000 UTC m=+0.193524234 container start 8c61cce99ea927495aedfd42b59006c1512f1d23db6b2424dee8d3de57b99547 (image=quay.io/ceph/ceph:v18, name=tender_shirley, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 11 04:27:04 compute-0 podman[90797]: 2025-10-11 04:27:04.850592427 +0000 UTC m=+0.204144689 container attach 8c61cce99ea927495aedfd42b59006c1512f1d23db6b2424dee8d3de57b99547 (image=quay.io/ceph/ceph:v18, name=tender_shirley, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:27:04 compute-0 sudo[90846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:27:04 compute-0 sudo[90846]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:04 compute-0 sudo[90846]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:04 compute-0 sudo[90872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -- inventory --format=json-pretty --filter-for-batch
Oct 11 04:27:04 compute-0 sudo[90872]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:05 compute-0 ceph-mgr[74542]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/1037659463; not ready for session (expect reconnect)
Oct 11 04:27:05 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 11 04:27:05 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 11 04:27:05 compute-0 ceph-mgr[74542]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 11 04:27:05 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.phooxi(active, since 69s)
Oct 11 04:27:05 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Oct 11 04:27:05 compute-0 ceph-mon[74243]: purged_snaps scrub starts
Oct 11 04:27:05 compute-0 ceph-mon[74243]: purged_snaps scrub ok
Oct 11 04:27:05 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 11 04:27:05 compute-0 ceph-mon[74243]: pgmap v39: 2 pgs: 1 unknown, 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Oct 11 04:27:05 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/2307161255' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 11 04:27:05 compute-0 ceph-mon[74243]: osdmap e15: 3 total, 2 up, 3 in
Oct 11 04:27:05 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 11 04:27:05 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:05 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:05 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 11 04:27:05 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e16 e16: 3 total, 2 up, 3 in
Oct 11 04:27:05 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e16: 3 total, 2 up, 3 in
Oct 11 04:27:05 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 11 04:27:05 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 11 04:27:05 compute-0 ceph-mgr[74542]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 11 04:27:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 16 pg[3.0( empty local-lis/les=15/16 n=0 ec=15/15 lis/c=0/0 les/c/f=0/0/0 sis=15) [1] r=0 lpr=15 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:27:05 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Oct 11 04:27:05 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2993012384' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 11 04:27:05 compute-0 podman[90958]: 2025-10-11 04:27:05.445844104 +0000 UTC m=+0.070544160 container create 861a4125452bbae8e5d77ecf71836471325093ca37f1f971f2e1a5658e28ca77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_benz, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:27:05 compute-0 systemd[1]: Started libpod-conmon-861a4125452bbae8e5d77ecf71836471325093ca37f1f971f2e1a5658e28ca77.scope.
Oct 11 04:27:05 compute-0 podman[90958]: 2025-10-11 04:27:05.412852881 +0000 UTC m=+0.037553017 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:27:05 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:27:05 compute-0 podman[90958]: 2025-10-11 04:27:05.560107312 +0000 UTC m=+0.184807458 container init 861a4125452bbae8e5d77ecf71836471325093ca37f1f971f2e1a5658e28ca77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_benz, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:27:05 compute-0 podman[90958]: 2025-10-11 04:27:05.567381373 +0000 UTC m=+0.192081439 container start 861a4125452bbae8e5d77ecf71836471325093ca37f1f971f2e1a5658e28ca77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_benz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 11 04:27:05 compute-0 tender_benz[90978]: 167 167
Oct 11 04:27:05 compute-0 systemd[1]: libpod-861a4125452bbae8e5d77ecf71836471325093ca37f1f971f2e1a5658e28ca77.scope: Deactivated successfully.
Oct 11 04:27:05 compute-0 podman[90958]: 2025-10-11 04:27:05.588899189 +0000 UTC m=+0.213599335 container attach 861a4125452bbae8e5d77ecf71836471325093ca37f1f971f2e1a5658e28ca77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_benz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 11 04:27:05 compute-0 podman[90958]: 2025-10-11 04:27:05.589265538 +0000 UTC m=+0.213965634 container died 861a4125452bbae8e5d77ecf71836471325093ca37f1f971f2e1a5658e28ca77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 11 04:27:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-5061b1283fe81af2316ebb074a4f00171908053d0cd03e7171f173e183421b11-merged.mount: Deactivated successfully.
Oct 11 04:27:05 compute-0 podman[90958]: 2025-10-11 04:27:05.674572464 +0000 UTC m=+0.299272560 container remove 861a4125452bbae8e5d77ecf71836471325093ca37f1f971f2e1a5658e28ca77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_benz, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 11 04:27:05 compute-0 systemd[1]: libpod-conmon-861a4125452bbae8e5d77ecf71836471325093ca37f1f971f2e1a5658e28ca77.scope: Deactivated successfully.
Oct 11 04:27:05 compute-0 podman[91002]: 2025-10-11 04:27:05.870170959 +0000 UTC m=+0.047260309 container create acc9b04a41c42e2d99838cdb2f42f1dac43fb79c5d6d69f53006e2d0402469b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_borg, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:27:05 compute-0 systemd[1]: Started libpod-conmon-acc9b04a41c42e2d99838cdb2f42f1dac43fb79c5d6d69f53006e2d0402469b8.scope.
Oct 11 04:27:05 compute-0 podman[91002]: 2025-10-11 04:27:05.845400972 +0000 UTC m=+0.022490342 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:27:05 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:27:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4658321ac7d223ae486f07f70115910fd68f4729f26117e746f164fb5678e77b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4658321ac7d223ae486f07f70115910fd68f4729f26117e746f164fb5678e77b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4658321ac7d223ae486f07f70115910fd68f4729f26117e746f164fb5678e77b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4658321ac7d223ae486f07f70115910fd68f4729f26117e746f164fb5678e77b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:05 compute-0 podman[91002]: 2025-10-11 04:27:05.978773676 +0000 UTC m=+0.155863076 container init acc9b04a41c42e2d99838cdb2f42f1dac43fb79c5d6d69f53006e2d0402469b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_borg, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:27:05 compute-0 podman[91002]: 2025-10-11 04:27:05.993052132 +0000 UTC m=+0.170141502 container start acc9b04a41c42e2d99838cdb2f42f1dac43fb79c5d6d69f53006e2d0402469b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_borg, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:27:06 compute-0 podman[91002]: 2025-10-11 04:27:06.00141752 +0000 UTC m=+0.178506960 container attach acc9b04a41c42e2d99838cdb2f42f1dac43fb79c5d6d69f53006e2d0402469b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_borg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:27:06 compute-0 ceph-mgr[74542]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/1037659463; not ready for session (expect reconnect)
Oct 11 04:27:06 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 11 04:27:06 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 11 04:27:06 compute-0 ceph-mgr[74542]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 11 04:27:06 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v42: 3 pgs: 2 unknown, 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Oct 11 04:27:06 compute-0 ceph-mon[74243]: mgrmap e9: compute-0.phooxi(active, since 69s)
Oct 11 04:27:06 compute-0 ceph-mon[74243]: osdmap e16: 3 total, 2 up, 3 in
Oct 11 04:27:06 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 11 04:27:06 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/2993012384' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 11 04:27:06 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 11 04:27:06 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Oct 11 04:27:06 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2993012384' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 11 04:27:06 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e17 e17: 3 total, 2 up, 3 in
Oct 11 04:27:06 compute-0 tender_shirley[90842]: pool 'backups' created
Oct 11 04:27:06 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e17: 3 total, 2 up, 3 in
Oct 11 04:27:06 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 11 04:27:06 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 11 04:27:06 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 17 pg[4.0( empty local-lis/les=0/0 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [0] r=0 lpr=17 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:27:06 compute-0 ceph-mgr[74542]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 11 04:27:06 compute-0 ceph-osd[89565]: osd.2 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 27.901 iops: 7142.671 elapsed_sec: 0.420
Oct 11 04:27:06 compute-0 ceph-osd[89565]: log_channel(cluster) log [WRN] : OSD bench result of 7142.670753 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct 11 04:27:06 compute-0 ceph-osd[89565]: osd.2 0 waiting for initial osdmap
Oct 11 04:27:06 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-osd-2[89560]: 2025-10-11T04:27:06.148+0000 7f2f55634640 -1 osd.2 0 waiting for initial osdmap
Oct 11 04:27:06 compute-0 systemd[1]: libpod-8c61cce99ea927495aedfd42b59006c1512f1d23db6b2424dee8d3de57b99547.scope: Deactivated successfully.
Oct 11 04:27:06 compute-0 podman[90797]: 2025-10-11 04:27:06.158688621 +0000 UTC m=+1.512240853 container died 8c61cce99ea927495aedfd42b59006c1512f1d23db6b2424dee8d3de57b99547 (image=quay.io/ceph/ceph:v18, name=tender_shirley, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True)
Oct 11 04:27:06 compute-0 ceph-osd[89565]: osd.2 17 crush map has features 288514051259236352, adjusting msgr requires for clients
Oct 11 04:27:06 compute-0 ceph-osd[89565]: osd.2 17 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Oct 11 04:27:06 compute-0 ceph-osd[89565]: osd.2 17 crush map has features 3314933000852226048, adjusting msgr requires for osds
Oct 11 04:27:06 compute-0 ceph-osd[89565]: osd.2 17 check_osdmap_features require_osd_release unknown -> reef
Oct 11 04:27:06 compute-0 ceph-osd[89565]: osd.2 17 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct 11 04:27:06 compute-0 ceph-osd[89565]: osd.2 17 set_numa_affinity not setting numa affinity
Oct 11 04:27:06 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-osd-2[89560]: 2025-10-11T04:27:06.181+0000 7f2f50c5c640 -1 osd.2 17 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct 11 04:27:06 compute-0 ceph-osd[89565]: osd.2 17 _collect_metadata loop5:  no unique device id for loop5: fallback method has no model nor serial
Oct 11 04:27:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-e5fd9643c556fed5c157f535a8a145f2a961caba77ed0832bf049be7ed0557b9-merged.mount: Deactivated successfully.
Oct 11 04:27:06 compute-0 podman[90797]: 2025-10-11 04:27:06.210632565 +0000 UTC m=+1.564184817 container remove 8c61cce99ea927495aedfd42b59006c1512f1d23db6b2424dee8d3de57b99547 (image=quay.io/ceph/ceph:v18, name=tender_shirley, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 04:27:06 compute-0 systemd[1]: libpod-conmon-8c61cce99ea927495aedfd42b59006c1512f1d23db6b2424dee8d3de57b99547.scope: Deactivated successfully.
Oct 11 04:27:06 compute-0 sudo[90762]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:06 compute-0 sudo[91062]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hohuhzpbhwhqjguzyopisqvpjzklybnj ; /usr/bin/python3'
Oct 11 04:27:06 compute-0 sudo[91062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:27:06 compute-0 python3[91064]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:27:06 compute-0 podman[91065]: 2025-10-11 04:27:06.626505211 +0000 UTC m=+0.047770922 container create aeae9e2753f109b6a87f8919ef80edaee0239c4abac48bf2d67132884960e922 (image=quay.io/ceph/ceph:v18, name=trusting_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True)
Oct 11 04:27:06 compute-0 systemd[1]: Started libpod-conmon-aeae9e2753f109b6a87f8919ef80edaee0239c4abac48bf2d67132884960e922.scope.
Oct 11 04:27:06 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:27:06 compute-0 podman[91065]: 2025-10-11 04:27:06.60399175 +0000 UTC m=+0.025257491 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:27:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/def6e07f6bc3b99b1ea0a63253c6aef2610ff4dc708852c7e7c3338871bf0ebf/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/def6e07f6bc3b99b1ea0a63253c6aef2610ff4dc708852c7e7c3338871bf0ebf/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:06 compute-0 podman[91065]: 2025-10-11 04:27:06.728024391 +0000 UTC m=+0.149290172 container init aeae9e2753f109b6a87f8919ef80edaee0239c4abac48bf2d67132884960e922 (image=quay.io/ceph/ceph:v18, name=trusting_liskov, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 11 04:27:06 compute-0 podman[91065]: 2025-10-11 04:27:06.740877652 +0000 UTC m=+0.162143383 container start aeae9e2753f109b6a87f8919ef80edaee0239c4abac48bf2d67132884960e922 (image=quay.io/ceph/ceph:v18, name=trusting_liskov, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 11 04:27:06 compute-0 podman[91065]: 2025-10-11 04:27:06.745866606 +0000 UTC m=+0.167132337 container attach aeae9e2753f109b6a87f8919ef80edaee0239c4abac48bf2d67132884960e922 (image=quay.io/ceph/ceph:v18, name=trusting_liskov, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:27:07 compute-0 ceph-mgr[74542]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/1037659463; not ready for session (expect reconnect)
Oct 11 04:27:07 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 11 04:27:07 compute-0 ceph-mgr[74542]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 11 04:27:07 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 11 04:27:07 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Oct 11 04:27:07 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e18 e18: 3 total, 3 up, 3 in
Oct 11 04:27:07 compute-0 ceph-mon[74243]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.100:6810/1037659463,v1:192.168.122.100:6811/1037659463] boot
Oct 11 04:27:07 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e18: 3 total, 3 up, 3 in
Oct 11 04:27:07 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 11 04:27:07 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 11 04:27:07 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 18 pg[2.0( empty local-lis/les=0/0 n=0 ec=13/13 lis/c=0/0 les/c/f=0/0/0 sis=18) [2] r=-1 lpr=18 pi=[13,18)/0 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:27:07 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 18 pg[2.0( empty local-lis/les=0/0 n=0 ec=13/13 lis/c=0/0 les/c/f=0/0/0 sis=18) [2] r=-1 lpr=18 pi=[13,18)/0 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 04:27:07 compute-0 ceph-mon[74243]: pgmap v42: 3 pgs: 2 unknown, 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Oct 11 04:27:07 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/2993012384' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 11 04:27:07 compute-0 ceph-mon[74243]: osdmap e17: 3 total, 2 up, 3 in
Oct 11 04:27:07 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 11 04:27:07 compute-0 ceph-mon[74243]: OSD bench result of 7142.670753 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct 11 04:27:07 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 11 04:27:07 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 18 pg[4.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [0] r=0 lpr=17 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:27:07 compute-0 ceph-osd[89565]: osd.2 18 state: booting -> active
Oct 11 04:27:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 18 pg[2.0( empty local-lis/les=0/0 n=0 ec=13/13 lis/c=0/0 les/c/f=0/0/0 sis=18) [2] r=0 lpr=18 pi=[13,18)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:27:07 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Oct 11 04:27:07 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4051180746' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 11 04:27:07 compute-0 sweet_borg[91019]: [
Oct 11 04:27:07 compute-0 sweet_borg[91019]:     {
Oct 11 04:27:07 compute-0 sweet_borg[91019]:         "available": false,
Oct 11 04:27:07 compute-0 sweet_borg[91019]:         "ceph_device": false,
Oct 11 04:27:07 compute-0 sweet_borg[91019]:         "device_id": "QEMU_DVD-ROM_QM00001",
Oct 11 04:27:07 compute-0 sweet_borg[91019]:         "lsm_data": {},
Oct 11 04:27:07 compute-0 sweet_borg[91019]:         "lvs": [],
Oct 11 04:27:07 compute-0 sweet_borg[91019]:         "path": "/dev/sr0",
Oct 11 04:27:07 compute-0 sweet_borg[91019]:         "rejected_reasons": [
Oct 11 04:27:07 compute-0 sweet_borg[91019]:             "Insufficient space (<5GB)",
Oct 11 04:27:07 compute-0 sweet_borg[91019]:             "Has a FileSystem"
Oct 11 04:27:07 compute-0 sweet_borg[91019]:         ],
Oct 11 04:27:07 compute-0 sweet_borg[91019]:         "sys_api": {
Oct 11 04:27:07 compute-0 sweet_borg[91019]:             "actuators": null,
Oct 11 04:27:07 compute-0 sweet_borg[91019]:             "device_nodes": "sr0",
Oct 11 04:27:07 compute-0 sweet_borg[91019]:             "devname": "sr0",
Oct 11 04:27:07 compute-0 sweet_borg[91019]:             "human_readable_size": "482.00 KB",
Oct 11 04:27:07 compute-0 sweet_borg[91019]:             "id_bus": "ata",
Oct 11 04:27:07 compute-0 sweet_borg[91019]:             "model": "QEMU DVD-ROM",
Oct 11 04:27:07 compute-0 sweet_borg[91019]:             "nr_requests": "2",
Oct 11 04:27:07 compute-0 sweet_borg[91019]:             "parent": "/dev/sr0",
Oct 11 04:27:07 compute-0 sweet_borg[91019]:             "partitions": {},
Oct 11 04:27:07 compute-0 sweet_borg[91019]:             "path": "/dev/sr0",
Oct 11 04:27:07 compute-0 sweet_borg[91019]:             "removable": "1",
Oct 11 04:27:07 compute-0 sweet_borg[91019]:             "rev": "2.5+",
Oct 11 04:27:07 compute-0 sweet_borg[91019]:             "ro": "0",
Oct 11 04:27:07 compute-0 sweet_borg[91019]:             "rotational": "0",
Oct 11 04:27:07 compute-0 sweet_borg[91019]:             "sas_address": "",
Oct 11 04:27:07 compute-0 sweet_borg[91019]:             "sas_device_handle": "",
Oct 11 04:27:07 compute-0 sweet_borg[91019]:             "scheduler_mode": "mq-deadline",
Oct 11 04:27:07 compute-0 sweet_borg[91019]:             "sectors": 0,
Oct 11 04:27:07 compute-0 sweet_borg[91019]:             "sectorsize": "2048",
Oct 11 04:27:07 compute-0 sweet_borg[91019]:             "size": 493568.0,
Oct 11 04:27:07 compute-0 sweet_borg[91019]:             "support_discard": "2048",
Oct 11 04:27:07 compute-0 sweet_borg[91019]:             "type": "disk",
Oct 11 04:27:07 compute-0 sweet_borg[91019]:             "vendor": "QEMU"
Oct 11 04:27:07 compute-0 sweet_borg[91019]:         }
Oct 11 04:27:07 compute-0 sweet_borg[91019]:     }
Oct 11 04:27:07 compute-0 sweet_borg[91019]: ]
Oct 11 04:27:07 compute-0 systemd[1]: libpod-acc9b04a41c42e2d99838cdb2f42f1dac43fb79c5d6d69f53006e2d0402469b8.scope: Deactivated successfully.
Oct 11 04:27:07 compute-0 podman[91002]: 2025-10-11 04:27:07.535640111 +0000 UTC m=+1.712729471 container died acc9b04a41c42e2d99838cdb2f42f1dac43fb79c5d6d69f53006e2d0402469b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_borg, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 11 04:27:07 compute-0 systemd[1]: libpod-acc9b04a41c42e2d99838cdb2f42f1dac43fb79c5d6d69f53006e2d0402469b8.scope: Consumed 1.507s CPU time.
Oct 11 04:27:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-4658321ac7d223ae486f07f70115910fd68f4729f26117e746f164fb5678e77b-merged.mount: Deactivated successfully.
Oct 11 04:27:07 compute-0 podman[91002]: 2025-10-11 04:27:07.593610706 +0000 UTC m=+1.770700066 container remove acc9b04a41c42e2d99838cdb2f42f1dac43fb79c5d6d69f53006e2d0402469b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_borg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:27:07 compute-0 systemd[1]: libpod-conmon-acc9b04a41c42e2d99838cdb2f42f1dac43fb79c5d6d69f53006e2d0402469b8.scope: Deactivated successfully.
Oct 11 04:27:07 compute-0 sudo[90872]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:07 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:27:07 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:07 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:27:07 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:07 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0) v1
Oct 11 04:27:07 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Oct 11 04:27:07 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0) v1
Oct 11 04:27:07 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Oct 11 04:27:07 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0) v1
Oct 11 04:27:07 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Oct 11 04:27:07 compute-0 ceph-mgr[74542]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 43696k
Oct 11 04:27:07 compute-0 ceph-mgr[74542]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 43696k
Oct 11 04:27:07 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) v1
Oct 11 04:27:07 compute-0 ceph-mgr[74542]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 44744977: error parsing value: Value '44744977' is below minimum 939524096
Oct 11 04:27:07 compute-0 ceph-mgr[74542]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 44744977: error parsing value: Value '44744977' is below minimum 939524096
Oct 11 04:27:07 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:27:07 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:27:07 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:27:07 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:27:07 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:27:07 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:07 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev ebc514eb-40a1-4908-9c30-bc53db869831 does not exist
Oct 11 04:27:07 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev 47a4faed-9f3b-4808-bda1-5614b0c6ebcf does not exist
Oct 11 04:27:07 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev a06c51ce-bdca-4999-9e39-1a8d13c4446f does not exist
Oct 11 04:27:07 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:27:07 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:27:07 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:27:07 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:27:07 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:27:07 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:27:07 compute-0 sudo[92743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:27:07 compute-0 sudo[92743]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:07 compute-0 sudo[92743]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:07 compute-0 sudo[92768]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:27:07 compute-0 sudo[92768]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:07 compute-0 sudo[92768]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:07 compute-0 sudo[92793]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:27:07 compute-0 sudo[92793]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:07 compute-0 sudo[92793]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:08 compute-0 sudo[92818]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 04:27:08 compute-0 sudo[92818]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:08 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v45: 4 pgs: 2 creating+peering, 2 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:27:08 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Oct 11 04:27:08 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4051180746' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 11 04:27:08 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e19 e19: 3 total, 3 up, 3 in
Oct 11 04:27:08 compute-0 trusting_liskov[91080]: pool 'images' created
Oct 11 04:27:08 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e19: 3 total, 3 up, 3 in
Oct 11 04:27:08 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 19 pg[5.0( empty local-lis/les=0/0 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [2] r=0 lpr=19 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:27:08 compute-0 ceph-mon[74243]: osd.2 [v2:192.168.122.100:6810/1037659463,v1:192.168.122.100:6811/1037659463] boot
Oct 11 04:27:08 compute-0 ceph-mon[74243]: osdmap e18: 3 total, 3 up, 3 in
Oct 11 04:27:08 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 11 04:27:08 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/4051180746' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 11 04:27:08 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:08 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:08 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Oct 11 04:27:08 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Oct 11 04:27:08 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Oct 11 04:27:08 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:27:08 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:27:08 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:08 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:27:08 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:27:08 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:27:08 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 19 pg[2.0( empty local-lis/les=18/19 n=0 ec=13/13 lis/c=0/0 les/c/f=0/0/0 sis=18) [2] r=0 lpr=18 pi=[13,18)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:27:08 compute-0 systemd[1]: libpod-aeae9e2753f109b6a87f8919ef80edaee0239c4abac48bf2d67132884960e922.scope: Deactivated successfully.
Oct 11 04:27:08 compute-0 podman[91065]: 2025-10-11 04:27:08.198604195 +0000 UTC m=+1.619869956 container died aeae9e2753f109b6a87f8919ef80edaee0239c4abac48bf2d67132884960e922 (image=quay.io/ceph/ceph:v18, name=trusting_liskov, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:27:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-def6e07f6bc3b99b1ea0a63253c6aef2610ff4dc708852c7e7c3338871bf0ebf-merged.mount: Deactivated successfully.
Oct 11 04:27:08 compute-0 podman[91065]: 2025-10-11 04:27:08.273292067 +0000 UTC m=+1.694557798 container remove aeae9e2753f109b6a87f8919ef80edaee0239c4abac48bf2d67132884960e922 (image=quay.io/ceph/ceph:v18, name=trusting_liskov, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 11 04:27:08 compute-0 systemd[1]: libpod-conmon-aeae9e2753f109b6a87f8919ef80edaee0239c4abac48bf2d67132884960e922.scope: Deactivated successfully.
Oct 11 04:27:08 compute-0 sudo[91062]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:08 compute-0 podman[92895]: 2025-10-11 04:27:08.46075839 +0000 UTC m=+0.056062019 container create 566c0956c625e36988d771ee9ee3b68dff7db2b1fde36aa1e56634cd7c776725 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 11 04:27:08 compute-0 sudo[92930]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwobfmywlgsgjpmkfyrbyqqisjbgqefu ; /usr/bin/python3'
Oct 11 04:27:08 compute-0 sudo[92930]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:27:08 compute-0 systemd[1]: Started libpod-conmon-566c0956c625e36988d771ee9ee3b68dff7db2b1fde36aa1e56634cd7c776725.scope.
Oct 11 04:27:08 compute-0 podman[92895]: 2025-10-11 04:27:08.432119326 +0000 UTC m=+0.027423005 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:27:08 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:27:08 compute-0 podman[92895]: 2025-10-11 04:27:08.549318587 +0000 UTC m=+0.144622216 container init 566c0956c625e36988d771ee9ee3b68dff7db2b1fde36aa1e56634cd7c776725 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 11 04:27:08 compute-0 podman[92895]: 2025-10-11 04:27:08.558043934 +0000 UTC m=+0.153347533 container start 566c0956c625e36988d771ee9ee3b68dff7db2b1fde36aa1e56634cd7c776725 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_kalam, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:27:08 compute-0 podman[92895]: 2025-10-11 04:27:08.562771922 +0000 UTC m=+0.158075621 container attach 566c0956c625e36988d771ee9ee3b68dff7db2b1fde36aa1e56634cd7c776725 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_kalam, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 11 04:27:08 compute-0 eloquent_kalam[92937]: 167 167
Oct 11 04:27:08 compute-0 systemd[1]: libpod-566c0956c625e36988d771ee9ee3b68dff7db2b1fde36aa1e56634cd7c776725.scope: Deactivated successfully.
Oct 11 04:27:08 compute-0 podman[92895]: 2025-10-11 04:27:08.564940326 +0000 UTC m=+0.160243955 container died 566c0956c625e36988d771ee9ee3b68dff7db2b1fde36aa1e56634cd7c776725 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 11 04:27:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-f5afcc8a6a7a63f83f12568beddc219c1cfb21deebfc084cd4ff4cc89dceeaab-merged.mount: Deactivated successfully.
Oct 11 04:27:08 compute-0 podman[92895]: 2025-10-11 04:27:08.618203904 +0000 UTC m=+0.213507523 container remove 566c0956c625e36988d771ee9ee3b68dff7db2b1fde36aa1e56634cd7c776725 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_kalam, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:27:08 compute-0 systemd[1]: libpod-conmon-566c0956c625e36988d771ee9ee3b68dff7db2b1fde36aa1e56634cd7c776725.scope: Deactivated successfully.
Oct 11 04:27:08 compute-0 ceph-mon[74243]: log_channel(cluster) log [WRN] : Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 11 04:27:08 compute-0 python3[92936]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:27:08 compute-0 podman[92957]: 2025-10-11 04:27:08.767992367 +0000 UTC m=+0.061379360 container create 53ebcbe0d3b59e3ce2b864d74de2a57a44b6fd9e205b8e8ba12bdbfc253cf2da (image=quay.io/ceph/ceph:v18, name=epic_lederberg, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 11 04:27:08 compute-0 systemd[1]: Started libpod-conmon-53ebcbe0d3b59e3ce2b864d74de2a57a44b6fd9e205b8e8ba12bdbfc253cf2da.scope.
Oct 11 04:27:08 compute-0 podman[92957]: 2025-10-11 04:27:08.750952003 +0000 UTC m=+0.044338976 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:27:08 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:27:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23949e7f38b6ac2d0df2f8de1ab02de4ab2f0c51432d49a98d779c3206e8a4e0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23949e7f38b6ac2d0df2f8de1ab02de4ab2f0c51432d49a98d779c3206e8a4e0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:08 compute-0 podman[92976]: 2025-10-11 04:27:08.88042416 +0000 UTC m=+0.084779744 container create 0a5a8c22e0e67df48792968501c4c036685302e3ed3ed556d82c789048814564 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_meninsky, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:27:08 compute-0 podman[92957]: 2025-10-11 04:27:08.901064484 +0000 UTC m=+0.194451557 container init 53ebcbe0d3b59e3ce2b864d74de2a57a44b6fd9e205b8e8ba12bdbfc253cf2da (image=quay.io/ceph/ceph:v18, name=epic_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 11 04:27:08 compute-0 podman[92957]: 2025-10-11 04:27:08.91211206 +0000 UTC m=+0.205499053 container start 53ebcbe0d3b59e3ce2b864d74de2a57a44b6fd9e205b8e8ba12bdbfc253cf2da (image=quay.io/ceph/ceph:v18, name=epic_lederberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:27:08 compute-0 podman[92957]: 2025-10-11 04:27:08.916373116 +0000 UTC m=+0.209760179 container attach 53ebcbe0d3b59e3ce2b864d74de2a57a44b6fd9e205b8e8ba12bdbfc253cf2da (image=quay.io/ceph/ceph:v18, name=epic_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 11 04:27:08 compute-0 systemd[1]: Started libpod-conmon-0a5a8c22e0e67df48792968501c4c036685302e3ed3ed556d82c789048814564.scope.
Oct 11 04:27:08 compute-0 podman[92976]: 2025-10-11 04:27:08.844112085 +0000 UTC m=+0.048467679 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:27:08 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:27:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86488cd79c2d2908783be4d60d30bee725dc33bea0279325c88ed415313e0516/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86488cd79c2d2908783be4d60d30bee725dc33bea0279325c88ed415313e0516/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86488cd79c2d2908783be4d60d30bee725dc33bea0279325c88ed415313e0516/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86488cd79c2d2908783be4d60d30bee725dc33bea0279325c88ed415313e0516/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86488cd79c2d2908783be4d60d30bee725dc33bea0279325c88ed415313e0516/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:08 compute-0 podman[92976]: 2025-10-11 04:27:08.987682823 +0000 UTC m=+0.192038467 container init 0a5a8c22e0e67df48792968501c4c036685302e3ed3ed556d82c789048814564 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 11 04:27:09 compute-0 podman[92976]: 2025-10-11 04:27:09.00038378 +0000 UTC m=+0.204739404 container start 0a5a8c22e0e67df48792968501c4c036685302e3ed3ed556d82c789048814564 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_meninsky, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:27:09 compute-0 podman[92976]: 2025-10-11 04:27:09.004855431 +0000 UTC m=+0.209211225 container attach 0a5a8c22e0e67df48792968501c4c036685302e3ed3ed556d82c789048814564 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_meninsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:27:09 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Oct 11 04:27:09 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e20 e20: 3 total, 3 up, 3 in
Oct 11 04:27:09 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e20: 3 total, 3 up, 3 in
Oct 11 04:27:09 compute-0 ceph-mon[74243]: Adjusting osd_memory_target on compute-0 to 43696k
Oct 11 04:27:09 compute-0 ceph-mon[74243]: Unable to set osd_memory_target on compute-0 to 44744977: error parsing value: Value '44744977' is below minimum 939524096
Oct 11 04:27:09 compute-0 ceph-mon[74243]: pgmap v45: 4 pgs: 2 creating+peering, 2 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:27:09 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/4051180746' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 11 04:27:09 compute-0 ceph-mon[74243]: osdmap e19: 3 total, 3 up, 3 in
Oct 11 04:27:09 compute-0 ceph-mon[74243]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 11 04:27:09 compute-0 ceph-mon[74243]: osdmap e20: 3 total, 3 up, 3 in
Oct 11 04:27:09 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 20 pg[5.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [2] r=0 lpr=19 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:27:09 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Oct 11 04:27:09 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2664411122' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 11 04:27:09 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e20 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:27:10 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v48: 5 pgs: 1 unknown, 2 creating+peering, 2 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:27:10 compute-0 pedantic_meninsky[92998]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:27:10 compute-0 pedantic_meninsky[92998]: --> relative data size: 1.0
Oct 11 04:27:10 compute-0 pedantic_meninsky[92998]: --> All data devices are unavailable
Oct 11 04:27:10 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Oct 11 04:27:10 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2664411122' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 11 04:27:10 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e21 e21: 3 total, 3 up, 3 in
Oct 11 04:27:10 compute-0 epic_lederberg[92989]: pool 'cephfs.cephfs.meta' created
Oct 11 04:27:10 compute-0 systemd[1]: libpod-0a5a8c22e0e67df48792968501c4c036685302e3ed3ed556d82c789048814564.scope: Deactivated successfully.
Oct 11 04:27:10 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e21: 3 total, 3 up, 3 in
Oct 11 04:27:10 compute-0 systemd[1]: libpod-0a5a8c22e0e67df48792968501c4c036685302e3ed3ed556d82c789048814564.scope: Consumed 1.148s CPU time.
Oct 11 04:27:10 compute-0 podman[92976]: 2025-10-11 04:27:10.199125357 +0000 UTC m=+1.403480991 container died 0a5a8c22e0e67df48792968501c4c036685302e3ed3ed556d82c789048814564 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_meninsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:27:10 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/2664411122' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 11 04:27:10 compute-0 systemd[1]: libpod-53ebcbe0d3b59e3ce2b864d74de2a57a44b6fd9e205b8e8ba12bdbfc253cf2da.scope: Deactivated successfully.
Oct 11 04:27:10 compute-0 podman[92957]: 2025-10-11 04:27:10.223675199 +0000 UTC m=+1.517062212 container died 53ebcbe0d3b59e3ce2b864d74de2a57a44b6fd9e205b8e8ba12bdbfc253cf2da (image=quay.io/ceph/ceph:v18, name=epic_lederberg, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 11 04:27:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-86488cd79c2d2908783be4d60d30bee725dc33bea0279325c88ed415313e0516-merged.mount: Deactivated successfully.
Oct 11 04:27:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-23949e7f38b6ac2d0df2f8de1ab02de4ab2f0c51432d49a98d779c3206e8a4e0-merged.mount: Deactivated successfully.
Oct 11 04:27:10 compute-0 podman[92957]: 2025-10-11 04:27:10.293708175 +0000 UTC m=+1.587095158 container remove 53ebcbe0d3b59e3ce2b864d74de2a57a44b6fd9e205b8e8ba12bdbfc253cf2da (image=quay.io/ceph/ceph:v18, name=epic_lederberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:27:10 compute-0 podman[92976]: 2025-10-11 04:27:10.302550625 +0000 UTC m=+1.506906209 container remove 0a5a8c22e0e67df48792968501c4c036685302e3ed3ed556d82c789048814564 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_meninsky, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:27:10 compute-0 systemd[1]: libpod-conmon-53ebcbe0d3b59e3ce2b864d74de2a57a44b6fd9e205b8e8ba12bdbfc253cf2da.scope: Deactivated successfully.
Oct 11 04:27:10 compute-0 sudo[92930]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:10 compute-0 systemd[1]: libpod-conmon-0a5a8c22e0e67df48792968501c4c036685302e3ed3ed556d82c789048814564.scope: Deactivated successfully.
Oct 11 04:27:10 compute-0 sudo[92818]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:10 compute-0 sudo[93073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:27:10 compute-0 sudo[93073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:10 compute-0 sudo[93073]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:10 compute-0 sudo[93120]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fstrgdqxkmftqbprohnlaqovzkcsqprd ; /usr/bin/python3'
Oct 11 04:27:10 compute-0 sudo[93120]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:27:10 compute-0 sudo[93123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:27:10 compute-0 sudo[93123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:10 compute-0 sudo[93123]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:10 compute-0 python3[93126]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:27:10 compute-0 sudo[93149]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:27:10 compute-0 sudo[93149]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:10 compute-0 sudo[93149]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:10 compute-0 podman[93172]: 2025-10-11 04:27:10.695711455 +0000 UTC m=+0.069512884 container create 55775d67bcea61ba95fa76c4ceb0424a8bb4a7d86c4addf354010a18aab88ca3 (image=quay.io/ceph/ceph:v18, name=crazy_cerf, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 11 04:27:10 compute-0 sudo[93180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -- lvm list --format json
Oct 11 04:27:10 compute-0 sudo[93180]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:10 compute-0 systemd[1]: Started libpod-conmon-55775d67bcea61ba95fa76c4ceb0424a8bb4a7d86c4addf354010a18aab88ca3.scope.
Oct 11 04:27:10 compute-0 podman[93172]: 2025-10-11 04:27:10.672107617 +0000 UTC m=+0.045909046 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:27:10 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:27:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b90f1479475a19039c1c95809ff412c225b13226ca69cfb3929715901e844010/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b90f1479475a19039c1c95809ff412c225b13226ca69cfb3929715901e844010/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:10 compute-0 podman[93172]: 2025-10-11 04:27:10.805091861 +0000 UTC m=+0.178893310 container init 55775d67bcea61ba95fa76c4ceb0424a8bb4a7d86c4addf354010a18aab88ca3 (image=quay.io/ceph/ceph:v18, name=crazy_cerf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3)
Oct 11 04:27:10 compute-0 podman[93172]: 2025-10-11 04:27:10.813130622 +0000 UTC m=+0.186932061 container start 55775d67bcea61ba95fa76c4ceb0424a8bb4a7d86c4addf354010a18aab88ca3 (image=quay.io/ceph/ceph:v18, name=crazy_cerf, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:27:10 compute-0 podman[93172]: 2025-10-11 04:27:10.817384568 +0000 UTC m=+0.191185997 container attach 55775d67bcea61ba95fa76c4ceb0424a8bb4a7d86c4addf354010a18aab88ca3 (image=quay.io/ceph/ceph:v18, name=crazy_cerf, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:27:11 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 21 pg[6.0( empty local-lis/les=0/0 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [0] r=0 lpr=21 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:27:11 compute-0 podman[93255]: 2025-10-11 04:27:11.126174484 +0000 UTC m=+0.056371866 container create ad9888e6340a11d3d7a851e364ba99deb745a7705893e8e9f99167b4873efc07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_lederberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:27:11 compute-0 systemd[1]: Started libpod-conmon-ad9888e6340a11d3d7a851e364ba99deb745a7705893e8e9f99167b4873efc07.scope.
Oct 11 04:27:11 compute-0 podman[93255]: 2025-10-11 04:27:11.100657588 +0000 UTC m=+0.030855020 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:27:11 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:27:11 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Oct 11 04:27:11 compute-0 ceph-mon[74243]: pgmap v48: 5 pgs: 1 unknown, 2 creating+peering, 2 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:27:11 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/2664411122' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 11 04:27:11 compute-0 ceph-mon[74243]: osdmap e21: 3 total, 3 up, 3 in
Oct 11 04:27:11 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e22 e22: 3 total, 3 up, 3 in
Oct 11 04:27:11 compute-0 podman[93255]: 2025-10-11 04:27:11.224664009 +0000 UTC m=+0.154861451 container init ad9888e6340a11d3d7a851e364ba99deb745a7705893e8e9f99167b4873efc07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_lederberg, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:27:11 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e22: 3 total, 3 up, 3 in
Oct 11 04:27:11 compute-0 podman[93255]: 2025-10-11 04:27:11.233870719 +0000 UTC m=+0.164068111 container start ad9888e6340a11d3d7a851e364ba99deb745a7705893e8e9f99167b4873efc07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_lederberg, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:27:11 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 22 pg[6.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [0] r=0 lpr=21 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:27:11 compute-0 podman[93255]: 2025-10-11 04:27:11.237613062 +0000 UTC m=+0.167810534 container attach ad9888e6340a11d3d7a851e364ba99deb745a7705893e8e9f99167b4873efc07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_lederberg, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 11 04:27:11 compute-0 unruffled_lederberg[93288]: 167 167
Oct 11 04:27:11 compute-0 systemd[1]: libpod-ad9888e6340a11d3d7a851e364ba99deb745a7705893e8e9f99167b4873efc07.scope: Deactivated successfully.
Oct 11 04:27:11 compute-0 conmon[93288]: conmon ad9888e6340a11d3d7a8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ad9888e6340a11d3d7a851e364ba99deb745a7705893e8e9f99167b4873efc07.scope/container/memory.events
Oct 11 04:27:11 compute-0 podman[93255]: 2025-10-11 04:27:11.242040002 +0000 UTC m=+0.172237424 container died ad9888e6340a11d3d7a851e364ba99deb745a7705893e8e9f99167b4873efc07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_lederberg, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:27:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-c3924d85630a9132d2ff0892d85141f574519b5edfe35facca804bc529dea14b-merged.mount: Deactivated successfully.
Oct 11 04:27:11 compute-0 podman[93255]: 2025-10-11 04:27:11.292217923 +0000 UTC m=+0.222415345 container remove ad9888e6340a11d3d7a851e364ba99deb745a7705893e8e9f99167b4873efc07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_lederberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 11 04:27:11 compute-0 systemd[1]: libpod-conmon-ad9888e6340a11d3d7a851e364ba99deb745a7705893e8e9f99167b4873efc07.scope: Deactivated successfully.
Oct 11 04:27:11 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Oct 11 04:27:11 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4269110561' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 11 04:27:11 compute-0 podman[93317]: 2025-10-11 04:27:11.531758003 +0000 UTC m=+0.073363029 container create dd41d2011f4b2eed9319c1c370b278153df8a99da1b3660a519a49801c2b584d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_moore, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 11 04:27:11 compute-0 systemd[1]: Started libpod-conmon-dd41d2011f4b2eed9319c1c370b278153df8a99da1b3660a519a49801c2b584d.scope.
Oct 11 04:27:11 compute-0 podman[93317]: 2025-10-11 04:27:11.501388566 +0000 UTC m=+0.042993612 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:27:11 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:27:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1e337712f9a66574ea99039f82688a9594541268b56440ac98e2b7a6fb42f1a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1e337712f9a66574ea99039f82688a9594541268b56440ac98e2b7a6fb42f1a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1e337712f9a66574ea99039f82688a9594541268b56440ac98e2b7a6fb42f1a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1e337712f9a66574ea99039f82688a9594541268b56440ac98e2b7a6fb42f1a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:11 compute-0 podman[93317]: 2025-10-11 04:27:11.637694774 +0000 UTC m=+0.179299850 container init dd41d2011f4b2eed9319c1c370b278153df8a99da1b3660a519a49801c2b584d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_moore, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 11 04:27:11 compute-0 podman[93317]: 2025-10-11 04:27:11.659695662 +0000 UTC m=+0.201300678 container start dd41d2011f4b2eed9319c1c370b278153df8a99da1b3660a519a49801c2b584d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_moore, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:27:11 compute-0 podman[93317]: 2025-10-11 04:27:11.66362352 +0000 UTC m=+0.205228546 container attach dd41d2011f4b2eed9319c1c370b278153df8a99da1b3660a519a49801c2b584d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_moore, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:27:12 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v51: 6 pgs: 1 unknown, 1 creating+peering, 4 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:27:12 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Oct 11 04:27:12 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4269110561' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 11 04:27:12 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e23 e23: 3 total, 3 up, 3 in
Oct 11 04:27:12 compute-0 crazy_cerf[93214]: pool 'cephfs.cephfs.data' created
Oct 11 04:27:12 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e23: 3 total, 3 up, 3 in
Oct 11 04:27:12 compute-0 ceph-mon[74243]: osdmap e22: 3 total, 3 up, 3 in
Oct 11 04:27:12 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/4269110561' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 11 04:27:12 compute-0 systemd[1]: libpod-55775d67bcea61ba95fa76c4ceb0424a8bb4a7d86c4addf354010a18aab88ca3.scope: Deactivated successfully.
Oct 11 04:27:12 compute-0 podman[93172]: 2025-10-11 04:27:12.249431401 +0000 UTC m=+1.623232800 container died 55775d67bcea61ba95fa76c4ceb0424a8bb4a7d86c4addf354010a18aab88ca3 (image=quay.io/ceph/ceph:v18, name=crazy_cerf, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:27:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-b90f1479475a19039c1c95809ff412c225b13226ca69cfb3929715901e844010-merged.mount: Deactivated successfully.
Oct 11 04:27:12 compute-0 podman[93172]: 2025-10-11 04:27:12.294261879 +0000 UTC m=+1.668063278 container remove 55775d67bcea61ba95fa76c4ceb0424a8bb4a7d86c4addf354010a18aab88ca3 (image=quay.io/ceph/ceph:v18, name=crazy_cerf, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 11 04:27:12 compute-0 systemd[1]: libpod-conmon-55775d67bcea61ba95fa76c4ceb0424a8bb4a7d86c4addf354010a18aab88ca3.scope: Deactivated successfully.
Oct 11 04:27:12 compute-0 sudo[93120]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:12 compute-0 vigorous_moore[93333]: {
Oct 11 04:27:12 compute-0 vigorous_moore[93333]:     "0": [
Oct 11 04:27:12 compute-0 vigorous_moore[93333]:         {
Oct 11 04:27:12 compute-0 vigorous_moore[93333]:             "devices": [
Oct 11 04:27:12 compute-0 vigorous_moore[93333]:                 "/dev/loop3"
Oct 11 04:27:12 compute-0 vigorous_moore[93333]:             ],
Oct 11 04:27:12 compute-0 vigorous_moore[93333]:             "lv_name": "ceph_lv0",
Oct 11 04:27:12 compute-0 vigorous_moore[93333]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:27:12 compute-0 vigorous_moore[93333]:             "lv_size": "21470642176",
Oct 11 04:27:12 compute-0 vigorous_moore[93333]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29ed28f5-c2da-4c6f-bb64-dc7391248f4a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:27:12 compute-0 vigorous_moore[93333]:             "lv_uuid": "SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf",
Oct 11 04:27:12 compute-0 vigorous_moore[93333]:             "name": "ceph_lv0",
Oct 11 04:27:12 compute-0 vigorous_moore[93333]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:27:12 compute-0 vigorous_moore[93333]:             "tags": {
Oct 11 04:27:12 compute-0 vigorous_moore[93333]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:27:12 compute-0 vigorous_moore[93333]:                 "ceph.block_uuid": "SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf",
Oct 11 04:27:12 compute-0 vigorous_moore[93333]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:27:12 compute-0 vigorous_moore[93333]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:27:12 compute-0 vigorous_moore[93333]:                 "ceph.cluster_name": "ceph",
Oct 11 04:27:12 compute-0 vigorous_moore[93333]:                 "ceph.crush_device_class": "",
Oct 11 04:27:12 compute-0 vigorous_moore[93333]:                 "ceph.encrypted": "0",
Oct 11 04:27:12 compute-0 vigorous_moore[93333]:                 "ceph.osd_fsid": "29ed28f5-c2da-4c6f-bb64-dc7391248f4a",
Oct 11 04:27:12 compute-0 vigorous_moore[93333]:                 "ceph.osd_id": "0",
Oct 11 04:27:12 compute-0 vigorous_moore[93333]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:27:12 compute-0 vigorous_moore[93333]:                 "ceph.type": "block",
Oct 11 04:27:12 compute-0 vigorous_moore[93333]:                 "ceph.vdo": "0"
Oct 11 04:27:12 compute-0 vigorous_moore[93333]:             },
Oct 11 04:27:12 compute-0 vigorous_moore[93333]:             "type": "block",
Oct 11 04:27:12 compute-0 vigorous_moore[93333]:             "vg_name": "ceph_vg0"
Oct 11 04:27:12 compute-0 vigorous_moore[93333]:         }
Oct 11 04:27:12 compute-0 vigorous_moore[93333]:     ],
Oct 11 04:27:12 compute-0 vigorous_moore[93333]:     "1": [
Oct 11 04:27:12 compute-0 vigorous_moore[93333]:         {
Oct 11 04:27:12 compute-0 vigorous_moore[93333]:             "devices": [
Oct 11 04:27:12 compute-0 vigorous_moore[93333]:                 "/dev/loop4"
Oct 11 04:27:12 compute-0 vigorous_moore[93333]:             ],
Oct 11 04:27:12 compute-0 vigorous_moore[93333]:             "lv_name": "ceph_lv1",
Oct 11 04:27:12 compute-0 vigorous_moore[93333]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:27:12 compute-0 vigorous_moore[93333]:             "lv_size": "21470642176",
Oct 11 04:27:12 compute-0 vigorous_moore[93333]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=235849fc-4683-43e5-9b6a-a0d6f8d1cee8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:27:12 compute-0 vigorous_moore[93333]:             "lv_uuid": "N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol",
Oct 11 04:27:12 compute-0 vigorous_moore[93333]:             "name": "ceph_lv1",
Oct 11 04:27:12 compute-0 vigorous_moore[93333]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:27:12 compute-0 vigorous_moore[93333]:             "tags": {
Oct 11 04:27:12 compute-0 vigorous_moore[93333]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:27:12 compute-0 vigorous_moore[93333]:                 "ceph.block_uuid": "N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol",
Oct 11 04:27:12 compute-0 vigorous_moore[93333]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:27:12 compute-0 vigorous_moore[93333]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:27:12 compute-0 vigorous_moore[93333]:                 "ceph.cluster_name": "ceph",
Oct 11 04:27:12 compute-0 vigorous_moore[93333]:                 "ceph.crush_device_class": "",
Oct 11 04:27:12 compute-0 vigorous_moore[93333]:                 "ceph.encrypted": "0",
Oct 11 04:27:12 compute-0 vigorous_moore[93333]:                 "ceph.osd_fsid": "235849fc-4683-43e5-9b6a-a0d6f8d1cee8",
Oct 11 04:27:12 compute-0 vigorous_moore[93333]:                 "ceph.osd_id": "1",
Oct 11 04:27:12 compute-0 vigorous_moore[93333]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:27:12 compute-0 vigorous_moore[93333]:                 "ceph.type": "block",
Oct 11 04:27:12 compute-0 vigorous_moore[93333]:                 "ceph.vdo": "0"
Oct 11 04:27:12 compute-0 vigorous_moore[93333]:             },
Oct 11 04:27:12 compute-0 vigorous_moore[93333]:             "type": "block",
Oct 11 04:27:12 compute-0 vigorous_moore[93333]:             "vg_name": "ceph_vg1"
Oct 11 04:27:12 compute-0 vigorous_moore[93333]:         }
Oct 11 04:27:12 compute-0 vigorous_moore[93333]:     ],
Oct 11 04:27:12 compute-0 vigorous_moore[93333]:     "2": [
Oct 11 04:27:12 compute-0 vigorous_moore[93333]:         {
Oct 11 04:27:12 compute-0 vigorous_moore[93333]:             "devices": [
Oct 11 04:27:12 compute-0 vigorous_moore[93333]:                 "/dev/loop5"
Oct 11 04:27:12 compute-0 vigorous_moore[93333]:             ],
Oct 11 04:27:12 compute-0 vigorous_moore[93333]:             "lv_name": "ceph_lv2",
Oct 11 04:27:12 compute-0 vigorous_moore[93333]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:27:12 compute-0 vigorous_moore[93333]:             "lv_size": "21470642176",
Oct 11 04:27:12 compute-0 vigorous_moore[93333]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=30486846-2cc7-4787-8e36-0fb42ef328c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:27:12 compute-0 vigorous_moore[93333]:             "lv_uuid": "HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a",
Oct 11 04:27:12 compute-0 vigorous_moore[93333]:             "name": "ceph_lv2",
Oct 11 04:27:12 compute-0 vigorous_moore[93333]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:27:12 compute-0 vigorous_moore[93333]:             "tags": {
Oct 11 04:27:12 compute-0 vigorous_moore[93333]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:27:12 compute-0 vigorous_moore[93333]:                 "ceph.block_uuid": "HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a",
Oct 11 04:27:12 compute-0 vigorous_moore[93333]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:27:12 compute-0 vigorous_moore[93333]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:27:12 compute-0 vigorous_moore[93333]:                 "ceph.cluster_name": "ceph",
Oct 11 04:27:12 compute-0 vigorous_moore[93333]:                 "ceph.crush_device_class": "",
Oct 11 04:27:12 compute-0 vigorous_moore[93333]:                 "ceph.encrypted": "0",
Oct 11 04:27:12 compute-0 vigorous_moore[93333]:                 "ceph.osd_fsid": "30486846-2cc7-4787-8e36-0fb42ef328c5",
Oct 11 04:27:12 compute-0 vigorous_moore[93333]:                 "ceph.osd_id": "2",
Oct 11 04:27:12 compute-0 vigorous_moore[93333]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:27:12 compute-0 vigorous_moore[93333]:                 "ceph.type": "block",
Oct 11 04:27:12 compute-0 vigorous_moore[93333]:                 "ceph.vdo": "0"
Oct 11 04:27:12 compute-0 vigorous_moore[93333]:             },
Oct 11 04:27:12 compute-0 vigorous_moore[93333]:             "type": "block",
Oct 11 04:27:12 compute-0 vigorous_moore[93333]:             "vg_name": "ceph_vg2"
Oct 11 04:27:12 compute-0 vigorous_moore[93333]:         }
Oct 11 04:27:12 compute-0 vigorous_moore[93333]:     ]
Oct 11 04:27:12 compute-0 vigorous_moore[93333]: }
Oct 11 04:27:12 compute-0 systemd[1]: libpod-dd41d2011f4b2eed9319c1c370b278153df8a99da1b3660a519a49801c2b584d.scope: Deactivated successfully.
Oct 11 04:27:12 compute-0 podman[93317]: 2025-10-11 04:27:12.454273817 +0000 UTC m=+0.995878833 container died dd41d2011f4b2eed9319c1c370b278153df8a99da1b3660a519a49801c2b584d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_moore, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:27:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-e1e337712f9a66574ea99039f82688a9594541268b56440ac98e2b7a6fb42f1a-merged.mount: Deactivated successfully.
Oct 11 04:27:12 compute-0 sudo[93391]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqojkoatdtnanotqvtzyxhvwpqolyrpi ; /usr/bin/python3'
Oct 11 04:27:12 compute-0 sudo[93391]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:27:12 compute-0 podman[93317]: 2025-10-11 04:27:12.538935217 +0000 UTC m=+1.080540243 container remove dd41d2011f4b2eed9319c1c370b278153df8a99da1b3660a519a49801c2b584d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_moore, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:27:12 compute-0 systemd[1]: libpod-conmon-dd41d2011f4b2eed9319c1c370b278153df8a99da1b3660a519a49801c2b584d.scope: Deactivated successfully.
Oct 11 04:27:12 compute-0 sudo[93180]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:12 compute-0 sudo[93394]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:27:12 compute-0 sudo[93394]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:12 compute-0 python3[93393]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:27:12 compute-0 sudo[93394]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:12 compute-0 podman[93419]: 2025-10-11 04:27:12.754416798 +0000 UTC m=+0.057411142 container create 6abb7dadc5a93c752db90a65de748fe7287ed9f2c50f23b743b2adced0a8a1bf (image=quay.io/ceph/ceph:v18, name=cranky_raman, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 11 04:27:12 compute-0 sudo[93420]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:27:12 compute-0 sudo[93420]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:12 compute-0 sudo[93420]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:12 compute-0 systemd[1]: Started libpod-conmon-6abb7dadc5a93c752db90a65de748fe7287ed9f2c50f23b743b2adced0a8a1bf.scope.
Oct 11 04:27:12 compute-0 podman[93419]: 2025-10-11 04:27:12.72922415 +0000 UTC m=+0.032218514 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:27:12 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:27:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2c1ee6ebb3076d641ec9fd6f46d4117205f987634aa446f0f81d4360edb5038/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2c1ee6ebb3076d641ec9fd6f46d4117205f987634aa446f0f81d4360edb5038/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:12 compute-0 sudo[93456]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:27:12 compute-0 sudo[93456]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:12 compute-0 sudo[93456]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:12 compute-0 podman[93419]: 2025-10-11 04:27:12.882051318 +0000 UTC m=+0.185045662 container init 6abb7dadc5a93c752db90a65de748fe7287ed9f2c50f23b743b2adced0a8a1bf (image=quay.io/ceph/ceph:v18, name=cranky_raman, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:27:12 compute-0 podman[93419]: 2025-10-11 04:27:12.896458728 +0000 UTC m=+0.199453052 container start 6abb7dadc5a93c752db90a65de748fe7287ed9f2c50f23b743b2adced0a8a1bf (image=quay.io/ceph/ceph:v18, name=cranky_raman, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 11 04:27:12 compute-0 podman[93419]: 2025-10-11 04:27:12.900034147 +0000 UTC m=+0.203028491 container attach 6abb7dadc5a93c752db90a65de748fe7287ed9f2c50f23b743b2adced0a8a1bf (image=quay.io/ceph/ceph:v18, name=cranky_raman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 11 04:27:12 compute-0 sudo[93486]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -- raw list --format json
Oct 11 04:27:12 compute-0 sudo[93486]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 23 pg[7.0( empty local-lis/les=0/0 n=0 ec=23/23 lis/c=0/0 les/c/f=0/0/0 sis=23) [1] r=0 lpr=23 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:27:13 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Oct 11 04:27:13 compute-0 ceph-mon[74243]: pgmap v51: 6 pgs: 1 unknown, 1 creating+peering, 4 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:27:13 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/4269110561' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 11 04:27:13 compute-0 ceph-mon[74243]: osdmap e23: 3 total, 3 up, 3 in
Oct 11 04:27:13 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e24 e24: 3 total, 3 up, 3 in
Oct 11 04:27:13 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e24: 3 total, 3 up, 3 in
Oct 11 04:27:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 24 pg[7.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=0/0 les/c/f=0/0/0 sis=23) [1] r=0 lpr=23 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:27:13 compute-0 podman[93570]: 2025-10-11 04:27:13.300784545 +0000 UTC m=+0.044145451 container create 64260d98ad78ffe81dedf9b0de35b9a0f5d47c84aa304038a7cbcb593033d495 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 11 04:27:13 compute-0 systemd[1]: Started libpod-conmon-64260d98ad78ffe81dedf9b0de35b9a0f5d47c84aa304038a7cbcb593033d495.scope.
Oct 11 04:27:13 compute-0 podman[93570]: 2025-10-11 04:27:13.280946761 +0000 UTC m=+0.024307707 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:27:13 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:27:13 compute-0 podman[93570]: 2025-10-11 04:27:13.396277946 +0000 UTC m=+0.139638942 container init 64260d98ad78ffe81dedf9b0de35b9a0f5d47c84aa304038a7cbcb593033d495 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_benz, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:27:13 compute-0 podman[93570]: 2025-10-11 04:27:13.407664239 +0000 UTC m=+0.151025185 container start 64260d98ad78ffe81dedf9b0de35b9a0f5d47c84aa304038a7cbcb593033d495 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_benz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:27:13 compute-0 podman[93570]: 2025-10-11 04:27:13.413185757 +0000 UTC m=+0.156546693 container attach 64260d98ad78ffe81dedf9b0de35b9a0f5d47c84aa304038a7cbcb593033d495 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_benz, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 11 04:27:13 compute-0 nostalgic_benz[93587]: 167 167
Oct 11 04:27:13 compute-0 systemd[1]: libpod-64260d98ad78ffe81dedf9b0de35b9a0f5d47c84aa304038a7cbcb593033d495.scope: Deactivated successfully.
Oct 11 04:27:13 compute-0 podman[93570]: 2025-10-11 04:27:13.415588077 +0000 UTC m=+0.158949023 container died 64260d98ad78ffe81dedf9b0de35b9a0f5d47c84aa304038a7cbcb593033d495 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_benz, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:27:13 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0) v1
Oct 11 04:27:13 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1721471883' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Oct 11 04:27:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-2810014153d954d33e3a19989b3adf43dff027c2d9421db3adc2ee8e76fcfd5b-merged.mount: Deactivated successfully.
Oct 11 04:27:13 compute-0 podman[93570]: 2025-10-11 04:27:13.470983568 +0000 UTC m=+0.214344514 container remove 64260d98ad78ffe81dedf9b0de35b9a0f5d47c84aa304038a7cbcb593033d495 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_benz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 11 04:27:13 compute-0 systemd[1]: libpod-conmon-64260d98ad78ffe81dedf9b0de35b9a0f5d47c84aa304038a7cbcb593033d495.scope: Deactivated successfully.
Oct 11 04:27:13 compute-0 podman[93611]: 2025-10-11 04:27:13.714400965 +0000 UTC m=+0.064249643 container create 62bd299103694492a2b7556ecfb47453fc3bf465b99995bdf6c69414d79c4550 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_feynman, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:27:13 compute-0 systemd[1]: Started libpod-conmon-62bd299103694492a2b7556ecfb47453fc3bf465b99995bdf6c69414d79c4550.scope.
Oct 11 04:27:13 compute-0 podman[93611]: 2025-10-11 04:27:13.692962631 +0000 UTC m=+0.042811319 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:27:13 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:27:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f8b7f5e54cb6fc38fb519ed79cfc9377377036edfd651ff902a97cfbfcdeb48/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f8b7f5e54cb6fc38fb519ed79cfc9377377036edfd651ff902a97cfbfcdeb48/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f8b7f5e54cb6fc38fb519ed79cfc9377377036edfd651ff902a97cfbfcdeb48/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f8b7f5e54cb6fc38fb519ed79cfc9377377036edfd651ff902a97cfbfcdeb48/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:13 compute-0 podman[93611]: 2025-10-11 04:27:13.83978967 +0000 UTC m=+0.189638408 container init 62bd299103694492a2b7556ecfb47453fc3bf465b99995bdf6c69414d79c4550 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 04:27:13 compute-0 podman[93611]: 2025-10-11 04:27:13.857499132 +0000 UTC m=+0.207347790 container start 62bd299103694492a2b7556ecfb47453fc3bf465b99995bdf6c69414d79c4550 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_feynman, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:27:13 compute-0 podman[93611]: 2025-10-11 04:27:13.860852185 +0000 UTC m=+0.210700943 container attach 62bd299103694492a2b7556ecfb47453fc3bf465b99995bdf6c69414d79c4550 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:27:14 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v54: 7 pgs: 1 unknown, 6 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:27:14 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Oct 11 04:27:14 compute-0 ceph-mon[74243]: log_channel(cluster) log [WRN] : Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 11 04:27:14 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1721471883' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Oct 11 04:27:14 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e25 e25: 3 total, 3 up, 3 in
Oct 11 04:27:14 compute-0 cranky_raman[93471]: enabled application 'rbd' on pool 'vms'
Oct 11 04:27:14 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e25: 3 total, 3 up, 3 in
Oct 11 04:27:14 compute-0 ceph-mon[74243]: osdmap e24: 3 total, 3 up, 3 in
Oct 11 04:27:14 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/1721471883' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Oct 11 04:27:14 compute-0 podman[93419]: 2025-10-11 04:27:14.289709204 +0000 UTC m=+1.592703518 container died 6abb7dadc5a93c752db90a65de748fe7287ed9f2c50f23b743b2adced0a8a1bf (image=quay.io/ceph/ceph:v18, name=cranky_raman, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 11 04:27:14 compute-0 systemd[1]: libpod-6abb7dadc5a93c752db90a65de748fe7287ed9f2c50f23b743b2adced0a8a1bf.scope: Deactivated successfully.
Oct 11 04:27:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-f2c1ee6ebb3076d641ec9fd6f46d4117205f987634aa446f0f81d4360edb5038-merged.mount: Deactivated successfully.
Oct 11 04:27:14 compute-0 podman[93419]: 2025-10-11 04:27:14.339503606 +0000 UTC m=+1.642497930 container remove 6abb7dadc5a93c752db90a65de748fe7287ed9f2c50f23b743b2adced0a8a1bf (image=quay.io/ceph/ceph:v18, name=cranky_raman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Oct 11 04:27:14 compute-0 systemd[1]: libpod-conmon-6abb7dadc5a93c752db90a65de748fe7287ed9f2c50f23b743b2adced0a8a1bf.scope: Deactivated successfully.
Oct 11 04:27:14 compute-0 sudo[93391]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:14 compute-0 sudo[93671]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnhvlrzkbfdpxcxoqlsbglmufsdjllll ; /usr/bin/python3'
Oct 11 04:27:14 compute-0 sudo[93671]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:27:14 compute-0 python3[93674]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:27:14 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e25 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:27:14 compute-0 podman[93687]: 2025-10-11 04:27:14.847064586 +0000 UTC m=+0.071608305 container create c48ce2bc6a84b83b0f5df7158e83f30266bc97696653f1ec315b86cdb2855bca (image=quay.io/ceph/ceph:v18, name=naughty_proskuriakova, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:27:14 compute-0 systemd[1]: Started libpod-conmon-c48ce2bc6a84b83b0f5df7158e83f30266bc97696653f1ec315b86cdb2855bca.scope.
Oct 11 04:27:14 compute-0 podman[93687]: 2025-10-11 04:27:14.817055468 +0000 UTC m=+0.041599237 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:27:14 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:27:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a87b550bc8a14b93379ed8b354550653a4a3e8b2a2d3a632392a026dc6d97dcb/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a87b550bc8a14b93379ed8b354550653a4a3e8b2a2d3a632392a026dc6d97dcb/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:14 compute-0 podman[93687]: 2025-10-11 04:27:14.949264464 +0000 UTC m=+0.173808173 container init c48ce2bc6a84b83b0f5df7158e83f30266bc97696653f1ec315b86cdb2855bca (image=quay.io/ceph/ceph:v18, name=naughty_proskuriakova, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 11 04:27:14 compute-0 podman[93687]: 2025-10-11 04:27:14.957465158 +0000 UTC m=+0.182008847 container start c48ce2bc6a84b83b0f5df7158e83f30266bc97696653f1ec315b86cdb2855bca (image=quay.io/ceph/ceph:v18, name=naughty_proskuriakova, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 11 04:27:14 compute-0 podman[93687]: 2025-10-11 04:27:14.961257663 +0000 UTC m=+0.185801382 container attach c48ce2bc6a84b83b0f5df7158e83f30266bc97696653f1ec315b86cdb2855bca (image=quay.io/ceph/ceph:v18, name=naughty_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:27:14 compute-0 youthful_feynman[93627]: {
Oct 11 04:27:14 compute-0 youthful_feynman[93627]:     "235849fc-4683-43e5-9b6a-a0d6f8d1cee8": {
Oct 11 04:27:14 compute-0 youthful_feynman[93627]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:27:14 compute-0 youthful_feynman[93627]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:27:14 compute-0 youthful_feynman[93627]:         "osd_id": 1,
Oct 11 04:27:14 compute-0 youthful_feynman[93627]:         "osd_uuid": "235849fc-4683-43e5-9b6a-a0d6f8d1cee8",
Oct 11 04:27:14 compute-0 youthful_feynman[93627]:         "type": "bluestore"
Oct 11 04:27:14 compute-0 youthful_feynman[93627]:     },
Oct 11 04:27:14 compute-0 youthful_feynman[93627]:     "29ed28f5-c2da-4c6f-bb64-dc7391248f4a": {
Oct 11 04:27:14 compute-0 youthful_feynman[93627]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:27:14 compute-0 youthful_feynman[93627]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:27:14 compute-0 youthful_feynman[93627]:         "osd_id": 0,
Oct 11 04:27:14 compute-0 youthful_feynman[93627]:         "osd_uuid": "29ed28f5-c2da-4c6f-bb64-dc7391248f4a",
Oct 11 04:27:14 compute-0 youthful_feynman[93627]:         "type": "bluestore"
Oct 11 04:27:14 compute-0 youthful_feynman[93627]:     },
Oct 11 04:27:14 compute-0 youthful_feynman[93627]:     "30486846-2cc7-4787-8e36-0fb42ef328c5": {
Oct 11 04:27:14 compute-0 youthful_feynman[93627]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:27:14 compute-0 youthful_feynman[93627]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:27:14 compute-0 youthful_feynman[93627]:         "osd_id": 2,
Oct 11 04:27:14 compute-0 youthful_feynman[93627]:         "osd_uuid": "30486846-2cc7-4787-8e36-0fb42ef328c5",
Oct 11 04:27:14 compute-0 youthful_feynman[93627]:         "type": "bluestore"
Oct 11 04:27:14 compute-0 youthful_feynman[93627]:     }
Oct 11 04:27:14 compute-0 youthful_feynman[93627]: }
Oct 11 04:27:15 compute-0 systemd[1]: libpod-62bd299103694492a2b7556ecfb47453fc3bf465b99995bdf6c69414d79c4550.scope: Deactivated successfully.
Oct 11 04:27:15 compute-0 systemd[1]: libpod-62bd299103694492a2b7556ecfb47453fc3bf465b99995bdf6c69414d79c4550.scope: Consumed 1.166s CPU time.
Oct 11 04:27:15 compute-0 podman[93611]: 2025-10-11 04:27:15.022983341 +0000 UTC m=+1.372832019 container died 62bd299103694492a2b7556ecfb47453fc3bf465b99995bdf6c69414d79c4550 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 11 04:27:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-2f8b7f5e54cb6fc38fb519ed79cfc9377377036edfd651ff902a97cfbfcdeb48-merged.mount: Deactivated successfully.
Oct 11 04:27:15 compute-0 podman[93611]: 2025-10-11 04:27:15.084979407 +0000 UTC m=+1.434828055 container remove 62bd299103694492a2b7556ecfb47453fc3bf465b99995bdf6c69414d79c4550 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:27:15 compute-0 systemd[1]: libpod-conmon-62bd299103694492a2b7556ecfb47453fc3bf465b99995bdf6c69414d79c4550.scope: Deactivated successfully.
Oct 11 04:27:15 compute-0 sudo[93486]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:15 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:27:15 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:15 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:27:15 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:15 compute-0 sudo[93734]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:27:15 compute-0 sudo[93734]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:15 compute-0 sudo[93734]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:15 compute-0 ceph-mon[74243]: pgmap v54: 7 pgs: 1 unknown, 6 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:27:15 compute-0 ceph-mon[74243]: Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 11 04:27:15 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/1721471883' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Oct 11 04:27:15 compute-0 ceph-mon[74243]: osdmap e25: 3 total, 3 up, 3 in
Oct 11 04:27:15 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:15 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:15 compute-0 sudo[93759]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 04:27:15 compute-0 sudo[93759]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:15 compute-0 sudo[93759]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:15 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_user}] v 0) v1
Oct 11 04:27:15 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:15 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_password}] v 0) v1
Oct 11 04:27:15 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:15 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_user}] v 0) v1
Oct 11 04:27:15 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:15 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_password}] v 0) v1
Oct 11 04:27:15 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:15 compute-0 ceph-mgr[74542]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Oct 11 04:27:15 compute-0 ceph-mgr[74542]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Oct 11 04:27:15 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Oct 11 04:27:15 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct 11 04:27:15 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Oct 11 04:27:15 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct 11 04:27:15 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:27:15 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:27:15 compute-0 ceph-mgr[74542]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Oct 11 04:27:15 compute-0 ceph-mgr[74542]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Oct 11 04:27:15 compute-0 sudo[93803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:27:15 compute-0 sudo[93803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:15 compute-0 sudo[93803]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:15 compute-0 sudo[93828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:27:15 compute-0 sudo[93828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:15 compute-0 sudo[93828]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:15 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0) v1
Oct 11 04:27:15 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3074402374' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Oct 11 04:27:15 compute-0 sudo[93853]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:27:15 compute-0 sudo[93853]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:15 compute-0 sudo[93853]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:15 compute-0 sudo[93879]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a
Oct 11 04:27:15 compute-0 sudo[93879]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:15 compute-0 podman[93920]: 2025-10-11 04:27:15.99624177 +0000 UTC m=+0.049533746 container create fd4ca069d58884d53522fd504a020c8e1d484f6abce8075727f59923fc2567e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_hodgkin, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 11 04:27:16 compute-0 systemd[1]: Started libpod-conmon-fd4ca069d58884d53522fd504a020c8e1d484f6abce8075727f59923fc2567e5.scope.
Oct 11 04:27:16 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v56: 7 pgs: 1 unknown, 6 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:27:16 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:27:16 compute-0 podman[93920]: 2025-10-11 04:27:15.97379663 +0000 UTC m=+0.027088636 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:27:16 compute-0 podman[93920]: 2025-10-11 04:27:16.081237708 +0000 UTC m=+0.134529754 container init fd4ca069d58884d53522fd504a020c8e1d484f6abce8075727f59923fc2567e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_hodgkin, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 11 04:27:16 compute-0 podman[93920]: 2025-10-11 04:27:16.091206857 +0000 UTC m=+0.144498833 container start fd4ca069d58884d53522fd504a020c8e1d484f6abce8075727f59923fc2567e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_hodgkin, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 11 04:27:16 compute-0 podman[93920]: 2025-10-11 04:27:16.094756985 +0000 UTC m=+0.148049041 container attach fd4ca069d58884d53522fd504a020c8e1d484f6abce8075727f59923fc2567e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_hodgkin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 11 04:27:16 compute-0 lucid_hodgkin[93936]: 167 167
Oct 11 04:27:16 compute-0 systemd[1]: libpod-fd4ca069d58884d53522fd504a020c8e1d484f6abce8075727f59923fc2567e5.scope: Deactivated successfully.
Oct 11 04:27:16 compute-0 podman[93920]: 2025-10-11 04:27:16.096204831 +0000 UTC m=+0.149496817 container died fd4ca069d58884d53522fd504a020c8e1d484f6abce8075727f59923fc2567e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_hodgkin, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:27:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-ae1fa6f730ac46d65ac1d8f10f82f42f94b1cc95107ab1ecf63f410f65f7cc47-merged.mount: Deactivated successfully.
Oct 11 04:27:16 compute-0 podman[93920]: 2025-10-11 04:27:16.135353757 +0000 UTC m=+0.188645713 container remove fd4ca069d58884d53522fd504a020c8e1d484f6abce8075727f59923fc2567e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 11 04:27:16 compute-0 systemd[1]: libpod-conmon-fd4ca069d58884d53522fd504a020c8e1d484f6abce8075727f59923fc2567e5.scope: Deactivated successfully.
Oct 11 04:27:16 compute-0 sudo[93879]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:16 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:27:16 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:16 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:27:16 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:16 compute-0 ceph-mgr[74542]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.phooxi (unknown last config time)...
Oct 11 04:27:16 compute-0 ceph-mgr[74542]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.phooxi (unknown last config time)...
Oct 11 04:27:16 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.phooxi", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Oct 11 04:27:16 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.phooxi", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 11 04:27:16 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct 11 04:27:16 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 11 04:27:16 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:27:16 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:27:16 compute-0 ceph-mgr[74542]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.phooxi on compute-0
Oct 11 04:27:16 compute-0 ceph-mgr[74542]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.phooxi on compute-0
Oct 11 04:27:16 compute-0 sudo[93954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:27:16 compute-0 sudo[93954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:16 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:16 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:16 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:16 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:16 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct 11 04:27:16 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct 11 04:27:16 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:27:16 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/3074402374' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Oct 11 04:27:16 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:16 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:16 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.phooxi", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 11 04:27:16 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 11 04:27:16 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:27:16 compute-0 sudo[93954]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:16 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Oct 11 04:27:16 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3074402374' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Oct 11 04:27:16 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e26 e26: 3 total, 3 up, 3 in
Oct 11 04:27:16 compute-0 naughty_proskuriakova[93710]: enabled application 'rbd' on pool 'volumes'
Oct 11 04:27:16 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e26: 3 total, 3 up, 3 in
Oct 11 04:27:16 compute-0 systemd[1]: libpod-c48ce2bc6a84b83b0f5df7158e83f30266bc97696653f1ec315b86cdb2855bca.scope: Deactivated successfully.
Oct 11 04:27:16 compute-0 podman[93687]: 2025-10-11 04:27:16.418862633 +0000 UTC m=+1.643406332 container died c48ce2bc6a84b83b0f5df7158e83f30266bc97696653f1ec315b86cdb2855bca (image=quay.io/ceph/ceph:v18, name=naughty_proskuriakova, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:27:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-a87b550bc8a14b93379ed8b354550653a4a3e8b2a2d3a632392a026dc6d97dcb-merged.mount: Deactivated successfully.
Oct 11 04:27:16 compute-0 sudo[93979]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:27:16 compute-0 sudo[93979]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:16 compute-0 sudo[93979]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:16 compute-0 podman[93687]: 2025-10-11 04:27:16.477522175 +0000 UTC m=+1.702065864 container remove c48ce2bc6a84b83b0f5df7158e83f30266bc97696653f1ec315b86cdb2855bca (image=quay.io/ceph/ceph:v18, name=naughty_proskuriakova, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 11 04:27:16 compute-0 systemd[1]: libpod-conmon-c48ce2bc6a84b83b0f5df7158e83f30266bc97696653f1ec315b86cdb2855bca.scope: Deactivated successfully.
Oct 11 04:27:16 compute-0 sudo[93671]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:16 compute-0 sudo[94017]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:27:16 compute-0 sudo[94017]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:16 compute-0 sudo[94017]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:16 compute-0 sudo[94042]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a
Oct 11 04:27:16 compute-0 sudo[94042]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:16 compute-0 sudo[94090]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nazzdpjtbwspmrbytpofovcklddpvwkf ; /usr/bin/python3'
Oct 11 04:27:16 compute-0 sudo[94090]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:27:16 compute-0 python3[94092]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:27:16 compute-0 podman[94105]: 2025-10-11 04:27:16.905189944 +0000 UTC m=+0.067338629 container create b3dc01affb2b889ddf62156dcf68d979a3b4f17ab47c10d0bab9a4cbac9cc8cb (image=quay.io/ceph/ceph:v18, name=nifty_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 11 04:27:16 compute-0 systemd[1]: Started libpod-conmon-b3dc01affb2b889ddf62156dcf68d979a3b4f17ab47c10d0bab9a4cbac9cc8cb.scope.
Oct 11 04:27:16 compute-0 podman[94121]: 2025-10-11 04:27:16.94313293 +0000 UTC m=+0.055872924 container create 9e5debac7358c71018aa663f1f2c50ebb77336bad71a94cfd9cef6d1423312fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_herschel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:27:16 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:27:16 compute-0 podman[94105]: 2025-10-11 04:27:16.875563526 +0000 UTC m=+0.037712241 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:27:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2783bf76d36af57bd213da10c14cfe64442f671abcca61c0229a6fef83d125ec/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2783bf76d36af57bd213da10c14cfe64442f671abcca61c0229a6fef83d125ec/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:16 compute-0 systemd[1]: Started libpod-conmon-9e5debac7358c71018aa663f1f2c50ebb77336bad71a94cfd9cef6d1423312fe.scope.
Oct 11 04:27:16 compute-0 podman[94105]: 2025-10-11 04:27:16.994985533 +0000 UTC m=+0.157134238 container init b3dc01affb2b889ddf62156dcf68d979a3b4f17ab47c10d0bab9a4cbac9cc8cb (image=quay.io/ceph/ceph:v18, name=nifty_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True)
Oct 11 04:27:17 compute-0 podman[94105]: 2025-10-11 04:27:17.004747736 +0000 UTC m=+0.166896441 container start b3dc01affb2b889ddf62156dcf68d979a3b4f17ab47c10d0bab9a4cbac9cc8cb (image=quay.io/ceph/ceph:v18, name=nifty_ritchie, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:27:17 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:27:17 compute-0 podman[94105]: 2025-10-11 04:27:17.008482369 +0000 UTC m=+0.170631044 container attach b3dc01affb2b889ddf62156dcf68d979a3b4f17ab47c10d0bab9a4cbac9cc8cb (image=quay.io/ceph/ceph:v18, name=nifty_ritchie, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 11 04:27:17 compute-0 podman[94121]: 2025-10-11 04:27:16.923358157 +0000 UTC m=+0.036098191 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:27:17 compute-0 podman[94121]: 2025-10-11 04:27:17.020110879 +0000 UTC m=+0.132850883 container init 9e5debac7358c71018aa663f1f2c50ebb77336bad71a94cfd9cef6d1423312fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_herschel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:27:17 compute-0 podman[94121]: 2025-10-11 04:27:17.02898545 +0000 UTC m=+0.141725454 container start 9e5debac7358c71018aa663f1f2c50ebb77336bad71a94cfd9cef6d1423312fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_herschel, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:27:17 compute-0 podman[94121]: 2025-10-11 04:27:17.032268452 +0000 UTC m=+0.145008456 container attach 9e5debac7358c71018aa663f1f2c50ebb77336bad71a94cfd9cef6d1423312fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_herschel, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 11 04:27:17 compute-0 musing_herschel[94142]: 167 167
Oct 11 04:27:17 compute-0 systemd[1]: libpod-9e5debac7358c71018aa663f1f2c50ebb77336bad71a94cfd9cef6d1423312fe.scope: Deactivated successfully.
Oct 11 04:27:17 compute-0 podman[94121]: 2025-10-11 04:27:17.035622295 +0000 UTC m=+0.148362329 container died 9e5debac7358c71018aa663f1f2c50ebb77336bad71a94cfd9cef6d1423312fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_herschel, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:27:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-d6d44100dc4f8d17b7c07f28b93ecc69b8c1a3aa9b2bc2df2e2aa78b72f1c59e-merged.mount: Deactivated successfully.
Oct 11 04:27:17 compute-0 podman[94121]: 2025-10-11 04:27:17.075283934 +0000 UTC m=+0.188023928 container remove 9e5debac7358c71018aa663f1f2c50ebb77336bad71a94cfd9cef6d1423312fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_herschel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:27:17 compute-0 systemd[1]: libpod-conmon-9e5debac7358c71018aa663f1f2c50ebb77336bad71a94cfd9cef6d1423312fe.scope: Deactivated successfully.
Oct 11 04:27:17 compute-0 sudo[94042]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:17 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:27:17 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:17 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:27:17 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:17 compute-0 sudo[94162]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:27:17 compute-0 sudo[94162]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:17 compute-0 sudo[94162]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:17 compute-0 sudo[94187]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:27:17 compute-0 sudo[94187]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:17 compute-0 sudo[94187]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:17 compute-0 sudo[94212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:27:17 compute-0 sudo[94212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:17 compute-0 sudo[94212]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:17 compute-0 ceph-mon[74243]: Reconfiguring mon.compute-0 (unknown last config time)...
Oct 11 04:27:17 compute-0 ceph-mon[74243]: Reconfiguring daemon mon.compute-0 on compute-0
Oct 11 04:27:17 compute-0 ceph-mon[74243]: pgmap v56: 7 pgs: 1 unknown, 6 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:27:17 compute-0 ceph-mon[74243]: Reconfiguring mgr.compute-0.phooxi (unknown last config time)...
Oct 11 04:27:17 compute-0 ceph-mon[74243]: Reconfiguring daemon mgr.compute-0.phooxi on compute-0
Oct 11 04:27:17 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/3074402374' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Oct 11 04:27:17 compute-0 ceph-mon[74243]: osdmap e26: 3 total, 3 up, 3 in
Oct 11 04:27:17 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:17 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:17 compute-0 sudo[94237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 11 04:27:17 compute-0 sudo[94237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:17 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0) v1
Oct 11 04:27:17 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2968104822' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Oct 11 04:27:17 compute-0 podman[94353]: 2025-10-11 04:27:17.996220978 +0000 UTC m=+0.067967365 container exec 9e6c9a4e99dcfe74f2acde1b3251aae6bd833ab8c3a16ed08813556f7d4dbff5 (image=quay.io/ceph/ceph:v18, name=ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 11 04:27:18 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v58: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:27:18 compute-0 podman[94353]: 2025-10-11 04:27:18.106773694 +0000 UTC m=+0.178520081 container exec_died 9e6c9a4e99dcfe74f2acde1b3251aae6bd833ab8c3a16ed08813556f7d4dbff5 (image=quay.io/ceph/ceph:v18, name=ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:27:18 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Oct 11 04:27:18 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/2968104822' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Oct 11 04:27:18 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2968104822' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Oct 11 04:27:18 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e27 e27: 3 total, 3 up, 3 in
Oct 11 04:27:18 compute-0 nifty_ritchie[94137]: enabled application 'rbd' on pool 'backups'
Oct 11 04:27:18 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e27: 3 total, 3 up, 3 in
Oct 11 04:27:18 compute-0 systemd[1]: libpod-b3dc01affb2b889ddf62156dcf68d979a3b4f17ab47c10d0bab9a4cbac9cc8cb.scope: Deactivated successfully.
Oct 11 04:27:18 compute-0 podman[94105]: 2025-10-11 04:27:18.42117148 +0000 UTC m=+1.583320185 container died b3dc01affb2b889ddf62156dcf68d979a3b4f17ab47c10d0bab9a4cbac9cc8cb (image=quay.io/ceph/ceph:v18, name=nifty_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:27:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-2783bf76d36af57bd213da10c14cfe64442f671abcca61c0229a6fef83d125ec-merged.mount: Deactivated successfully.
Oct 11 04:27:18 compute-0 podman[94105]: 2025-10-11 04:27:18.476892539 +0000 UTC m=+1.639041224 container remove b3dc01affb2b889ddf62156dcf68d979a3b4f17ab47c10d0bab9a4cbac9cc8cb (image=quay.io/ceph/ceph:v18, name=nifty_ritchie, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 11 04:27:18 compute-0 systemd[1]: libpod-conmon-b3dc01affb2b889ddf62156dcf68d979a3b4f17ab47c10d0bab9a4cbac9cc8cb.scope: Deactivated successfully.
Oct 11 04:27:18 compute-0 sudo[94090]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:18 compute-0 sudo[94494]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xppykcehxvsmhczovpfsjbbnmrrdwipx ; /usr/bin/python3'
Oct 11 04:27:18 compute-0 sudo[94494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:27:18 compute-0 sudo[94237]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:18 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:27:18 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:18 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:27:18 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:18 compute-0 python3[94498]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:27:18 compute-0 sudo[94514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:27:18 compute-0 sudo[94514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:18 compute-0 sudo[94514]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:18 compute-0 podman[94538]: 2025-10-11 04:27:18.949776486 +0000 UTC m=+0.062909129 container create a5d6c34248d45ac2f53517bb647a26304d8b04950a42f0602ad2553934610775 (image=quay.io/ceph/ceph:v18, name=admiring_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507)
Oct 11 04:27:18 compute-0 sudo[94546]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:27:18 compute-0 sudo[94546]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:18 compute-0 sudo[94546]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:18 compute-0 systemd[1]: Started libpod-conmon-a5d6c34248d45ac2f53517bb647a26304d8b04950a42f0602ad2553934610775.scope.
Oct 11 04:27:19 compute-0 podman[94538]: 2025-10-11 04:27:18.918907866 +0000 UTC m=+0.032040599 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:27:19 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:27:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9fe9280ddeea1f5ecd0513dcf9c3923ba5e8d48b8165a004f2789352f37c768/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9fe9280ddeea1f5ecd0513dcf9c3923ba5e8d48b8165a004f2789352f37c768/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:19 compute-0 podman[94538]: 2025-10-11 04:27:19.052077826 +0000 UTC m=+0.165210509 container init a5d6c34248d45ac2f53517bb647a26304d8b04950a42f0602ad2553934610775 (image=quay.io/ceph/ceph:v18, name=admiring_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 11 04:27:19 compute-0 sudo[94580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:27:19 compute-0 sudo[94580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:19 compute-0 podman[94538]: 2025-10-11 04:27:19.059643604 +0000 UTC m=+0.172776237 container start a5d6c34248d45ac2f53517bb647a26304d8b04950a42f0602ad2553934610775 (image=quay.io/ceph/ceph:v18, name=admiring_ritchie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 11 04:27:19 compute-0 sudo[94580]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:19 compute-0 podman[94538]: 2025-10-11 04:27:19.06389504 +0000 UTC m=+0.177027723 container attach a5d6c34248d45ac2f53517bb647a26304d8b04950a42f0602ad2553934610775 (image=quay.io/ceph/ceph:v18, name=admiring_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 11 04:27:19 compute-0 sudo[94609]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 04:27:19 compute-0 sudo[94609]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:19 compute-0 ceph-mon[74243]: pgmap v58: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:27:19 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/2968104822' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Oct 11 04:27:19 compute-0 ceph-mon[74243]: osdmap e27: 3 total, 3 up, 3 in
Oct 11 04:27:19 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:19 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:19 compute-0 sudo[94609]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:19 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0) v1
Oct 11 04:27:19 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2598735980' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Oct 11 04:27:19 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:27:19 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:27:19 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:27:19 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:27:19 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:27:19 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:19 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev 57c76fa2-5734-40f3-a0f5-908c20fe62b6 does not exist
Oct 11 04:27:19 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev 27b0bcc9-7203-4ae4-9737-5609bd2f8839 does not exist
Oct 11 04:27:19 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev cf7278f9-ac01-4c7a-b4d4-a0a4eb612833 does not exist
Oct 11 04:27:19 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:27:19 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:27:19 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:27:19 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:27:19 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:27:19 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:27:19 compute-0 sudo[94685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:27:19 compute-0 sudo[94685]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:19 compute-0 sudo[94685]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:19 compute-0 ceph-mon[74243]: log_channel(cluster) log [WRN] : Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 11 04:27:19 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e27 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:27:19 compute-0 sudo[94710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:27:19 compute-0 sudo[94710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:19 compute-0 sudo[94710]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:19 compute-0 sudo[94735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:27:19 compute-0 sudo[94735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:19 compute-0 sudo[94735]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:19 compute-0 sudo[94760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 04:27:19 compute-0 sudo[94760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:20 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v60: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:27:20 compute-0 podman[94825]: 2025-10-11 04:27:20.359890432 +0000 UTC m=+0.064494309 container create 68e17a34cecee8f7c86d8522ad3dcf73eb3a005e9c08cca75ab0fed67f8aa162 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_banzai, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 11 04:27:20 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Oct 11 04:27:20 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/2598735980' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Oct 11 04:27:20 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:27:20 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:27:20 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:20 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:27:20 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:27:20 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:27:20 compute-0 ceph-mon[74243]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 11 04:27:20 compute-0 ceph-mon[74243]: pgmap v60: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:27:20 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2598735980' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Oct 11 04:27:20 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e28 e28: 3 total, 3 up, 3 in
Oct 11 04:27:20 compute-0 admiring_ritchie[94581]: enabled application 'rbd' on pool 'images'
Oct 11 04:27:20 compute-0 systemd[1]: Started libpod-conmon-68e17a34cecee8f7c86d8522ad3dcf73eb3a005e9c08cca75ab0fed67f8aa162.scope.
Oct 11 04:27:20 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e28: 3 total, 3 up, 3 in
Oct 11 04:27:20 compute-0 podman[94825]: 2025-10-11 04:27:20.33452758 +0000 UTC m=+0.039131517 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:27:20 compute-0 podman[94538]: 2025-10-11 04:27:20.427528278 +0000 UTC m=+1.540660921 container died a5d6c34248d45ac2f53517bb647a26304d8b04950a42f0602ad2553934610775 (image=quay.io/ceph/ceph:v18, name=admiring_ritchie, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Oct 11 04:27:20 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:27:20 compute-0 systemd[1]: libpod-a5d6c34248d45ac2f53517bb647a26304d8b04950a42f0602ad2553934610775.scope: Deactivated successfully.
Oct 11 04:27:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-a9fe9280ddeea1f5ecd0513dcf9c3923ba5e8d48b8165a004f2789352f37c768-merged.mount: Deactivated successfully.
Oct 11 04:27:20 compute-0 podman[94825]: 2025-10-11 04:27:20.472434797 +0000 UTC m=+0.177038824 container init 68e17a34cecee8f7c86d8522ad3dcf73eb3a005e9c08cca75ab0fed67f8aa162 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 11 04:27:20 compute-0 podman[94538]: 2025-10-11 04:27:20.477413421 +0000 UTC m=+1.590546064 container remove a5d6c34248d45ac2f53517bb647a26304d8b04950a42f0602ad2553934610775 (image=quay.io/ceph/ceph:v18, name=admiring_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:27:20 compute-0 podman[94825]: 2025-10-11 04:27:20.482543019 +0000 UTC m=+0.187146886 container start 68e17a34cecee8f7c86d8522ad3dcf73eb3a005e9c08cca75ab0fed67f8aa162 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_banzai, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 11 04:27:20 compute-0 practical_banzai[94842]: 167 167
Oct 11 04:27:20 compute-0 podman[94825]: 2025-10-11 04:27:20.486746004 +0000 UTC m=+0.191349941 container attach 68e17a34cecee8f7c86d8522ad3dcf73eb3a005e9c08cca75ab0fed67f8aa162 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 11 04:27:20 compute-0 systemd[1]: libpod-68e17a34cecee8f7c86d8522ad3dcf73eb3a005e9c08cca75ab0fed67f8aa162.scope: Deactivated successfully.
Oct 11 04:27:20 compute-0 podman[94825]: 2025-10-11 04:27:20.488075367 +0000 UTC m=+0.192679284 container died 68e17a34cecee8f7c86d8522ad3dcf73eb3a005e9c08cca75ab0fed67f8aa162 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 11 04:27:20 compute-0 systemd[1]: libpod-conmon-a5d6c34248d45ac2f53517bb647a26304d8b04950a42f0602ad2553934610775.scope: Deactivated successfully.
Oct 11 04:27:20 compute-0 sudo[94494]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-386f7bbf5bdd1eb0f6100fc220a50aad6e0b88a7ee38469fd95396aa66b8f128-merged.mount: Deactivated successfully.
Oct 11 04:27:20 compute-0 podman[94825]: 2025-10-11 04:27:20.534619517 +0000 UTC m=+0.239223384 container remove 68e17a34cecee8f7c86d8522ad3dcf73eb3a005e9c08cca75ab0fed67f8aa162 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_banzai, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:27:20 compute-0 systemd[1]: libpod-conmon-68e17a34cecee8f7c86d8522ad3dcf73eb3a005e9c08cca75ab0fed67f8aa162.scope: Deactivated successfully.
Oct 11 04:27:20 compute-0 sudo[94897]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewjrdaxmoxhbdelkjkcdjvxshsnulbnj ; /usr/bin/python3'
Oct 11 04:27:20 compute-0 sudo[94897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:27:20 compute-0 podman[94905]: 2025-10-11 04:27:20.75981517 +0000 UTC m=+0.070511549 container create 913a697971f4c8d28e54441903f4a53ee7c4914dd4d6e2804945be38d5100285 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_moore, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:27:20 compute-0 python3[94899]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:27:20 compute-0 systemd[1]: Started libpod-conmon-913a697971f4c8d28e54441903f4a53ee7c4914dd4d6e2804945be38d5100285.scope.
Oct 11 04:27:20 compute-0 podman[94905]: 2025-10-11 04:27:20.727165616 +0000 UTC m=+0.037862045 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:27:20 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:27:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16ea70bda8e7b8121381b8df33d001390cf59f4f9c1c957ca1b1cfb783e18c8b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16ea70bda8e7b8121381b8df33d001390cf59f4f9c1c957ca1b1cfb783e18c8b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16ea70bda8e7b8121381b8df33d001390cf59f4f9c1c957ca1b1cfb783e18c8b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16ea70bda8e7b8121381b8df33d001390cf59f4f9c1c957ca1b1cfb783e18c8b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16ea70bda8e7b8121381b8df33d001390cf59f4f9c1c957ca1b1cfb783e18c8b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:20 compute-0 podman[94905]: 2025-10-11 04:27:20.872211861 +0000 UTC m=+0.182908290 container init 913a697971f4c8d28e54441903f4a53ee7c4914dd4d6e2804945be38d5100285 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_moore, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 11 04:27:20 compute-0 podman[94919]: 2025-10-11 04:27:20.876747095 +0000 UTC m=+0.076897028 container create ded0d14a64019c3b1adb46ea4402ae79aa5794770a3ae0d24f8693ea21a2cb83 (image=quay.io/ceph/ceph:v18, name=lucid_raman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:27:20 compute-0 podman[94905]: 2025-10-11 04:27:20.892505067 +0000 UTC m=+0.203201456 container start 913a697971f4c8d28e54441903f4a53ee7c4914dd4d6e2804945be38d5100285 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_moore, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:27:20 compute-0 podman[94905]: 2025-10-11 04:27:20.896755833 +0000 UTC m=+0.207452272 container attach 913a697971f4c8d28e54441903f4a53ee7c4914dd4d6e2804945be38d5100285 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_moore, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 11 04:27:20 compute-0 systemd[1]: Started libpod-conmon-ded0d14a64019c3b1adb46ea4402ae79aa5794770a3ae0d24f8693ea21a2cb83.scope.
Oct 11 04:27:20 compute-0 podman[94919]: 2025-10-11 04:27:20.843122986 +0000 UTC m=+0.043272999 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:27:20 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:27:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d018bcc759e183e06be95fd83f0dc7095d230f807961454b84043aaa60681913/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d018bcc759e183e06be95fd83f0dc7095d230f807961454b84043aaa60681913/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:20 compute-0 podman[94919]: 2025-10-11 04:27:20.966799419 +0000 UTC m=+0.166949352 container init ded0d14a64019c3b1adb46ea4402ae79aa5794770a3ae0d24f8693ea21a2cb83 (image=quay.io/ceph/ceph:v18, name=lucid_raman, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 11 04:27:20 compute-0 podman[94919]: 2025-10-11 04:27:20.97647094 +0000 UTC m=+0.176620883 container start ded0d14a64019c3b1adb46ea4402ae79aa5794770a3ae0d24f8693ea21a2cb83 (image=quay.io/ceph/ceph:v18, name=lucid_raman, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:27:20 compute-0 podman[94919]: 2025-10-11 04:27:20.979922606 +0000 UTC m=+0.180072559 container attach ded0d14a64019c3b1adb46ea4402ae79aa5794770a3ae0d24f8693ea21a2cb83 (image=quay.io/ceph/ceph:v18, name=lucid_raman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 11 04:27:21 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/2598735980' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Oct 11 04:27:21 compute-0 ceph-mon[74243]: osdmap e28: 3 total, 3 up, 3 in
Oct 11 04:27:21 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0) v1
Oct 11 04:27:21 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2542410947' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Oct 11 04:27:21 compute-0 dreamy_moore[94929]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:27:21 compute-0 dreamy_moore[94929]: --> relative data size: 1.0
Oct 11 04:27:21 compute-0 dreamy_moore[94929]: --> All data devices are unavailable
Oct 11 04:27:22 compute-0 systemd[1]: libpod-913a697971f4c8d28e54441903f4a53ee7c4914dd4d6e2804945be38d5100285.scope: Deactivated successfully.
Oct 11 04:27:22 compute-0 systemd[1]: libpod-913a697971f4c8d28e54441903f4a53ee7c4914dd4d6e2804945be38d5100285.scope: Consumed 1.068s CPU time.
Oct 11 04:27:22 compute-0 podman[94905]: 2025-10-11 04:27:22.005999231 +0000 UTC m=+1.316695690 container died 913a697971f4c8d28e54441903f4a53ee7c4914dd4d6e2804945be38d5100285 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_moore, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 11 04:27:22 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v62: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:27:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-16ea70bda8e7b8121381b8df33d001390cf59f4f9c1c957ca1b1cfb783e18c8b-merged.mount: Deactivated successfully.
Oct 11 04:27:22 compute-0 podman[94905]: 2025-10-11 04:27:22.073221207 +0000 UTC m=+1.383917566 container remove 913a697971f4c8d28e54441903f4a53ee7c4914dd4d6e2804945be38d5100285 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_moore, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 11 04:27:22 compute-0 systemd[1]: libpod-conmon-913a697971f4c8d28e54441903f4a53ee7c4914dd4d6e2804945be38d5100285.scope: Deactivated successfully.
Oct 11 04:27:22 compute-0 sudo[94760]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:22 compute-0 sudo[95003]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:27:22 compute-0 sudo[95003]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:22 compute-0 sudo[95003]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:22 compute-0 sudo[95028]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:27:22 compute-0 sudo[95028]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:22 compute-0 sudo[95028]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:22 compute-0 sudo[95053]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:27:22 compute-0 sudo[95053]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:22 compute-0 sudo[95053]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:22 compute-0 sudo[95078]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -- lvm list --format json
Oct 11 04:27:22 compute-0 sudo[95078]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:22 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Oct 11 04:27:22 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/2542410947' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Oct 11 04:27:22 compute-0 ceph-mon[74243]: pgmap v62: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:27:22 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2542410947' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Oct 11 04:27:22 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e29 e29: 3 total, 3 up, 3 in
Oct 11 04:27:22 compute-0 lucid_raman[94941]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Oct 11 04:27:22 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e29: 3 total, 3 up, 3 in
Oct 11 04:27:22 compute-0 podman[94919]: 2025-10-11 04:27:22.439952398 +0000 UTC m=+1.640102331 container died ded0d14a64019c3b1adb46ea4402ae79aa5794770a3ae0d24f8693ea21a2cb83 (image=quay.io/ceph/ceph:v18, name=lucid_raman, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:27:22 compute-0 systemd[1]: libpod-ded0d14a64019c3b1adb46ea4402ae79aa5794770a3ae0d24f8693ea21a2cb83.scope: Deactivated successfully.
Oct 11 04:27:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-d018bcc759e183e06be95fd83f0dc7095d230f807961454b84043aaa60681913-merged.mount: Deactivated successfully.
Oct 11 04:27:22 compute-0 podman[94919]: 2025-10-11 04:27:22.492852046 +0000 UTC m=+1.693001979 container remove ded0d14a64019c3b1adb46ea4402ae79aa5794770a3ae0d24f8693ea21a2cb83 (image=quay.io/ceph/ceph:v18, name=lucid_raman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:27:22 compute-0 systemd[1]: libpod-conmon-ded0d14a64019c3b1adb46ea4402ae79aa5794770a3ae0d24f8693ea21a2cb83.scope: Deactivated successfully.
Oct 11 04:27:22 compute-0 sudo[94897]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:22 compute-0 sudo[95183]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egprqhsridqdhevwgaybiahjefaogvax ; /usr/bin/python3'
Oct 11 04:27:22 compute-0 sudo[95183]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:27:22 compute-0 podman[95171]: 2025-10-11 04:27:22.641381048 +0000 UTC m=+0.036123121 container create f564ab2afd68f0b334c62b7904e49d5af399bc0b409c98e256e3337b972e7936 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:27:22 compute-0 systemd[1]: Started libpod-conmon-f564ab2afd68f0b334c62b7904e49d5af399bc0b409c98e256e3337b972e7936.scope.
Oct 11 04:27:22 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:27:22 compute-0 podman[95171]: 2025-10-11 04:27:22.7064568 +0000 UTC m=+0.101198873 container init f564ab2afd68f0b334c62b7904e49d5af399bc0b409c98e256e3337b972e7936 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_maxwell, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 11 04:27:22 compute-0 podman[95171]: 2025-10-11 04:27:22.713043055 +0000 UTC m=+0.107785128 container start f564ab2afd68f0b334c62b7904e49d5af399bc0b409c98e256e3337b972e7936 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_maxwell, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 11 04:27:22 compute-0 podman[95171]: 2025-10-11 04:27:22.715914996 +0000 UTC m=+0.110657069 container attach f564ab2afd68f0b334c62b7904e49d5af399bc0b409c98e256e3337b972e7936 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_maxwell, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 11 04:27:22 compute-0 sad_maxwell[95195]: 167 167
Oct 11 04:27:22 compute-0 systemd[1]: libpod-f564ab2afd68f0b334c62b7904e49d5af399bc0b409c98e256e3337b972e7936.scope: Deactivated successfully.
Oct 11 04:27:22 compute-0 podman[95171]: 2025-10-11 04:27:22.717104136 +0000 UTC m=+0.111846199 container died f564ab2afd68f0b334c62b7904e49d5af399bc0b409c98e256e3337b972e7936 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_maxwell, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 11 04:27:22 compute-0 podman[95171]: 2025-10-11 04:27:22.624512598 +0000 UTC m=+0.019254691 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:27:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-1687e9c9605d03af3f196411b08e156cb2dfbccbfca7e1468963503be2d26836-merged.mount: Deactivated successfully.
Oct 11 04:27:22 compute-0 podman[95171]: 2025-10-11 04:27:22.747616066 +0000 UTC m=+0.142358169 container remove f564ab2afd68f0b334c62b7904e49d5af399bc0b409c98e256e3337b972e7936 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_maxwell, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 11 04:27:22 compute-0 systemd[1]: libpod-conmon-f564ab2afd68f0b334c62b7904e49d5af399bc0b409c98e256e3337b972e7936.scope: Deactivated successfully.
Oct 11 04:27:22 compute-0 python3[95191]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:27:22 compute-0 podman[95217]: 2025-10-11 04:27:22.858737046 +0000 UTC m=+0.044675494 container create 214e161fd6c2f2c8d1ae316b4ea60cc471ac1bdd91fc034325778732925acf0e (image=quay.io/ceph/ceph:v18, name=blissful_aryabhata, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 11 04:27:22 compute-0 systemd[1]: Started libpod-conmon-214e161fd6c2f2c8d1ae316b4ea60cc471ac1bdd91fc034325778732925acf0e.scope.
Oct 11 04:27:22 compute-0 podman[95235]: 2025-10-11 04:27:22.914985618 +0000 UTC m=+0.040026089 container create d36e61c3fa2a344b548485df932813e0e8ee943b27e73a422ec2de09ae17415c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_hodgkin, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:27:22 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:27:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0225fcd7392ad48545979eeeb9b6b720cd0aeb1d35646080b468608f8d9bf12/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0225fcd7392ad48545979eeeb9b6b720cd0aeb1d35646080b468608f8d9bf12/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:22 compute-0 podman[95217]: 2025-10-11 04:27:22.84084719 +0000 UTC m=+0.026785678 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:27:22 compute-0 podman[95217]: 2025-10-11 04:27:22.941629362 +0000 UTC m=+0.127567910 container init 214e161fd6c2f2c8d1ae316b4ea60cc471ac1bdd91fc034325778732925acf0e (image=quay.io/ceph/ceph:v18, name=blissful_aryabhata, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 11 04:27:22 compute-0 podman[95217]: 2025-10-11 04:27:22.951456377 +0000 UTC m=+0.137394835 container start 214e161fd6c2f2c8d1ae316b4ea60cc471ac1bdd91fc034325778732925acf0e (image=quay.io/ceph/ceph:v18, name=blissful_aryabhata, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:27:22 compute-0 podman[95217]: 2025-10-11 04:27:22.954897723 +0000 UTC m=+0.140836171 container attach 214e161fd6c2f2c8d1ae316b4ea60cc471ac1bdd91fc034325778732925acf0e (image=quay.io/ceph/ceph:v18, name=blissful_aryabhata, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:27:22 compute-0 systemd[1]: Started libpod-conmon-d36e61c3fa2a344b548485df932813e0e8ee943b27e73a422ec2de09ae17415c.scope.
Oct 11 04:27:22 compute-0 podman[95235]: 2025-10-11 04:27:22.897075992 +0000 UTC m=+0.022116473 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:27:22 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:27:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b92c036c648f6a70bd1004a3cd4e0164f7cb3b667010762a9cfc1e2f8077922/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b92c036c648f6a70bd1004a3cd4e0164f7cb3b667010762a9cfc1e2f8077922/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b92c036c648f6a70bd1004a3cd4e0164f7cb3b667010762a9cfc1e2f8077922/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b92c036c648f6a70bd1004a3cd4e0164f7cb3b667010762a9cfc1e2f8077922/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:23 compute-0 podman[95235]: 2025-10-11 04:27:23.019730359 +0000 UTC m=+0.144770840 container init d36e61c3fa2a344b548485df932813e0e8ee943b27e73a422ec2de09ae17415c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_hodgkin, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:27:23 compute-0 podman[95235]: 2025-10-11 04:27:23.031152644 +0000 UTC m=+0.156193105 container start d36e61c3fa2a344b548485df932813e0e8ee943b27e73a422ec2de09ae17415c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_hodgkin, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:27:23 compute-0 podman[95235]: 2025-10-11 04:27:23.034269211 +0000 UTC m=+0.159309662 container attach d36e61c3fa2a344b548485df932813e0e8ee943b27e73a422ec2de09ae17415c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_hodgkin, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 11 04:27:23 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/2542410947' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Oct 11 04:27:23 compute-0 ceph-mon[74243]: osdmap e29: 3 total, 3 up, 3 in
Oct 11 04:27:23 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0) v1
Oct 11 04:27:23 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1395550534' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Oct 11 04:27:23 compute-0 loving_hodgkin[95257]: {
Oct 11 04:27:23 compute-0 loving_hodgkin[95257]:     "0": [
Oct 11 04:27:23 compute-0 loving_hodgkin[95257]:         {
Oct 11 04:27:23 compute-0 loving_hodgkin[95257]:             "devices": [
Oct 11 04:27:23 compute-0 loving_hodgkin[95257]:                 "/dev/loop3"
Oct 11 04:27:23 compute-0 loving_hodgkin[95257]:             ],
Oct 11 04:27:23 compute-0 loving_hodgkin[95257]:             "lv_name": "ceph_lv0",
Oct 11 04:27:23 compute-0 loving_hodgkin[95257]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:27:23 compute-0 loving_hodgkin[95257]:             "lv_size": "21470642176",
Oct 11 04:27:23 compute-0 loving_hodgkin[95257]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29ed28f5-c2da-4c6f-bb64-dc7391248f4a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:27:23 compute-0 loving_hodgkin[95257]:             "lv_uuid": "SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf",
Oct 11 04:27:23 compute-0 loving_hodgkin[95257]:             "name": "ceph_lv0",
Oct 11 04:27:23 compute-0 loving_hodgkin[95257]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:27:23 compute-0 loving_hodgkin[95257]:             "tags": {
Oct 11 04:27:23 compute-0 loving_hodgkin[95257]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:27:23 compute-0 loving_hodgkin[95257]:                 "ceph.block_uuid": "SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf",
Oct 11 04:27:23 compute-0 loving_hodgkin[95257]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:27:23 compute-0 loving_hodgkin[95257]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:27:23 compute-0 loving_hodgkin[95257]:                 "ceph.cluster_name": "ceph",
Oct 11 04:27:23 compute-0 loving_hodgkin[95257]:                 "ceph.crush_device_class": "",
Oct 11 04:27:23 compute-0 loving_hodgkin[95257]:                 "ceph.encrypted": "0",
Oct 11 04:27:23 compute-0 loving_hodgkin[95257]:                 "ceph.osd_fsid": "29ed28f5-c2da-4c6f-bb64-dc7391248f4a",
Oct 11 04:27:23 compute-0 loving_hodgkin[95257]:                 "ceph.osd_id": "0",
Oct 11 04:27:23 compute-0 loving_hodgkin[95257]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:27:23 compute-0 loving_hodgkin[95257]:                 "ceph.type": "block",
Oct 11 04:27:23 compute-0 loving_hodgkin[95257]:                 "ceph.vdo": "0"
Oct 11 04:27:23 compute-0 loving_hodgkin[95257]:             },
Oct 11 04:27:23 compute-0 loving_hodgkin[95257]:             "type": "block",
Oct 11 04:27:23 compute-0 loving_hodgkin[95257]:             "vg_name": "ceph_vg0"
Oct 11 04:27:23 compute-0 loving_hodgkin[95257]:         }
Oct 11 04:27:23 compute-0 loving_hodgkin[95257]:     ],
Oct 11 04:27:23 compute-0 loving_hodgkin[95257]:     "1": [
Oct 11 04:27:23 compute-0 loving_hodgkin[95257]:         {
Oct 11 04:27:23 compute-0 loving_hodgkin[95257]:             "devices": [
Oct 11 04:27:23 compute-0 loving_hodgkin[95257]:                 "/dev/loop4"
Oct 11 04:27:23 compute-0 loving_hodgkin[95257]:             ],
Oct 11 04:27:23 compute-0 loving_hodgkin[95257]:             "lv_name": "ceph_lv1",
Oct 11 04:27:23 compute-0 loving_hodgkin[95257]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:27:23 compute-0 loving_hodgkin[95257]:             "lv_size": "21470642176",
Oct 11 04:27:23 compute-0 loving_hodgkin[95257]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=235849fc-4683-43e5-9b6a-a0d6f8d1cee8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:27:23 compute-0 loving_hodgkin[95257]:             "lv_uuid": "N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol",
Oct 11 04:27:23 compute-0 loving_hodgkin[95257]:             "name": "ceph_lv1",
Oct 11 04:27:23 compute-0 loving_hodgkin[95257]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:27:23 compute-0 loving_hodgkin[95257]:             "tags": {
Oct 11 04:27:23 compute-0 loving_hodgkin[95257]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:27:23 compute-0 loving_hodgkin[95257]:                 "ceph.block_uuid": "N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol",
Oct 11 04:27:23 compute-0 loving_hodgkin[95257]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:27:23 compute-0 loving_hodgkin[95257]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:27:23 compute-0 loving_hodgkin[95257]:                 "ceph.cluster_name": "ceph",
Oct 11 04:27:23 compute-0 loving_hodgkin[95257]:                 "ceph.crush_device_class": "",
Oct 11 04:27:23 compute-0 loving_hodgkin[95257]:                 "ceph.encrypted": "0",
Oct 11 04:27:23 compute-0 loving_hodgkin[95257]:                 "ceph.osd_fsid": "235849fc-4683-43e5-9b6a-a0d6f8d1cee8",
Oct 11 04:27:23 compute-0 loving_hodgkin[95257]:                 "ceph.osd_id": "1",
Oct 11 04:27:23 compute-0 loving_hodgkin[95257]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:27:23 compute-0 loving_hodgkin[95257]:                 "ceph.type": "block",
Oct 11 04:27:23 compute-0 loving_hodgkin[95257]:                 "ceph.vdo": "0"
Oct 11 04:27:23 compute-0 loving_hodgkin[95257]:             },
Oct 11 04:27:23 compute-0 loving_hodgkin[95257]:             "type": "block",
Oct 11 04:27:23 compute-0 loving_hodgkin[95257]:             "vg_name": "ceph_vg1"
Oct 11 04:27:23 compute-0 loving_hodgkin[95257]:         }
Oct 11 04:27:23 compute-0 loving_hodgkin[95257]:     ],
Oct 11 04:27:23 compute-0 loving_hodgkin[95257]:     "2": [
Oct 11 04:27:23 compute-0 loving_hodgkin[95257]:         {
Oct 11 04:27:23 compute-0 loving_hodgkin[95257]:             "devices": [
Oct 11 04:27:23 compute-0 loving_hodgkin[95257]:                 "/dev/loop5"
Oct 11 04:27:23 compute-0 loving_hodgkin[95257]:             ],
Oct 11 04:27:23 compute-0 loving_hodgkin[95257]:             "lv_name": "ceph_lv2",
Oct 11 04:27:23 compute-0 loving_hodgkin[95257]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:27:23 compute-0 loving_hodgkin[95257]:             "lv_size": "21470642176",
Oct 11 04:27:23 compute-0 loving_hodgkin[95257]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=30486846-2cc7-4787-8e36-0fb42ef328c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:27:23 compute-0 loving_hodgkin[95257]:             "lv_uuid": "HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a",
Oct 11 04:27:23 compute-0 loving_hodgkin[95257]:             "name": "ceph_lv2",
Oct 11 04:27:23 compute-0 loving_hodgkin[95257]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:27:23 compute-0 loving_hodgkin[95257]:             "tags": {
Oct 11 04:27:23 compute-0 loving_hodgkin[95257]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:27:23 compute-0 loving_hodgkin[95257]:                 "ceph.block_uuid": "HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a",
Oct 11 04:27:23 compute-0 loving_hodgkin[95257]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:27:23 compute-0 loving_hodgkin[95257]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:27:23 compute-0 loving_hodgkin[95257]:                 "ceph.cluster_name": "ceph",
Oct 11 04:27:23 compute-0 loving_hodgkin[95257]:                 "ceph.crush_device_class": "",
Oct 11 04:27:23 compute-0 loving_hodgkin[95257]:                 "ceph.encrypted": "0",
Oct 11 04:27:23 compute-0 loving_hodgkin[95257]:                 "ceph.osd_fsid": "30486846-2cc7-4787-8e36-0fb42ef328c5",
Oct 11 04:27:23 compute-0 loving_hodgkin[95257]:                 "ceph.osd_id": "2",
Oct 11 04:27:23 compute-0 loving_hodgkin[95257]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:27:23 compute-0 loving_hodgkin[95257]:                 "ceph.type": "block",
Oct 11 04:27:23 compute-0 loving_hodgkin[95257]:                 "ceph.vdo": "0"
Oct 11 04:27:23 compute-0 loving_hodgkin[95257]:             },
Oct 11 04:27:23 compute-0 loving_hodgkin[95257]:             "type": "block",
Oct 11 04:27:23 compute-0 loving_hodgkin[95257]:             "vg_name": "ceph_vg2"
Oct 11 04:27:23 compute-0 loving_hodgkin[95257]:         }
Oct 11 04:27:23 compute-0 loving_hodgkin[95257]:     ]
Oct 11 04:27:23 compute-0 loving_hodgkin[95257]: }
Oct 11 04:27:23 compute-0 systemd[1]: libpod-d36e61c3fa2a344b548485df932813e0e8ee943b27e73a422ec2de09ae17415c.scope: Deactivated successfully.
Oct 11 04:27:23 compute-0 podman[95235]: 2025-10-11 04:27:23.815833031 +0000 UTC m=+0.940873492 container died d36e61c3fa2a344b548485df932813e0e8ee943b27e73a422ec2de09ae17415c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_hodgkin, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 11 04:27:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b92c036c648f6a70bd1004a3cd4e0164f7cb3b667010762a9cfc1e2f8077922-merged.mount: Deactivated successfully.
Oct 11 04:27:23 compute-0 podman[95235]: 2025-10-11 04:27:23.871932119 +0000 UTC m=+0.996972580 container remove d36e61c3fa2a344b548485df932813e0e8ee943b27e73a422ec2de09ae17415c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_hodgkin, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 11 04:27:23 compute-0 systemd[1]: libpod-conmon-d36e61c3fa2a344b548485df932813e0e8ee943b27e73a422ec2de09ae17415c.scope: Deactivated successfully.
Oct 11 04:27:23 compute-0 sudo[95078]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:23 compute-0 sudo[95297]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:27:23 compute-0 sudo[95297]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:23 compute-0 sudo[95297]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:24 compute-0 sudo[95322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:27:24 compute-0 sudo[95322]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:24 compute-0 sudo[95322]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:24 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v64: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:27:24 compute-0 sudo[95347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:27:24 compute-0 sudo[95347]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:24 compute-0 sudo[95347]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:24 compute-0 sudo[95372]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -- raw list --format json
Oct 11 04:27:24 compute-0 sudo[95372]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:24 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Oct 11 04:27:24 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/1395550534' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Oct 11 04:27:24 compute-0 ceph-mon[74243]: pgmap v64: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:27:24 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1395550534' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Oct 11 04:27:24 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e30 e30: 3 total, 3 up, 3 in
Oct 11 04:27:24 compute-0 blissful_aryabhata[95248]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Oct 11 04:27:24 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e30: 3 total, 3 up, 3 in
Oct 11 04:27:24 compute-0 podman[95436]: 2025-10-11 04:27:24.457700379 +0000 UTC m=+0.036731156 container create 3e119570365f695f9c861ca57f04a0bc31799c6b5eaebbe92ce59c3d69f20e94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_mcnulty, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 11 04:27:24 compute-0 systemd[1]: libpod-214e161fd6c2f2c8d1ae316b4ea60cc471ac1bdd91fc034325778732925acf0e.scope: Deactivated successfully.
Oct 11 04:27:24 compute-0 systemd[1]: Started libpod-conmon-3e119570365f695f9c861ca57f04a0bc31799c6b5eaebbe92ce59c3d69f20e94.scope.
Oct 11 04:27:24 compute-0 podman[95451]: 2025-10-11 04:27:24.532166006 +0000 UTC m=+0.038177553 container died 214e161fd6c2f2c8d1ae316b4ea60cc471ac1bdd91fc034325778732925acf0e (image=quay.io/ceph/ceph:v18, name=blissful_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:27:24 compute-0 podman[95436]: 2025-10-11 04:27:24.442932351 +0000 UTC m=+0.021963158 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:27:24 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:27:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-f0225fcd7392ad48545979eeeb9b6b720cd0aeb1d35646080b468608f8d9bf12-merged.mount: Deactivated successfully.
Oct 11 04:27:24 compute-0 podman[95436]: 2025-10-11 04:27:24.570105521 +0000 UTC m=+0.149136348 container init 3e119570365f695f9c861ca57f04a0bc31799c6b5eaebbe92ce59c3d69f20e94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_mcnulty, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 11 04:27:24 compute-0 podman[95451]: 2025-10-11 04:27:24.57529168 +0000 UTC m=+0.081303137 container remove 214e161fd6c2f2c8d1ae316b4ea60cc471ac1bdd91fc034325778732925acf0e (image=quay.io/ceph/ceph:v18, name=blissful_aryabhata, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 11 04:27:24 compute-0 podman[95436]: 2025-10-11 04:27:24.577712511 +0000 UTC m=+0.156743298 container start 3e119570365f695f9c861ca57f04a0bc31799c6b5eaebbe92ce59c3d69f20e94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_mcnulty, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 11 04:27:24 compute-0 hungry_mcnulty[95459]: 167 167
Oct 11 04:27:24 compute-0 podman[95436]: 2025-10-11 04:27:24.580810728 +0000 UTC m=+0.159841515 container attach 3e119570365f695f9c861ca57f04a0bc31799c6b5eaebbe92ce59c3d69f20e94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_mcnulty, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:27:24 compute-0 podman[95436]: 2025-10-11 04:27:24.581496075 +0000 UTC m=+0.160526862 container died 3e119570365f695f9c861ca57f04a0bc31799c6b5eaebbe92ce59c3d69f20e94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_mcnulty, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 11 04:27:24 compute-0 systemd[1]: libpod-3e119570365f695f9c861ca57f04a0bc31799c6b5eaebbe92ce59c3d69f20e94.scope: Deactivated successfully.
Oct 11 04:27:24 compute-0 systemd[1]: libpod-conmon-214e161fd6c2f2c8d1ae316b4ea60cc471ac1bdd91fc034325778732925acf0e.scope: Deactivated successfully.
Oct 11 04:27:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-6a53a117f4ca96524d6e6cd5a43b791a7b27b1841e7f1167bfc44b47c1c46a7c-merged.mount: Deactivated successfully.
Oct 11 04:27:24 compute-0 sudo[95183]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:24 compute-0 podman[95436]: 2025-10-11 04:27:24.625666306 +0000 UTC m=+0.204697083 container remove 3e119570365f695f9c861ca57f04a0bc31799c6b5eaebbe92ce59c3d69f20e94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_mcnulty, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:27:24 compute-0 systemd[1]: libpod-conmon-3e119570365f695f9c861ca57f04a0bc31799c6b5eaebbe92ce59c3d69f20e94.scope: Deactivated successfully.
Oct 11 04:27:24 compute-0 ceph-mon[74243]: log_channel(cluster) log [WRN] : Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 11 04:27:24 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e30 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:27:24 compute-0 podman[95491]: 2025-10-11 04:27:24.777205923 +0000 UTC m=+0.051629538 container create f9196eed1f67ea218a6c1f627bfc427b7e90665233a096161a45d87b9cd83372 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_hellman, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:27:24 compute-0 systemd[1]: Started libpod-conmon-f9196eed1f67ea218a6c1f627bfc427b7e90665233a096161a45d87b9cd83372.scope.
Oct 11 04:27:24 compute-0 podman[95491]: 2025-10-11 04:27:24.749834111 +0000 UTC m=+0.024257786 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:27:24 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:27:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaaf742fe6ef2e9d7b1aa44b1b5b3a0924d64345a96895663160a2c0a7e2bd17/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaaf742fe6ef2e9d7b1aa44b1b5b3a0924d64345a96895663160a2c0a7e2bd17/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaaf742fe6ef2e9d7b1aa44b1b5b3a0924d64345a96895663160a2c0a7e2bd17/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaaf742fe6ef2e9d7b1aa44b1b5b3a0924d64345a96895663160a2c0a7e2bd17/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:24 compute-0 podman[95491]: 2025-10-11 04:27:24.884298452 +0000 UTC m=+0.158722087 container init f9196eed1f67ea218a6c1f627bfc427b7e90665233a096161a45d87b9cd83372 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_hellman, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:27:24 compute-0 podman[95491]: 2025-10-11 04:27:24.894207889 +0000 UTC m=+0.168631474 container start f9196eed1f67ea218a6c1f627bfc427b7e90665233a096161a45d87b9cd83372 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_hellman, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:27:24 compute-0 podman[95491]: 2025-10-11 04:27:24.897554733 +0000 UTC m=+0.171978418 container attach f9196eed1f67ea218a6c1f627bfc427b7e90665233a096161a45d87b9cd83372 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_hellman, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:27:25 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/1395550534' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Oct 11 04:27:25 compute-0 ceph-mon[74243]: osdmap e30: 3 total, 3 up, 3 in
Oct 11 04:27:25 compute-0 ceph-mon[74243]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 11 04:27:25 compute-0 python3[95587]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 11 04:27:25 compute-0 angry_hellman[95507]: {
Oct 11 04:27:25 compute-0 angry_hellman[95507]:     "235849fc-4683-43e5-9b6a-a0d6f8d1cee8": {
Oct 11 04:27:25 compute-0 angry_hellman[95507]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:27:25 compute-0 angry_hellman[95507]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:27:25 compute-0 angry_hellman[95507]:         "osd_id": 1,
Oct 11 04:27:25 compute-0 angry_hellman[95507]:         "osd_uuid": "235849fc-4683-43e5-9b6a-a0d6f8d1cee8",
Oct 11 04:27:25 compute-0 angry_hellman[95507]:         "type": "bluestore"
Oct 11 04:27:25 compute-0 angry_hellman[95507]:     },
Oct 11 04:27:25 compute-0 angry_hellman[95507]:     "29ed28f5-c2da-4c6f-bb64-dc7391248f4a": {
Oct 11 04:27:25 compute-0 angry_hellman[95507]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:27:25 compute-0 angry_hellman[95507]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:27:25 compute-0 angry_hellman[95507]:         "osd_id": 0,
Oct 11 04:27:25 compute-0 angry_hellman[95507]:         "osd_uuid": "29ed28f5-c2da-4c6f-bb64-dc7391248f4a",
Oct 11 04:27:25 compute-0 angry_hellman[95507]:         "type": "bluestore"
Oct 11 04:27:25 compute-0 angry_hellman[95507]:     },
Oct 11 04:27:25 compute-0 angry_hellman[95507]:     "30486846-2cc7-4787-8e36-0fb42ef328c5": {
Oct 11 04:27:25 compute-0 angry_hellman[95507]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:27:25 compute-0 angry_hellman[95507]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:27:25 compute-0 angry_hellman[95507]:         "osd_id": 2,
Oct 11 04:27:25 compute-0 angry_hellman[95507]:         "osd_uuid": "30486846-2cc7-4787-8e36-0fb42ef328c5",
Oct 11 04:27:25 compute-0 angry_hellman[95507]:         "type": "bluestore"
Oct 11 04:27:25 compute-0 angry_hellman[95507]:     }
Oct 11 04:27:25 compute-0 angry_hellman[95507]: }
Oct 11 04:27:25 compute-0 systemd[1]: libpod-f9196eed1f67ea218a6c1f627bfc427b7e90665233a096161a45d87b9cd83372.scope: Deactivated successfully.
Oct 11 04:27:25 compute-0 podman[95491]: 2025-10-11 04:27:25.861750555 +0000 UTC m=+1.136174180 container died f9196eed1f67ea218a6c1f627bfc427b7e90665233a096161a45d87b9cd83372 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_hellman, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 11 04:27:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-aaaf742fe6ef2e9d7b1aa44b1b5b3a0924d64345a96895663160a2c0a7e2bd17-merged.mount: Deactivated successfully.
Oct 11 04:27:25 compute-0 python3[95678]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760156845.2175508-33110-30571023393976/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=0a1ea65aada399f80274d3cc2047646f2797712b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:27:25 compute-0 podman[95491]: 2025-10-11 04:27:25.920540851 +0000 UTC m=+1.194964436 container remove f9196eed1f67ea218a6c1f627bfc427b7e90665233a096161a45d87b9cd83372 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_hellman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:27:25 compute-0 systemd[1]: libpod-conmon-f9196eed1f67ea218a6c1f627bfc427b7e90665233a096161a45d87b9cd83372.scope: Deactivated successfully.
Oct 11 04:27:25 compute-0 sudo[95372]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:25 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:27:25 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:25 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:27:25 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:26 compute-0 sudo[95706]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:27:26 compute-0 sudo[95706]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:26 compute-0 sudo[95706]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:26 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v66: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:27:26 compute-0 sudo[95749]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 04:27:26 compute-0 sudo[95749]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:26 compute-0 sudo[95749]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:27:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:27:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:27:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:27:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:27:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:27:26 compute-0 sudo[95849]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azjmutkpkurajftloooeimjimioaebjj ; /usr/bin/python3'
Oct 11 04:27:26 compute-0 sudo[95849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:27:26 compute-0 python3[95851]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 11 04:27:26 compute-0 sudo[95849]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:26 compute-0 sudo[95924]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fuecvczuldxjjfemovfprifsvkwmvcsj ; /usr/bin/python3'
Oct 11 04:27:26 compute-0 sudo[95924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:27:26 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:26 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:26 compute-0 ceph-mon[74243]: pgmap v66: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:27:26 compute-0 ceph-mon[74243]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct 11 04:27:26 compute-0 ceph-mon[74243]: log_channel(cluster) log [INF] : Cluster is now healthy
Oct 11 04:27:27 compute-0 python3[95926]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760156846.268408-33124-279810741255362/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=59d3bf29763fd1e10cf67be615f5d84b8c284ab0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:27:27 compute-0 sudo[95924]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:27 compute-0 sudo[95974]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdvkxavflqnudrzpfhcrphpolfmywzzw ; /usr/bin/python3'
Oct 11 04:27:27 compute-0 sudo[95974]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:27:27 compute-0 python3[95976]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf
                                           _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:27:27 compute-0 podman[95977]: 2025-10-11 04:27:27.615292322 +0000 UTC m=+0.074758295 container create a08ea69cf3eebb29271033905b4143f95483b99b1f2f2ccf4038a4d0fbd75690 (image=quay.io/ceph/ceph:v18, name=vigilant_chaplygin, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 11 04:27:27 compute-0 systemd[1]: Started libpod-conmon-a08ea69cf3eebb29271033905b4143f95483b99b1f2f2ccf4038a4d0fbd75690.scope.
Oct 11 04:27:27 compute-0 podman[95977]: 2025-10-11 04:27:27.580953506 +0000 UTC m=+0.040419529 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:27:27 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:27:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c46cad7cab9ca2f06e1daa7073613d0e14204564d936eadb7a3fc507fff05473/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c46cad7cab9ca2f06e1daa7073613d0e14204564d936eadb7a3fc507fff05473/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c46cad7cab9ca2f06e1daa7073613d0e14204564d936eadb7a3fc507fff05473/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:27 compute-0 podman[95977]: 2025-10-11 04:27:27.708922055 +0000 UTC m=+0.168388058 container init a08ea69cf3eebb29271033905b4143f95483b99b1f2f2ccf4038a4d0fbd75690 (image=quay.io/ceph/ceph:v18, name=vigilant_chaplygin, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:27:27 compute-0 podman[95977]: 2025-10-11 04:27:27.719453468 +0000 UTC m=+0.178919471 container start a08ea69cf3eebb29271033905b4143f95483b99b1f2f2ccf4038a4d0fbd75690 (image=quay.io/ceph/ceph:v18, name=vigilant_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:27:27 compute-0 podman[95977]: 2025-10-11 04:27:27.723466028 +0000 UTC m=+0.182932011 container attach a08ea69cf3eebb29271033905b4143f95483b99b1f2f2ccf4038a4d0fbd75690 (image=quay.io/ceph/ceph:v18, name=vigilant_chaplygin, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 11 04:27:27 compute-0 ceph-mon[74243]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct 11 04:27:27 compute-0 ceph-mon[74243]: Cluster is now healthy
Oct 11 04:27:28 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v67: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:27:28 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Oct 11 04:27:28 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3565616168' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct 11 04:27:28 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3565616168' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Oct 11 04:27:28 compute-0 vigilant_chaplygin[95992]: 
Oct 11 04:27:28 compute-0 vigilant_chaplygin[95992]: [global]
Oct 11 04:27:28 compute-0 vigilant_chaplygin[95992]:         fsid = 166d0489-2ae7-59eb-961c-c1b5cda4b45a
Oct 11 04:27:28 compute-0 vigilant_chaplygin[95992]:         mon_host = 192.168.122.100
Oct 11 04:27:28 compute-0 systemd[1]: libpod-a08ea69cf3eebb29271033905b4143f95483b99b1f2f2ccf4038a4d0fbd75690.scope: Deactivated successfully.
Oct 11 04:27:28 compute-0 podman[95977]: 2025-10-11 04:27:28.299170177 +0000 UTC m=+0.758636150 container died a08ea69cf3eebb29271033905b4143f95483b99b1f2f2ccf4038a4d0fbd75690 (image=quay.io/ceph/ceph:v18, name=vigilant_chaplygin, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 11 04:27:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-c46cad7cab9ca2f06e1daa7073613d0e14204564d936eadb7a3fc507fff05473-merged.mount: Deactivated successfully.
Oct 11 04:27:28 compute-0 podman[95977]: 2025-10-11 04:27:28.34664243 +0000 UTC m=+0.806108393 container remove a08ea69cf3eebb29271033905b4143f95483b99b1f2f2ccf4038a4d0fbd75690 (image=quay.io/ceph/ceph:v18, name=vigilant_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:27:28 compute-0 systemd[1]: libpod-conmon-a08ea69cf3eebb29271033905b4143f95483b99b1f2f2ccf4038a4d0fbd75690.scope: Deactivated successfully.
Oct 11 04:27:28 compute-0 sudo[96017]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:27:28 compute-0 sudo[96017]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:28 compute-0 sudo[95974]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:28 compute-0 sudo[96017]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:28 compute-0 sudo[96054]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:27:28 compute-0 sudo[96054]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:28 compute-0 sudo[96054]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:28 compute-0 sudo[96080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:27:28 compute-0 sudo[96125]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgjeebavzqsoafdoajolktmbneqaoeds ; /usr/bin/python3'
Oct 11 04:27:28 compute-0 sudo[96080]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:28 compute-0 sudo[96125]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:27:28 compute-0 sudo[96080]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:28 compute-0 sudo[96130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 11 04:27:28 compute-0 sudo[96130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:28 compute-0 python3[96129]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:27:28 compute-0 podman[96155]: 2025-10-11 04:27:28.721659338 +0000 UTC m=+0.056510390 container create e15c6dd1d7ebbb3f5c314642257f8bd1c2e219f7148e7792334bea7ca91feb46 (image=quay.io/ceph/ceph:v18, name=quizzical_gagarin, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:27:28 compute-0 systemd[1]: Started libpod-conmon-e15c6dd1d7ebbb3f5c314642257f8bd1c2e219f7148e7792334bea7ca91feb46.scope.
Oct 11 04:27:28 compute-0 podman[96155]: 2025-10-11 04:27:28.70330195 +0000 UTC m=+0.038153042 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:27:28 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:27:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/894af050d3e0aa0ca7bca9769a066db955eb0df27680e1efb019db41dc048b42/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/894af050d3e0aa0ca7bca9769a066db955eb0df27680e1efb019db41dc048b42/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/894af050d3e0aa0ca7bca9769a066db955eb0df27680e1efb019db41dc048b42/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:28 compute-0 podman[96155]: 2025-10-11 04:27:28.832971822 +0000 UTC m=+0.167822954 container init e15c6dd1d7ebbb3f5c314642257f8bd1c2e219f7148e7792334bea7ca91feb46 (image=quay.io/ceph/ceph:v18, name=quizzical_gagarin, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:27:28 compute-0 podman[96155]: 2025-10-11 04:27:28.844171551 +0000 UTC m=+0.179022613 container start e15c6dd1d7ebbb3f5c314642257f8bd1c2e219f7148e7792334bea7ca91feb46 (image=quay.io/ceph/ceph:v18, name=quizzical_gagarin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:27:28 compute-0 podman[96155]: 2025-10-11 04:27:28.848198522 +0000 UTC m=+0.183049664 container attach e15c6dd1d7ebbb3f5c314642257f8bd1c2e219f7148e7792334bea7ca91feb46 (image=quay.io/ceph/ceph:v18, name=quizzical_gagarin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 11 04:27:28 compute-0 ceph-mon[74243]: pgmap v67: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:27:28 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/3565616168' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct 11 04:27:28 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/3565616168' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Oct 11 04:27:29 compute-0 podman[96246]: 2025-10-11 04:27:29.138025456 +0000 UTC m=+0.065119764 container exec 9e6c9a4e99dcfe74f2acde1b3251aae6bd833ab8c3a16ed08813556f7d4dbff5 (image=quay.io/ceph/ceph:v18, name=ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 11 04:27:29 compute-0 podman[96246]: 2025-10-11 04:27:29.224093481 +0000 UTC m=+0.151187789 container exec_died 9e6c9a4e99dcfe74f2acde1b3251aae6bd833ab8c3a16ed08813556f7d4dbff5 (image=quay.io/ceph/ceph:v18, name=ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:27:29 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0) v1
Oct 11 04:27:29 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2526813234' entity='client.admin' 
Oct 11 04:27:29 compute-0 quizzical_gagarin[96181]: set ssl_option
Oct 11 04:27:29 compute-0 systemd[1]: libpod-e15c6dd1d7ebbb3f5c314642257f8bd1c2e219f7148e7792334bea7ca91feb46.scope: Deactivated successfully.
Oct 11 04:27:29 compute-0 podman[96155]: 2025-10-11 04:27:29.478188684 +0000 UTC m=+0.813039736 container died e15c6dd1d7ebbb3f5c314642257f8bd1c2e219f7148e7792334bea7ca91feb46 (image=quay.io/ceph/ceph:v18, name=quizzical_gagarin, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 11 04:27:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-894af050d3e0aa0ca7bca9769a066db955eb0df27680e1efb019db41dc048b42-merged.mount: Deactivated successfully.
Oct 11 04:27:29 compute-0 podman[96155]: 2025-10-11 04:27:29.542325713 +0000 UTC m=+0.877176795 container remove e15c6dd1d7ebbb3f5c314642257f8bd1c2e219f7148e7792334bea7ca91feb46 (image=quay.io/ceph/ceph:v18, name=quizzical_gagarin, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Oct 11 04:27:29 compute-0 systemd[1]: libpod-conmon-e15c6dd1d7ebbb3f5c314642257f8bd1c2e219f7148e7792334bea7ca91feb46.scope: Deactivated successfully.
Oct 11 04:27:29 compute-0 sudo[96125]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:29 compute-0 sudo[96420]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddnsgextwhjuilebaglufcjckttnisew ; /usr/bin/python3'
Oct 11 04:27:29 compute-0 sudo[96420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:27:29 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e30 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:27:29 compute-0 sudo[96130]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:29 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:27:29 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:29 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:27:29 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:29 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:27:29 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:27:29 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:27:29 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:27:29 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:27:29 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:29 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev 4896d8c7-2b38-4661-aa3d-1cd15ae7ab4e does not exist
Oct 11 04:27:29 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev 25dc2b9c-5ccb-431e-8737-1c363751251a does not exist
Oct 11 04:27:29 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev dbc242e6-022b-4a73-be70-5505d15f8a9f does not exist
Oct 11 04:27:29 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:27:29 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:27:29 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:27:29 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:27:29 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:27:29 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:27:29 compute-0 sudo[96430]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:27:29 compute-0 sudo[96430]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:29 compute-0 sudo[96430]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:29 compute-0 python3[96429]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:27:29 compute-0 sudo[96455]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:27:29 compute-0 sudo[96455]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:29 compute-0 sudo[96455]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:29 compute-0 podman[96456]: 2025-10-11 04:27:29.981467668 +0000 UTC m=+0.060818066 container create 4dc5ec6771aac3d848ec94399f6e86549137d50e1dabefdc14a9d040e18d96a0 (image=quay.io/ceph/ceph:v18, name=eloquent_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 11 04:27:30 compute-0 systemd[1]: Started libpod-conmon-4dc5ec6771aac3d848ec94399f6e86549137d50e1dabefdc14a9d040e18d96a0.scope.
Oct 11 04:27:30 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v68: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:27:30 compute-0 podman[96456]: 2025-10-11 04:27:29.951103762 +0000 UTC m=+0.030454230 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:27:30 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:27:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/daba5ed22a5b1e52d566a96f2fd5fb1063ccf3462634b824bc544408d5e65475/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/daba5ed22a5b1e52d566a96f2fd5fb1063ccf3462634b824bc544408d5e65475/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/daba5ed22a5b1e52d566a96f2fd5fb1063ccf3462634b824bc544408d5e65475/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:30 compute-0 sudo[96493]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:27:30 compute-0 sudo[96493]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:30 compute-0 sudo[96493]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:30 compute-0 podman[96456]: 2025-10-11 04:27:30.095089371 +0000 UTC m=+0.174439819 container init 4dc5ec6771aac3d848ec94399f6e86549137d50e1dabefdc14a9d040e18d96a0 (image=quay.io/ceph/ceph:v18, name=eloquent_babbage, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 11 04:27:30 compute-0 podman[96456]: 2025-10-11 04:27:30.106446344 +0000 UTC m=+0.185796742 container start 4dc5ec6771aac3d848ec94399f6e86549137d50e1dabefdc14a9d040e18d96a0 (image=quay.io/ceph/ceph:v18, name=eloquent_babbage, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:27:30 compute-0 podman[96456]: 2025-10-11 04:27:30.116420362 +0000 UTC m=+0.195770790 container attach 4dc5ec6771aac3d848ec94399f6e86549137d50e1dabefdc14a9d040e18d96a0 (image=quay.io/ceph/ceph:v18, name=eloquent_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 11 04:27:30 compute-0 sudo[96524]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 04:27:30 compute-0 sudo[96524]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:30 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/2526813234' entity='client.admin' 
Oct 11 04:27:30 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:30 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:30 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:27:30 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:27:30 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:30 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:27:30 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:27:30 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:27:30 compute-0 ceph-mon[74243]: pgmap v68: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:27:30 compute-0 podman[96603]: 2025-10-11 04:27:30.593190356 +0000 UTC m=+0.046511921 container create 41df8862435ff57b9c3ac3c26095dc10cd03f59740ac09f2f3c64c528bc45b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_khayyam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 11 04:27:30 compute-0 systemd[1]: Started libpod-conmon-41df8862435ff57b9c3ac3c26095dc10cd03f59740ac09f2f3c64c528bc45b2f.scope.
Oct 11 04:27:30 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:27:30 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.14244 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:27:30 compute-0 ceph-mgr[74542]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0
Oct 11 04:27:30 compute-0 ceph-mgr[74542]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Oct 11 04:27:30 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Oct 11 04:27:30 compute-0 podman[96603]: 2025-10-11 04:27:30.569755332 +0000 UTC m=+0.023076977 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:27:30 compute-0 podman[96603]: 2025-10-11 04:27:30.680011269 +0000 UTC m=+0.133332884 container init 41df8862435ff57b9c3ac3c26095dc10cd03f59740ac09f2f3c64c528bc45b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_khayyam, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:27:30 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:30 compute-0 eloquent_babbage[96515]: Scheduled rgw.rgw update...
Oct 11 04:27:30 compute-0 podman[96603]: 2025-10-11 04:27:30.689593718 +0000 UTC m=+0.142915283 container start 41df8862435ff57b9c3ac3c26095dc10cd03f59740ac09f2f3c64c528bc45b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_khayyam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:27:30 compute-0 podman[96603]: 2025-10-11 04:27:30.693119035 +0000 UTC m=+0.146440620 container attach 41df8862435ff57b9c3ac3c26095dc10cd03f59740ac09f2f3c64c528bc45b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 11 04:27:30 compute-0 distracted_khayyam[96620]: 167 167
Oct 11 04:27:30 compute-0 systemd[1]: libpod-41df8862435ff57b9c3ac3c26095dc10cd03f59740ac09f2f3c64c528bc45b2f.scope: Deactivated successfully.
Oct 11 04:27:30 compute-0 podman[96603]: 2025-10-11 04:27:30.695781642 +0000 UTC m=+0.149103267 container died 41df8862435ff57b9c3ac3c26095dc10cd03f59740ac09f2f3c64c528bc45b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_khayyam, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Oct 11 04:27:30 compute-0 systemd[1]: libpod-4dc5ec6771aac3d848ec94399f6e86549137d50e1dabefdc14a9d040e18d96a0.scope: Deactivated successfully.
Oct 11 04:27:30 compute-0 podman[96456]: 2025-10-11 04:27:30.706635172 +0000 UTC m=+0.785985570 container died 4dc5ec6771aac3d848ec94399f6e86549137d50e1dabefdc14a9d040e18d96a0 (image=quay.io/ceph/ceph:v18, name=eloquent_babbage, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 11 04:27:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-daba5ed22a5b1e52d566a96f2fd5fb1063ccf3462634b824bc544408d5e65475-merged.mount: Deactivated successfully.
Oct 11 04:27:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-869bc53c6fc5bedac8fa2aa3a1be72dd49dd4abb2620f6bbc5d8fe9ec85e84b1-merged.mount: Deactivated successfully.
Oct 11 04:27:30 compute-0 podman[96603]: 2025-10-11 04:27:30.7555068 +0000 UTC m=+0.208828405 container remove 41df8862435ff57b9c3ac3c26095dc10cd03f59740ac09f2f3c64c528bc45b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_khayyam, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:27:30 compute-0 systemd[1]: libpod-conmon-41df8862435ff57b9c3ac3c26095dc10cd03f59740ac09f2f3c64c528bc45b2f.scope: Deactivated successfully.
Oct 11 04:27:30 compute-0 podman[96456]: 2025-10-11 04:27:30.786263927 +0000 UTC m=+0.865614285 container remove 4dc5ec6771aac3d848ec94399f6e86549137d50e1dabefdc14a9d040e18d96a0 (image=quay.io/ceph/ceph:v18, name=eloquent_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:27:30 compute-0 systemd[1]: libpod-conmon-4dc5ec6771aac3d848ec94399f6e86549137d50e1dabefdc14a9d040e18d96a0.scope: Deactivated successfully.
Oct 11 04:27:30 compute-0 sudo[96420]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:30 compute-0 podman[96655]: 2025-10-11 04:27:30.974694474 +0000 UTC m=+0.056774716 container create 1e461916bce1e62556a870d252ec9107d54bc49be729050f3dcaa4690ec7473c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_moser, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 11 04:27:31 compute-0 systemd[1]: Started libpod-conmon-1e461916bce1e62556a870d252ec9107d54bc49be729050f3dcaa4690ec7473c.scope.
Oct 11 04:27:31 compute-0 podman[96655]: 2025-10-11 04:27:30.94847931 +0000 UTC m=+0.030559582 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:27:31 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:27:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33ab51a4710fd38225f0722c7cddeb13944bb14efa2260da7a9ecda6db692c14/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33ab51a4710fd38225f0722c7cddeb13944bb14efa2260da7a9ecda6db692c14/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33ab51a4710fd38225f0722c7cddeb13944bb14efa2260da7a9ecda6db692c14/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33ab51a4710fd38225f0722c7cddeb13944bb14efa2260da7a9ecda6db692c14/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33ab51a4710fd38225f0722c7cddeb13944bb14efa2260da7a9ecda6db692c14/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:31 compute-0 podman[96655]: 2025-10-11 04:27:31.070281426 +0000 UTC m=+0.152361718 container init 1e461916bce1e62556a870d252ec9107d54bc49be729050f3dcaa4690ec7473c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_moser, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 11 04:27:31 compute-0 podman[96655]: 2025-10-11 04:27:31.082900591 +0000 UTC m=+0.164980823 container start 1e461916bce1e62556a870d252ec9107d54bc49be729050f3dcaa4690ec7473c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_moser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 11 04:27:31 compute-0 podman[96655]: 2025-10-11 04:27:31.087509896 +0000 UTC m=+0.169590188 container attach 1e461916bce1e62556a870d252ec9107d54bc49be729050f3dcaa4690ec7473c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_moser, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 11 04:27:31 compute-0 ceph-mon[74243]: from='client.14244 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:27:31 compute-0 ceph-mon[74243]: Saving service rgw.rgw spec with placement compute-0
Oct 11 04:27:31 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:31 compute-0 python3[96756]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 11 04:27:32 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v69: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:27:32 compute-0 admiring_moser[96672]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:27:32 compute-0 admiring_moser[96672]: --> relative data size: 1.0
Oct 11 04:27:32 compute-0 admiring_moser[96672]: --> All data devices are unavailable
Oct 11 04:27:32 compute-0 systemd[1]: libpod-1e461916bce1e62556a870d252ec9107d54bc49be729050f3dcaa4690ec7473c.scope: Deactivated successfully.
Oct 11 04:27:32 compute-0 podman[96655]: 2025-10-11 04:27:32.261694722 +0000 UTC m=+1.343774964 container died 1e461916bce1e62556a870d252ec9107d54bc49be729050f3dcaa4690ec7473c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_moser, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 11 04:27:32 compute-0 systemd[1]: libpod-1e461916bce1e62556a870d252ec9107d54bc49be729050f3dcaa4690ec7473c.scope: Consumed 1.123s CPU time.
Oct 11 04:27:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-33ab51a4710fd38225f0722c7cddeb13944bb14efa2260da7a9ecda6db692c14-merged.mount: Deactivated successfully.
Oct 11 04:27:32 compute-0 python3[96845]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760156851.5820549-33165-199862041945825/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=e359e26d9e42bc107a0de03375144cf8590b6f68 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:27:32 compute-0 podman[96655]: 2025-10-11 04:27:32.342510137 +0000 UTC m=+1.424590379 container remove 1e461916bce1e62556a870d252ec9107d54bc49be729050f3dcaa4690ec7473c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_moser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:27:32 compute-0 systemd[1]: libpod-conmon-1e461916bce1e62556a870d252ec9107d54bc49be729050f3dcaa4690ec7473c.scope: Deactivated successfully.
Oct 11 04:27:32 compute-0 sudo[96524]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:32 compute-0 sudo[96865]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:27:32 compute-0 sudo[96865]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:32 compute-0 sudo[96865]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:32 compute-0 sudo[96909]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:27:32 compute-0 sudo[96909]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:32 compute-0 sudo[96909]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:32 compute-0 sudo[96934]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:27:32 compute-0 sudo[96934]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:32 compute-0 sudo[96934]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:32 compute-0 sudo[96959]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -- lvm list --format json
Oct 11 04:27:32 compute-0 sudo[96959]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:32 compute-0 ceph-mon[74243]: pgmap v69: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:27:32 compute-0 sudo[97007]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezgsxnbqclcavvdfydzvftandcuabqls ; /usr/bin/python3'
Oct 11 04:27:32 compute-0 sudo[97007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:27:32 compute-0 python3[97009]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 ' _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:27:32 compute-0 podman[97035]: 2025-10-11 04:27:32.936984924 +0000 UTC m=+0.074244252 container create 836cdfe206adbb6e4cafca4da44edae697cfe5411724dbd61c3e0307690ada76 (image=quay.io/ceph/ceph:v18, name=flamboyant_ganguly, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:27:32 compute-0 systemd[1]: Started libpod-conmon-836cdfe206adbb6e4cafca4da44edae697cfe5411724dbd61c3e0307690ada76.scope.
Oct 11 04:27:33 compute-0 podman[97035]: 2025-10-11 04:27:32.909579041 +0000 UTC m=+0.046838439 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:27:33 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:27:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b06c43fdef44175348b35079a7e28030eb48eb651dc1900c2c241ef7280aee2c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b06c43fdef44175348b35079a7e28030eb48eb651dc1900c2c241ef7280aee2c/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b06c43fdef44175348b35079a7e28030eb48eb651dc1900c2c241ef7280aee2c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:33 compute-0 podman[97035]: 2025-10-11 04:27:33.036686999 +0000 UTC m=+0.173946347 container init 836cdfe206adbb6e4cafca4da44edae697cfe5411724dbd61c3e0307690ada76 (image=quay.io/ceph/ceph:v18, name=flamboyant_ganguly, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 04:27:33 compute-0 podman[97067]: 2025-10-11 04:27:33.041891009 +0000 UTC m=+0.053780162 container create 9ccbaea8c0b78af0d237bff73c087083f97802b7e655bce0474a3f391b2e6632 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_jones, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 11 04:27:33 compute-0 podman[97035]: 2025-10-11 04:27:33.050921454 +0000 UTC m=+0.188180782 container start 836cdfe206adbb6e4cafca4da44edae697cfe5411724dbd61c3e0307690ada76 (image=quay.io/ceph/ceph:v18, name=flamboyant_ganguly, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 11 04:27:33 compute-0 podman[97035]: 2025-10-11 04:27:33.057773555 +0000 UTC m=+0.195032893 container attach 836cdfe206adbb6e4cafca4da44edae697cfe5411724dbd61c3e0307690ada76 (image=quay.io/ceph/ceph:v18, name=flamboyant_ganguly, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:27:33 compute-0 systemd[1]: Started libpod-conmon-9ccbaea8c0b78af0d237bff73c087083f97802b7e655bce0474a3f391b2e6632.scope.
Oct 11 04:27:33 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:27:33 compute-0 podman[97067]: 2025-10-11 04:27:33.013025749 +0000 UTC m=+0.024914912 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:27:33 compute-0 podman[97067]: 2025-10-11 04:27:33.12297652 +0000 UTC m=+0.134865683 container init 9ccbaea8c0b78af0d237bff73c087083f97802b7e655bce0474a3f391b2e6632 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 11 04:27:33 compute-0 podman[97067]: 2025-10-11 04:27:33.13343719 +0000 UTC m=+0.145326343 container start 9ccbaea8c0b78af0d237bff73c087083f97802b7e655bce0474a3f391b2e6632 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_jones, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 11 04:27:33 compute-0 podman[97067]: 2025-10-11 04:27:33.137323667 +0000 UTC m=+0.149212800 container attach 9ccbaea8c0b78af0d237bff73c087083f97802b7e655bce0474a3f391b2e6632 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_jones, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:27:33 compute-0 crazy_jones[97088]: 167 167
Oct 11 04:27:33 compute-0 systemd[1]: libpod-9ccbaea8c0b78af0d237bff73c087083f97802b7e655bce0474a3f391b2e6632.scope: Deactivated successfully.
Oct 11 04:27:33 compute-0 podman[97067]: 2025-10-11 04:27:33.139305797 +0000 UTC m=+0.151194930 container died 9ccbaea8c0b78af0d237bff73c087083f97802b7e655bce0474a3f391b2e6632 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_jones, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 11 04:27:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-8cbafad22c08b1ec9f2cb701c2caeec09002e1d09ce9c464b4a13b74cd316847-merged.mount: Deactivated successfully.
Oct 11 04:27:33 compute-0 podman[97067]: 2025-10-11 04:27:33.194989845 +0000 UTC m=+0.206878998 container remove 9ccbaea8c0b78af0d237bff73c087083f97802b7e655bce0474a3f391b2e6632 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_jones, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 11 04:27:33 compute-0 systemd[1]: libpod-conmon-9ccbaea8c0b78af0d237bff73c087083f97802b7e655bce0474a3f391b2e6632.scope: Deactivated successfully.
Oct 11 04:27:33 compute-0 podman[97112]: 2025-10-11 04:27:33.368312055 +0000 UTC m=+0.038347347 container create d9cddf33bc1897ed490adf9d27de791944066f378f63cb574c2d9da3103ccf40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:27:33 compute-0 systemd[1]: Started libpod-conmon-d9cddf33bc1897ed490adf9d27de791944066f378f63cb574c2d9da3103ccf40.scope.
Oct 11 04:27:33 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:27:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5479a90f16337fcdec4ce1e34408d06a8c868f1080d45eac0e91df7a96c4dc83/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5479a90f16337fcdec4ce1e34408d06a8c868f1080d45eac0e91df7a96c4dc83/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5479a90f16337fcdec4ce1e34408d06a8c868f1080d45eac0e91df7a96c4dc83/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5479a90f16337fcdec4ce1e34408d06a8c868f1080d45eac0e91df7a96c4dc83/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:33 compute-0 podman[97112]: 2025-10-11 04:27:33.349882765 +0000 UTC m=+0.019918067 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:27:33 compute-0 podman[97112]: 2025-10-11 04:27:33.450800821 +0000 UTC m=+0.120836143 container init d9cddf33bc1897ed490adf9d27de791944066f378f63cb574c2d9da3103ccf40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_lovelace, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:27:33 compute-0 podman[97112]: 2025-10-11 04:27:33.457622061 +0000 UTC m=+0.127657343 container start d9cddf33bc1897ed490adf9d27de791944066f378f63cb574c2d9da3103ccf40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_lovelace, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:27:33 compute-0 podman[97112]: 2025-10-11 04:27:33.461812875 +0000 UTC m=+0.131848187 container attach d9cddf33bc1897ed490adf9d27de791944066f378f63cb574c2d9da3103ccf40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_lovelace, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:27:33 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.14246 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:27:33 compute-0 ceph-mgr[74542]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Oct 11 04:27:33 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0) v1
Oct 11 04:27:33 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Oct 11 04:27:33 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0) v1
Oct 11 04:27:33 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Oct 11 04:27:33 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0) v1
Oct 11 04:27:33 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Oct 11 04:27:33 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Oct 11 04:27:33 compute-0 ceph-mon[74243]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Oct 11 04:27:33 compute-0 ceph-mon[74243]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Oct 11 04:27:33 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mon-compute-0[74239]: 2025-10-11T04:27:33.570+0000 7fea2eb85640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Oct 11 04:27:33 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Oct 11 04:27:33 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).mds e2 new map
Oct 11 04:27:33 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).mds e2 print_map
                                           e2
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-10-11T04:27:33.571866+0000
                                           modified        2025-10-11T04:27:33.571921+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                            
                                            
Oct 11 04:27:33 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e31 e31: 3 total, 3 up, 3 in
Oct 11 04:27:33 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 3 up, 3 in
Oct 11 04:27:33 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Oct 11 04:27:33 compute-0 ceph-mgr[74542]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Oct 11 04:27:33 compute-0 ceph-mgr[74542]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Oct 11 04:27:33 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Oct 11 04:27:33 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:33 compute-0 ceph-mgr[74542]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Oct 11 04:27:33 compute-0 systemd[1]: libpod-836cdfe206adbb6e4cafca4da44edae697cfe5411724dbd61c3e0307690ada76.scope: Deactivated successfully.
Oct 11 04:27:33 compute-0 podman[97035]: 2025-10-11 04:27:33.607324752 +0000 UTC m=+0.744584100 container died 836cdfe206adbb6e4cafca4da44edae697cfe5411724dbd61c3e0307690ada76 (image=quay.io/ceph/ceph:v18, name=flamboyant_ganguly, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:27:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-b06c43fdef44175348b35079a7e28030eb48eb651dc1900c2c241ef7280aee2c-merged.mount: Deactivated successfully.
Oct 11 04:27:33 compute-0 podman[97035]: 2025-10-11 04:27:33.652019136 +0000 UTC m=+0.789278464 container remove 836cdfe206adbb6e4cafca4da44edae697cfe5411724dbd61c3e0307690ada76 (image=quay.io/ceph/ceph:v18, name=flamboyant_ganguly, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:27:33 compute-0 systemd[1]: libpod-conmon-836cdfe206adbb6e4cafca4da44edae697cfe5411724dbd61c3e0307690ada76.scope: Deactivated successfully.
Oct 11 04:27:33 compute-0 sudo[97007]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:33 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Oct 11 04:27:33 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Oct 11 04:27:33 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Oct 11 04:27:33 compute-0 ceph-mon[74243]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Oct 11 04:27:33 compute-0 ceph-mon[74243]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Oct 11 04:27:33 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Oct 11 04:27:33 compute-0 ceph-mon[74243]: osdmap e31: 3 total, 3 up, 3 in
Oct 11 04:27:33 compute-0 ceph-mon[74243]: fsmap cephfs:0
Oct 11 04:27:33 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:33 compute-0 sudo[97189]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxpmdaulaaahjgugljlgsuqjwabcpeua ; /usr/bin/python3'
Oct 11 04:27:33 compute-0 sudo[97189]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:27:33 compute-0 python3[97191]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:27:34 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v71: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:27:34 compute-0 podman[97192]: 2025-10-11 04:27:34.042182341 +0000 UTC m=+0.053838443 container create dd37e8856ed2fe7841d632ef3ac495463d940ec7fef48b63d872ae50c74d7843 (image=quay.io/ceph/ceph:v18, name=hopeful_shaw, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:27:34 compute-0 systemd[1]: Started libpod-conmon-dd37e8856ed2fe7841d632ef3ac495463d940ec7fef48b63d872ae50c74d7843.scope.
Oct 11 04:27:34 compute-0 podman[97192]: 2025-10-11 04:27:34.011197059 +0000 UTC m=+0.022853211 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:27:34 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:27:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d93b43fb3a9c2040635bfb68b865cd8dbf0a0f7ae8e91d6af5e0484f9794a0f8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d93b43fb3a9c2040635bfb68b865cd8dbf0a0f7ae8e91d6af5e0484f9794a0f8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d93b43fb3a9c2040635bfb68b865cd8dbf0a0f7ae8e91d6af5e0484f9794a0f8/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:34 compute-0 podman[97192]: 2025-10-11 04:27:34.144800969 +0000 UTC m=+0.156457071 container init dd37e8856ed2fe7841d632ef3ac495463d940ec7fef48b63d872ae50c74d7843 (image=quay.io/ceph/ceph:v18, name=hopeful_shaw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 11 04:27:34 compute-0 podman[97192]: 2025-10-11 04:27:34.15649708 +0000 UTC m=+0.168153152 container start dd37e8856ed2fe7841d632ef3ac495463d940ec7fef48b63d872ae50c74d7843 (image=quay.io/ceph/ceph:v18, name=hopeful_shaw, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 11 04:27:34 compute-0 podman[97192]: 2025-10-11 04:27:34.160028928 +0000 UTC m=+0.171685030 container attach dd37e8856ed2fe7841d632ef3ac495463d940ec7fef48b63d872ae50c74d7843 (image=quay.io/ceph/ceph:v18, name=hopeful_shaw, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:27:34 compute-0 naughty_lovelace[97148]: {
Oct 11 04:27:34 compute-0 naughty_lovelace[97148]:     "0": [
Oct 11 04:27:34 compute-0 naughty_lovelace[97148]:         {
Oct 11 04:27:34 compute-0 naughty_lovelace[97148]:             "devices": [
Oct 11 04:27:34 compute-0 naughty_lovelace[97148]:                 "/dev/loop3"
Oct 11 04:27:34 compute-0 naughty_lovelace[97148]:             ],
Oct 11 04:27:34 compute-0 naughty_lovelace[97148]:             "lv_name": "ceph_lv0",
Oct 11 04:27:34 compute-0 naughty_lovelace[97148]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:27:34 compute-0 naughty_lovelace[97148]:             "lv_size": "21470642176",
Oct 11 04:27:34 compute-0 naughty_lovelace[97148]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29ed28f5-c2da-4c6f-bb64-dc7391248f4a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:27:34 compute-0 naughty_lovelace[97148]:             "lv_uuid": "SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf",
Oct 11 04:27:34 compute-0 naughty_lovelace[97148]:             "name": "ceph_lv0",
Oct 11 04:27:34 compute-0 naughty_lovelace[97148]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:27:34 compute-0 naughty_lovelace[97148]:             "tags": {
Oct 11 04:27:34 compute-0 naughty_lovelace[97148]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:27:34 compute-0 naughty_lovelace[97148]:                 "ceph.block_uuid": "SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf",
Oct 11 04:27:34 compute-0 naughty_lovelace[97148]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:27:34 compute-0 naughty_lovelace[97148]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:27:34 compute-0 naughty_lovelace[97148]:                 "ceph.cluster_name": "ceph",
Oct 11 04:27:34 compute-0 naughty_lovelace[97148]:                 "ceph.crush_device_class": "",
Oct 11 04:27:34 compute-0 naughty_lovelace[97148]:                 "ceph.encrypted": "0",
Oct 11 04:27:34 compute-0 naughty_lovelace[97148]:                 "ceph.osd_fsid": "29ed28f5-c2da-4c6f-bb64-dc7391248f4a",
Oct 11 04:27:34 compute-0 naughty_lovelace[97148]:                 "ceph.osd_id": "0",
Oct 11 04:27:34 compute-0 naughty_lovelace[97148]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:27:34 compute-0 naughty_lovelace[97148]:                 "ceph.type": "block",
Oct 11 04:27:34 compute-0 naughty_lovelace[97148]:                 "ceph.vdo": "0"
Oct 11 04:27:34 compute-0 naughty_lovelace[97148]:             },
Oct 11 04:27:34 compute-0 naughty_lovelace[97148]:             "type": "block",
Oct 11 04:27:34 compute-0 naughty_lovelace[97148]:             "vg_name": "ceph_vg0"
Oct 11 04:27:34 compute-0 naughty_lovelace[97148]:         }
Oct 11 04:27:34 compute-0 naughty_lovelace[97148]:     ],
Oct 11 04:27:34 compute-0 naughty_lovelace[97148]:     "1": [
Oct 11 04:27:34 compute-0 naughty_lovelace[97148]:         {
Oct 11 04:27:34 compute-0 naughty_lovelace[97148]:             "devices": [
Oct 11 04:27:34 compute-0 naughty_lovelace[97148]:                 "/dev/loop4"
Oct 11 04:27:34 compute-0 naughty_lovelace[97148]:             ],
Oct 11 04:27:34 compute-0 naughty_lovelace[97148]:             "lv_name": "ceph_lv1",
Oct 11 04:27:34 compute-0 naughty_lovelace[97148]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:27:34 compute-0 naughty_lovelace[97148]:             "lv_size": "21470642176",
Oct 11 04:27:34 compute-0 naughty_lovelace[97148]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=235849fc-4683-43e5-9b6a-a0d6f8d1cee8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:27:34 compute-0 naughty_lovelace[97148]:             "lv_uuid": "N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol",
Oct 11 04:27:34 compute-0 naughty_lovelace[97148]:             "name": "ceph_lv1",
Oct 11 04:27:34 compute-0 naughty_lovelace[97148]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:27:34 compute-0 naughty_lovelace[97148]:             "tags": {
Oct 11 04:27:34 compute-0 naughty_lovelace[97148]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:27:34 compute-0 naughty_lovelace[97148]:                 "ceph.block_uuid": "N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol",
Oct 11 04:27:34 compute-0 naughty_lovelace[97148]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:27:34 compute-0 naughty_lovelace[97148]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:27:34 compute-0 naughty_lovelace[97148]:                 "ceph.cluster_name": "ceph",
Oct 11 04:27:34 compute-0 naughty_lovelace[97148]:                 "ceph.crush_device_class": "",
Oct 11 04:27:34 compute-0 naughty_lovelace[97148]:                 "ceph.encrypted": "0",
Oct 11 04:27:34 compute-0 naughty_lovelace[97148]:                 "ceph.osd_fsid": "235849fc-4683-43e5-9b6a-a0d6f8d1cee8",
Oct 11 04:27:34 compute-0 naughty_lovelace[97148]:                 "ceph.osd_id": "1",
Oct 11 04:27:34 compute-0 naughty_lovelace[97148]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:27:34 compute-0 naughty_lovelace[97148]:                 "ceph.type": "block",
Oct 11 04:27:34 compute-0 naughty_lovelace[97148]:                 "ceph.vdo": "0"
Oct 11 04:27:34 compute-0 naughty_lovelace[97148]:             },
Oct 11 04:27:34 compute-0 naughty_lovelace[97148]:             "type": "block",
Oct 11 04:27:34 compute-0 naughty_lovelace[97148]:             "vg_name": "ceph_vg1"
Oct 11 04:27:34 compute-0 naughty_lovelace[97148]:         }
Oct 11 04:27:34 compute-0 naughty_lovelace[97148]:     ],
Oct 11 04:27:34 compute-0 naughty_lovelace[97148]:     "2": [
Oct 11 04:27:34 compute-0 naughty_lovelace[97148]:         {
Oct 11 04:27:34 compute-0 naughty_lovelace[97148]:             "devices": [
Oct 11 04:27:34 compute-0 naughty_lovelace[97148]:                 "/dev/loop5"
Oct 11 04:27:34 compute-0 naughty_lovelace[97148]:             ],
Oct 11 04:27:34 compute-0 naughty_lovelace[97148]:             "lv_name": "ceph_lv2",
Oct 11 04:27:34 compute-0 naughty_lovelace[97148]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:27:34 compute-0 naughty_lovelace[97148]:             "lv_size": "21470642176",
Oct 11 04:27:34 compute-0 naughty_lovelace[97148]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=30486846-2cc7-4787-8e36-0fb42ef328c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:27:34 compute-0 naughty_lovelace[97148]:             "lv_uuid": "HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a",
Oct 11 04:27:34 compute-0 naughty_lovelace[97148]:             "name": "ceph_lv2",
Oct 11 04:27:34 compute-0 naughty_lovelace[97148]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:27:34 compute-0 naughty_lovelace[97148]:             "tags": {
Oct 11 04:27:34 compute-0 naughty_lovelace[97148]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:27:34 compute-0 naughty_lovelace[97148]:                 "ceph.block_uuid": "HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a",
Oct 11 04:27:34 compute-0 naughty_lovelace[97148]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:27:34 compute-0 naughty_lovelace[97148]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:27:34 compute-0 naughty_lovelace[97148]:                 "ceph.cluster_name": "ceph",
Oct 11 04:27:34 compute-0 naughty_lovelace[97148]:                 "ceph.crush_device_class": "",
Oct 11 04:27:34 compute-0 naughty_lovelace[97148]:                 "ceph.encrypted": "0",
Oct 11 04:27:34 compute-0 naughty_lovelace[97148]:                 "ceph.osd_fsid": "30486846-2cc7-4787-8e36-0fb42ef328c5",
Oct 11 04:27:34 compute-0 naughty_lovelace[97148]:                 "ceph.osd_id": "2",
Oct 11 04:27:34 compute-0 naughty_lovelace[97148]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:27:34 compute-0 naughty_lovelace[97148]:                 "ceph.type": "block",
Oct 11 04:27:34 compute-0 naughty_lovelace[97148]:                 "ceph.vdo": "0"
Oct 11 04:27:34 compute-0 naughty_lovelace[97148]:             },
Oct 11 04:27:34 compute-0 naughty_lovelace[97148]:             "type": "block",
Oct 11 04:27:34 compute-0 naughty_lovelace[97148]:             "vg_name": "ceph_vg2"
Oct 11 04:27:34 compute-0 naughty_lovelace[97148]:         }
Oct 11 04:27:34 compute-0 naughty_lovelace[97148]:     ]
Oct 11 04:27:34 compute-0 naughty_lovelace[97148]: }
Oct 11 04:27:34 compute-0 systemd[1]: libpod-d9cddf33bc1897ed490adf9d27de791944066f378f63cb574c2d9da3103ccf40.scope: Deactivated successfully.
Oct 11 04:27:34 compute-0 podman[97112]: 2025-10-11 04:27:34.24436433 +0000 UTC m=+0.914399602 container died d9cddf33bc1897ed490adf9d27de791944066f378f63cb574c2d9da3103ccf40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_lovelace, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 11 04:27:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-5479a90f16337fcdec4ce1e34408d06a8c868f1080d45eac0e91df7a96c4dc83-merged.mount: Deactivated successfully.
Oct 11 04:27:34 compute-0 podman[97112]: 2025-10-11 04:27:34.29575418 +0000 UTC m=+0.965789462 container remove d9cddf33bc1897ed490adf9d27de791944066f378f63cb574c2d9da3103ccf40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 11 04:27:34 compute-0 systemd[1]: libpod-conmon-d9cddf33bc1897ed490adf9d27de791944066f378f63cb574c2d9da3103ccf40.scope: Deactivated successfully.
Oct 11 04:27:34 compute-0 sudo[96959]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:34 compute-0 sudo[97226]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:27:34 compute-0 sudo[97226]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:34 compute-0 sudo[97226]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:34 compute-0 sudo[97251]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:27:34 compute-0 sudo[97251]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:34 compute-0 sudo[97251]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:34 compute-0 sudo[97276]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:27:34 compute-0 sudo[97276]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:34 compute-0 sudo[97276]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:34 compute-0 sudo[97320]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -- raw list --format json
Oct 11 04:27:34 compute-0 sudo[97320]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:34 compute-0 ceph-mon[74243]: from='client.14246 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:27:34 compute-0 ceph-mon[74243]: Saving service mds.cephfs spec with placement compute-0
Oct 11 04:27:34 compute-0 ceph-mon[74243]: pgmap v71: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:27:34 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.14248 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:27:34 compute-0 ceph-mgr[74542]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Oct 11 04:27:34 compute-0 ceph-mgr[74542]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Oct 11 04:27:34 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Oct 11 04:27:34 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:34 compute-0 hopeful_shaw[97207]: Scheduled mds.cephfs update...
Oct 11 04:27:34 compute-0 systemd[1]: libpod-dd37e8856ed2fe7841d632ef3ac495463d940ec7fef48b63d872ae50c74d7843.scope: Deactivated successfully.
Oct 11 04:27:34 compute-0 podman[97192]: 2025-10-11 04:27:34.755091159 +0000 UTC m=+0.766747271 container died dd37e8856ed2fe7841d632ef3ac495463d940ec7fef48b63d872ae50c74d7843 (image=quay.io/ceph/ceph:v18, name=hopeful_shaw, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 11 04:27:34 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:27:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-d93b43fb3a9c2040635bfb68b865cd8dbf0a0f7ae8e91d6af5e0484f9794a0f8-merged.mount: Deactivated successfully.
Oct 11 04:27:34 compute-0 podman[97192]: 2025-10-11 04:27:34.809823953 +0000 UTC m=+0.821480015 container remove dd37e8856ed2fe7841d632ef3ac495463d940ec7fef48b63d872ae50c74d7843 (image=quay.io/ceph/ceph:v18, name=hopeful_shaw, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:27:34 compute-0 systemd[1]: libpod-conmon-dd37e8856ed2fe7841d632ef3ac495463d940ec7fef48b63d872ae50c74d7843.scope: Deactivated successfully.
Oct 11 04:27:34 compute-0 sudo[97189]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:34 compute-0 podman[97396]: 2025-10-11 04:27:34.893159821 +0000 UTC m=+0.046990783 container create 6f5c47cd6ed122a71af2e7746d19ed72a1c095e6d808a484053dee8d41267b8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_mahavira, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:27:34 compute-0 systemd[1]: Started libpod-conmon-6f5c47cd6ed122a71af2e7746d19ed72a1c095e6d808a484053dee8d41267b8a.scope.
Oct 11 04:27:34 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:27:34 compute-0 podman[97396]: 2025-10-11 04:27:34.869443489 +0000 UTC m=+0.023274411 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:27:34 compute-0 podman[97396]: 2025-10-11 04:27:34.96814444 +0000 UTC m=+0.121975362 container init 6f5c47cd6ed122a71af2e7746d19ed72a1c095e6d808a484053dee8d41267b8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:27:34 compute-0 podman[97396]: 2025-10-11 04:27:34.973925514 +0000 UTC m=+0.127756476 container start 6f5c47cd6ed122a71af2e7746d19ed72a1c095e6d808a484053dee8d41267b8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_mahavira, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:27:34 compute-0 podman[97396]: 2025-10-11 04:27:34.978132599 +0000 UTC m=+0.131963521 container attach 6f5c47cd6ed122a71af2e7746d19ed72a1c095e6d808a484053dee8d41267b8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:27:34 compute-0 stoic_mahavira[97412]: 167 167
Oct 11 04:27:34 compute-0 systemd[1]: libpod-6f5c47cd6ed122a71af2e7746d19ed72a1c095e6d808a484053dee8d41267b8a.scope: Deactivated successfully.
Oct 11 04:27:34 compute-0 podman[97396]: 2025-10-11 04:27:34.97941293 +0000 UTC m=+0.133243852 container died 6f5c47cd6ed122a71af2e7746d19ed72a1c095e6d808a484053dee8d41267b8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_mahavira, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 11 04:27:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-6e225b18e0b646adaf6a9a6b1f54eb3936d5a1f1142e3c98274beacb8d850722-merged.mount: Deactivated successfully.
Oct 11 04:27:35 compute-0 podman[97396]: 2025-10-11 04:27:35.02030349 +0000 UTC m=+0.174134402 container remove 6f5c47cd6ed122a71af2e7746d19ed72a1c095e6d808a484053dee8d41267b8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_mahavira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:27:35 compute-0 systemd[1]: libpod-conmon-6f5c47cd6ed122a71af2e7746d19ed72a1c095e6d808a484053dee8d41267b8a.scope: Deactivated successfully.
Oct 11 04:27:35 compute-0 podman[97435]: 2025-10-11 04:27:35.234985341 +0000 UTC m=+0.067737140 container create b64690f70cade9ceadb7b88ba3c07396c86c7ab0dc342d384f0b608f37e4dd5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 04:27:35 compute-0 systemd[1]: Started libpod-conmon-b64690f70cade9ceadb7b88ba3c07396c86c7ab0dc342d384f0b608f37e4dd5e.scope.
Oct 11 04:27:35 compute-0 podman[97435]: 2025-10-11 04:27:35.207802803 +0000 UTC m=+0.040554642 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:27:35 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:27:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81fd40b99f234d84404292c6e1d027308ce1d2a65b050594c6ac9ba4b0ce0dd1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81fd40b99f234d84404292c6e1d027308ce1d2a65b050594c6ac9ba4b0ce0dd1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81fd40b99f234d84404292c6e1d027308ce1d2a65b050594c6ac9ba4b0ce0dd1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81fd40b99f234d84404292c6e1d027308ce1d2a65b050594c6ac9ba4b0ce0dd1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:35 compute-0 podman[97435]: 2025-10-11 04:27:35.357189827 +0000 UTC m=+0.189941626 container init b64690f70cade9ceadb7b88ba3c07396c86c7ab0dc342d384f0b608f37e4dd5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_wing, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 04:27:35 compute-0 podman[97435]: 2025-10-11 04:27:35.378411206 +0000 UTC m=+0.211162985 container start b64690f70cade9ceadb7b88ba3c07396c86c7ab0dc342d384f0b608f37e4dd5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_wing, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:27:35 compute-0 podman[97435]: 2025-10-11 04:27:35.382386605 +0000 UTC m=+0.215138374 container attach b64690f70cade9ceadb7b88ba3c07396c86c7ab0dc342d384f0b608f37e4dd5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_wing, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 11 04:27:35 compute-0 sudo[97531]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkdlgdqjxkjaovhxukaelixogehxxpvg ; /usr/bin/python3'
Oct 11 04:27:35 compute-0 sudo[97531]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:27:35 compute-0 python3[97533]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 11 04:27:35 compute-0 sudo[97531]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:35 compute-0 ceph-mon[74243]: from='client.14248 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:27:35 compute-0 ceph-mon[74243]: Saving service mds.cephfs spec with placement compute-0
Oct 11 04:27:35 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:35 compute-0 sudo[97604]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-saetwtbgqeaczoybhdedotwfaephrdce ; /usr/bin/python3'
Oct 11 04:27:35 compute-0 sudo[97604]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:27:36 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v72: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:27:36 compute-0 python3[97606]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760156855.2832336-33195-112310578112641/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=7de10bc8d1f738e1402f9fea5caa06ca86e8a39c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:27:36 compute-0 sudo[97604]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:36 compute-0 exciting_wing[97474]: {
Oct 11 04:27:36 compute-0 exciting_wing[97474]:     "235849fc-4683-43e5-9b6a-a0d6f8d1cee8": {
Oct 11 04:27:36 compute-0 exciting_wing[97474]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:27:36 compute-0 exciting_wing[97474]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:27:36 compute-0 exciting_wing[97474]:         "osd_id": 1,
Oct 11 04:27:36 compute-0 exciting_wing[97474]:         "osd_uuid": "235849fc-4683-43e5-9b6a-a0d6f8d1cee8",
Oct 11 04:27:36 compute-0 exciting_wing[97474]:         "type": "bluestore"
Oct 11 04:27:36 compute-0 exciting_wing[97474]:     },
Oct 11 04:27:36 compute-0 exciting_wing[97474]:     "29ed28f5-c2da-4c6f-bb64-dc7391248f4a": {
Oct 11 04:27:36 compute-0 exciting_wing[97474]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:27:36 compute-0 exciting_wing[97474]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:27:36 compute-0 exciting_wing[97474]:         "osd_id": 0,
Oct 11 04:27:36 compute-0 exciting_wing[97474]:         "osd_uuid": "29ed28f5-c2da-4c6f-bb64-dc7391248f4a",
Oct 11 04:27:36 compute-0 exciting_wing[97474]:         "type": "bluestore"
Oct 11 04:27:36 compute-0 exciting_wing[97474]:     },
Oct 11 04:27:36 compute-0 exciting_wing[97474]:     "30486846-2cc7-4787-8e36-0fb42ef328c5": {
Oct 11 04:27:36 compute-0 exciting_wing[97474]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:27:36 compute-0 exciting_wing[97474]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:27:36 compute-0 exciting_wing[97474]:         "osd_id": 2,
Oct 11 04:27:36 compute-0 exciting_wing[97474]:         "osd_uuid": "30486846-2cc7-4787-8e36-0fb42ef328c5",
Oct 11 04:27:36 compute-0 exciting_wing[97474]:         "type": "bluestore"
Oct 11 04:27:36 compute-0 exciting_wing[97474]:     }
Oct 11 04:27:36 compute-0 exciting_wing[97474]: }
Oct 11 04:27:36 compute-0 systemd[1]: libpod-b64690f70cade9ceadb7b88ba3c07396c86c7ab0dc342d384f0b608f37e4dd5e.scope: Deactivated successfully.
Oct 11 04:27:36 compute-0 systemd[1]: libpod-b64690f70cade9ceadb7b88ba3c07396c86c7ab0dc342d384f0b608f37e4dd5e.scope: Consumed 1.012s CPU time.
Oct 11 04:27:36 compute-0 podman[97435]: 2025-10-11 04:27:36.383640771 +0000 UTC m=+1.216392540 container died b64690f70cade9ceadb7b88ba3c07396c86c7ab0dc342d384f0b608f37e4dd5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_wing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 11 04:27:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-81fd40b99f234d84404292c6e1d027308ce1d2a65b050594c6ac9ba4b0ce0dd1-merged.mount: Deactivated successfully.
Oct 11 04:27:36 compute-0 sudo[97691]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrglzjfgyekacjkexmpkijqpwbflizyx ; /usr/bin/python3'
Oct 11 04:27:36 compute-0 sudo[97691]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:27:36 compute-0 podman[97435]: 2025-10-11 04:27:36.457560443 +0000 UTC m=+1.290312202 container remove b64690f70cade9ceadb7b88ba3c07396c86c7ab0dc342d384f0b608f37e4dd5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_wing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:27:36 compute-0 systemd[1]: libpod-conmon-b64690f70cade9ceadb7b88ba3c07396c86c7ab0dc342d384f0b608f37e4dd5e.scope: Deactivated successfully.
Oct 11 04:27:36 compute-0 sudo[97320]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:36 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:27:36 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:36 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:27:36 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:36 compute-0 sudo[97697]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:27:36 compute-0 sudo[97697]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:36 compute-0 sudo[97697]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:36 compute-0 python3[97696]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:27:36 compute-0 sudo[97722]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 04:27:36 compute-0 sudo[97722]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:36 compute-0 sudo[97722]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:36 compute-0 podman[97743]: 2025-10-11 04:27:36.676347397 +0000 UTC m=+0.049670879 container create ce0a7306853b1537f436e31c544acc5d267c22ce83a6d06a0aab0d45dd172ee0 (image=quay.io/ceph/ceph:v18, name=charming_sutherland, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 11 04:27:36 compute-0 systemd[1]: Started libpod-conmon-ce0a7306853b1537f436e31c544acc5d267c22ce83a6d06a0aab0d45dd172ee0.scope.
Oct 11 04:27:36 compute-0 sudo[97756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:27:36 compute-0 sudo[97756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:36 compute-0 sudo[97756]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:36 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:27:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da06b8b3abd09057f9e6154d0a055846f4f34a5c4c2afee02e6fdaa3dfe7b4c2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da06b8b3abd09057f9e6154d0a055846f4f34a5c4c2afee02e6fdaa3dfe7b4c2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:36 compute-0 podman[97743]: 2025-10-11 04:27:36.656524833 +0000 UTC m=+0.029848315 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:27:36 compute-0 ceph-mon[74243]: pgmap v72: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:27:36 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:36 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:36 compute-0 podman[97743]: 2025-10-11 04:27:36.758479424 +0000 UTC m=+0.131802876 container init ce0a7306853b1537f436e31c544acc5d267c22ce83a6d06a0aab0d45dd172ee0 (image=quay.io/ceph/ceph:v18, name=charming_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 04:27:36 compute-0 podman[97743]: 2025-10-11 04:27:36.764623537 +0000 UTC m=+0.137946969 container start ce0a7306853b1537f436e31c544acc5d267c22ce83a6d06a0aab0d45dd172ee0 (image=quay.io/ceph/ceph:v18, name=charming_sutherland, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:27:36 compute-0 podman[97743]: 2025-10-11 04:27:36.767811726 +0000 UTC m=+0.141135188 container attach ce0a7306853b1537f436e31c544acc5d267c22ce83a6d06a0aab0d45dd172ee0 (image=quay.io/ceph/ceph:v18, name=charming_sutherland, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 11 04:27:36 compute-0 sudo[97790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:27:36 compute-0 sudo[97790]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:36 compute-0 sudo[97790]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:36 compute-0 sudo[97816]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:27:36 compute-0 sudo[97816]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:36 compute-0 sudo[97816]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:36 compute-0 sudo[97841]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 11 04:27:36 compute-0 sudo[97841]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:37 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth import"} v 0) v1
Oct 11 04:27:37 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2470044628' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Oct 11 04:27:37 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2470044628' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Oct 11 04:27:37 compute-0 systemd[1]: libpod-ce0a7306853b1537f436e31c544acc5d267c22ce83a6d06a0aab0d45dd172ee0.scope: Deactivated successfully.
Oct 11 04:27:37 compute-0 podman[97743]: 2025-10-11 04:27:37.405765077 +0000 UTC m=+0.779088579 container died ce0a7306853b1537f436e31c544acc5d267c22ce83a6d06a0aab0d45dd172ee0 (image=quay.io/ceph/ceph:v18, name=charming_sutherland, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 04:27:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-da06b8b3abd09057f9e6154d0a055846f4f34a5c4c2afee02e6fdaa3dfe7b4c2-merged.mount: Deactivated successfully.
Oct 11 04:27:37 compute-0 podman[97743]: 2025-10-11 04:27:37.463141227 +0000 UTC m=+0.836464699 container remove ce0a7306853b1537f436e31c544acc5d267c22ce83a6d06a0aab0d45dd172ee0 (image=quay.io/ceph/ceph:v18, name=charming_sutherland, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 11 04:27:37 compute-0 systemd[1]: libpod-conmon-ce0a7306853b1537f436e31c544acc5d267c22ce83a6d06a0aab0d45dd172ee0.scope: Deactivated successfully.
Oct 11 04:27:37 compute-0 sudo[97691]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:37 compute-0 podman[97972]: 2025-10-11 04:27:37.570501323 +0000 UTC m=+0.066333444 container exec 9e6c9a4e99dcfe74f2acde1b3251aae6bd833ab8c3a16ed08813556f7d4dbff5 (image=quay.io/ceph/ceph:v18, name=ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mon-compute-0, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:27:37 compute-0 podman[97972]: 2025-10-11 04:27:37.678754262 +0000 UTC m=+0.174586333 container exec_died 9e6c9a4e99dcfe74f2acde1b3251aae6bd833ab8c3a16ed08813556f7d4dbff5 (image=quay.io/ceph/ceph:v18, name=ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mon-compute-0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 11 04:27:37 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/2470044628' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Oct 11 04:27:37 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/2470044628' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Oct 11 04:27:38 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v73: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:27:38 compute-0 sudo[98112]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iaugmuwovbpgilmwhcrglnkxtyhbfmor ; /usr/bin/python3'
Oct 11 04:27:38 compute-0 sudo[98112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:27:38 compute-0 sudo[97841]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:38 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:27:38 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:38 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:27:38 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:38 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:27:38 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:27:38 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:27:38 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:27:38 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:27:38 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:38 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev 1d7d26ef-2e91-4d31-bee3-1aa0fb9f95d2 does not exist
Oct 11 04:27:38 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev e319c4d5-c595-486e-9b2c-0edb630cad5c does not exist
Oct 11 04:27:38 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev 6ede9847-c089-4e16-96e2-77221c14cca5 does not exist
Oct 11 04:27:38 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:27:38 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:27:38 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:27:38 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:27:38 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:27:38 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:27:38 compute-0 sudo[98123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:27:38 compute-0 sudo[98123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:38 compute-0 sudo[98123]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:38 compute-0 python3[98121]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:27:38 compute-0 sudo[98148]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:27:38 compute-0 sudo[98148]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:38 compute-0 sudo[98148]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:38 compute-0 podman[98169]: 2025-10-11 04:27:38.281163166 +0000 UTC m=+0.036193963 container create b59d66b17267b9a2594536591a223c3d8adf626b777c46ea8b836c3a3d404a34 (image=quay.io/ceph/ceph:v18, name=modest_curran, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:27:38 compute-0 systemd[1]: Started libpod-conmon-b59d66b17267b9a2594536591a223c3d8adf626b777c46ea8b836c3a3d404a34.scope.
Oct 11 04:27:38 compute-0 sudo[98186]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:27:38 compute-0 sudo[98186]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:38 compute-0 sudo[98186]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:38 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:27:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8601065f171d3caadae38d561b1f4aa498245e1b356d5064134ddd2a1a29fd50/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8601065f171d3caadae38d561b1f4aa498245e1b356d5064134ddd2a1a29fd50/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:38 compute-0 podman[98169]: 2025-10-11 04:27:38.265029854 +0000 UTC m=+0.020060671 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:27:38 compute-0 podman[98169]: 2025-10-11 04:27:38.364430621 +0000 UTC m=+0.119461478 container init b59d66b17267b9a2594536591a223c3d8adf626b777c46ea8b836c3a3d404a34 (image=quay.io/ceph/ceph:v18, name=modest_curran, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 11 04:27:38 compute-0 podman[98169]: 2025-10-11 04:27:38.369955449 +0000 UTC m=+0.124986256 container start b59d66b17267b9a2594536591a223c3d8adf626b777c46ea8b836c3a3d404a34 (image=quay.io/ceph/ceph:v18, name=modest_curran, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 11 04:27:38 compute-0 podman[98169]: 2025-10-11 04:27:38.373482947 +0000 UTC m=+0.128513854 container attach b59d66b17267b9a2594536591a223c3d8adf626b777c46ea8b836c3a3d404a34 (image=quay.io/ceph/ceph:v18, name=modest_curran, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 11 04:27:38 compute-0 sudo[98218]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 04:27:38 compute-0 sudo[98218]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:38 compute-0 podman[98302]: 2025-10-11 04:27:38.776667996 +0000 UTC m=+0.045784152 container create 304bd9238bb6005490295ff1d8e7c820f20605858f3982c9d9ed68a704825718 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_taussig, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:27:38 compute-0 systemd[1]: Started libpod-conmon-304bd9238bb6005490295ff1d8e7c820f20605858f3982c9d9ed68a704825718.scope.
Oct 11 04:27:38 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:27:38 compute-0 podman[98302]: 2025-10-11 04:27:38.761911298 +0000 UTC m=+0.031027474 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:27:38 compute-0 podman[98302]: 2025-10-11 04:27:38.863505931 +0000 UTC m=+0.132622107 container init 304bd9238bb6005490295ff1d8e7c820f20605858f3982c9d9ed68a704825718 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_taussig, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 11 04:27:38 compute-0 podman[98302]: 2025-10-11 04:27:38.868240929 +0000 UTC m=+0.137357095 container start 304bd9238bb6005490295ff1d8e7c820f20605858f3982c9d9ed68a704825718 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_taussig, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 11 04:27:38 compute-0 podman[98302]: 2025-10-11 04:27:38.872372832 +0000 UTC m=+0.141489038 container attach 304bd9238bb6005490295ff1d8e7c820f20605858f3982c9d9ed68a704825718 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_taussig, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 11 04:27:38 compute-0 infallible_taussig[98318]: 167 167
Oct 11 04:27:38 compute-0 systemd[1]: libpod-304bd9238bb6005490295ff1d8e7c820f20605858f3982c9d9ed68a704825718.scope: Deactivated successfully.
Oct 11 04:27:38 compute-0 podman[98302]: 2025-10-11 04:27:38.875312215 +0000 UTC m=+0.144428411 container died 304bd9238bb6005490295ff1d8e7c820f20605858f3982c9d9ed68a704825718 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_taussig, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 11 04:27:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-252ed5bae83b75d26ae767864391db6b1b007bdd8f23f570ca8f06ada1d3761a-merged.mount: Deactivated successfully.
Oct 11 04:27:38 compute-0 podman[98302]: 2025-10-11 04:27:38.928115061 +0000 UTC m=+0.197231257 container remove 304bd9238bb6005490295ff1d8e7c820f20605858f3982c9d9ed68a704825718 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_taussig, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 11 04:27:38 compute-0 systemd[1]: libpod-conmon-304bd9238bb6005490295ff1d8e7c820f20605858f3982c9d9ed68a704825718.scope: Deactivated successfully.
Oct 11 04:27:38 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Oct 11 04:27:38 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/747083500' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 11 04:27:38 compute-0 modest_curran[98214]: 
Oct 11 04:27:38 compute-0 modest_curran[98214]: {"fsid":"166d0489-2ae7-59eb-961c-c1b5cda4b45a","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":149,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":31,"num_osds":3,"num_up_osds":3,"osd_up_since":1760156827,"num_in_osds":3,"osd_in_since":1760156798,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":7}],"num_pgs":7,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":83845120,"bytes_avail":64328081408,"bytes_total":64411926528},"fsmap":{"epoch":2,"id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-10-11T04:26:58.031384+0000","services":{"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
Oct 11 04:27:38 compute-0 systemd[1]: libpod-b59d66b17267b9a2594536591a223c3d8adf626b777c46ea8b836c3a3d404a34.scope: Deactivated successfully.
Oct 11 04:27:38 compute-0 podman[98169]: 2025-10-11 04:27:38.980342643 +0000 UTC m=+0.735373440 container died b59d66b17267b9a2594536591a223c3d8adf626b777c46ea8b836c3a3d404a34 (image=quay.io/ceph/ceph:v18, name=modest_curran, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 11 04:27:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-8601065f171d3caadae38d561b1f4aa498245e1b356d5064134ddd2a1a29fd50-merged.mount: Deactivated successfully.
Oct 11 04:27:39 compute-0 podman[98169]: 2025-10-11 04:27:39.027388575 +0000 UTC m=+0.782419362 container remove b59d66b17267b9a2594536591a223c3d8adf626b777c46ea8b836c3a3d404a34 (image=quay.io/ceph/ceph:v18, name=modest_curran, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 11 04:27:39 compute-0 systemd[1]: libpod-conmon-b59d66b17267b9a2594536591a223c3d8adf626b777c46ea8b836c3a3d404a34.scope: Deactivated successfully.
Oct 11 04:27:39 compute-0 sudo[98112]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:39 compute-0 ceph-mon[74243]: pgmap v73: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:27:39 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:39 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:39 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:27:39 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:27:39 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:39 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:27:39 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:27:39 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:27:39 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/747083500' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 11 04:27:39 compute-0 podman[98356]: 2025-10-11 04:27:39.146511784 +0000 UTC m=+0.053901064 container create 913e768cf20d7cfcda1c99ec47f57c90488344a6451c6eb01885799cdf52ae9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 11 04:27:39 compute-0 systemd[1]: Started libpod-conmon-913e768cf20d7cfcda1c99ec47f57c90488344a6451c6eb01885799cdf52ae9f.scope.
Oct 11 04:27:39 compute-0 podman[98356]: 2025-10-11 04:27:39.127142542 +0000 UTC m=+0.034531862 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:27:39 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:27:39 compute-0 sudo[98396]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgolapdcvdquefbkaevhvpljppddzcwx ; /usr/bin/python3'
Oct 11 04:27:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7db1cdbe4414e933e156a435671a279d3cc80fc073e33035eb049acb1255bdd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7db1cdbe4414e933e156a435671a279d3cc80fc073e33035eb049acb1255bdd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7db1cdbe4414e933e156a435671a279d3cc80fc073e33035eb049acb1255bdd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7db1cdbe4414e933e156a435671a279d3cc80fc073e33035eb049acb1255bdd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7db1cdbe4414e933e156a435671a279d3cc80fc073e33035eb049acb1255bdd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:39 compute-0 sudo[98396]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:27:39 compute-0 podman[98356]: 2025-10-11 04:27:39.241920113 +0000 UTC m=+0.149309393 container init 913e768cf20d7cfcda1c99ec47f57c90488344a6451c6eb01885799cdf52ae9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 11 04:27:39 compute-0 podman[98356]: 2025-10-11 04:27:39.251750558 +0000 UTC m=+0.159139868 container start 913e768cf20d7cfcda1c99ec47f57c90488344a6451c6eb01885799cdf52ae9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_buck, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:27:39 compute-0 podman[98356]: 2025-10-11 04:27:39.255540242 +0000 UTC m=+0.162929532 container attach 913e768cf20d7cfcda1c99ec47f57c90488344a6451c6eb01885799cdf52ae9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_buck, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:27:39 compute-0 python3[98401]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:27:39 compute-0 podman[98404]: 2025-10-11 04:27:39.464588063 +0000 UTC m=+0.049396163 container create 8f30befe47d8f8c92d5e413d144d2565613b9b8b1be0c72a8c4e0f4092cd1975 (image=quay.io/ceph/ceph:v18, name=trusting_pasteur, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:27:39 compute-0 systemd[1]: Started libpod-conmon-8f30befe47d8f8c92d5e413d144d2565613b9b8b1be0c72a8c4e0f4092cd1975.scope.
Oct 11 04:27:39 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:27:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4aa6c9fdc2b78c0115a0b430405616dcb053630eef3a8c88bc203159365ed29b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4aa6c9fdc2b78c0115a0b430405616dcb053630eef3a8c88bc203159365ed29b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:39 compute-0 podman[98404]: 2025-10-11 04:27:39.439302362 +0000 UTC m=+0.024110542 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:27:39 compute-0 podman[98404]: 2025-10-11 04:27:39.536354271 +0000 UTC m=+0.121162411 container init 8f30befe47d8f8c92d5e413d144d2565613b9b8b1be0c72a8c4e0f4092cd1975 (image=quay.io/ceph/ceph:v18, name=trusting_pasteur, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 11 04:27:39 compute-0 podman[98404]: 2025-10-11 04:27:39.547194542 +0000 UTC m=+0.132002672 container start 8f30befe47d8f8c92d5e413d144d2565613b9b8b1be0c72a8c4e0f4092cd1975 (image=quay.io/ceph/ceph:v18, name=trusting_pasteur, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 11 04:27:39 compute-0 podman[98404]: 2025-10-11 04:27:39.5511487 +0000 UTC m=+0.135956810 container attach 8f30befe47d8f8c92d5e413d144d2565613b9b8b1be0c72a8c4e0f4092cd1975 (image=quay.io/ceph/ceph:v18, name=trusting_pasteur, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 11 04:27:39 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:27:40 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v74: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:27:40 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:27:40 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2629034991' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:27:40 compute-0 trusting_pasteur[98417]: 
Oct 11 04:27:40 compute-0 trusting_pasteur[98417]: {"epoch":1,"fsid":"166d0489-2ae7-59eb-961c-c1b5cda4b45a","modified":"2025-10-11T04:25:03.715959Z","created":"2025-10-11T04:25:03.715959Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
Oct 11 04:27:40 compute-0 trusting_pasteur[98417]: dumped monmap epoch 1
Oct 11 04:27:40 compute-0 systemd[1]: libpod-8f30befe47d8f8c92d5e413d144d2565613b9b8b1be0c72a8c4e0f4092cd1975.scope: Deactivated successfully.
Oct 11 04:27:40 compute-0 podman[98404]: 2025-10-11 04:27:40.129901375 +0000 UTC m=+0.714709465 container died 8f30befe47d8f8c92d5e413d144d2565613b9b8b1be0c72a8c4e0f4092cd1975 (image=quay.io/ceph/ceph:v18, name=trusting_pasteur, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 11 04:27:40 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/2629034991' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:27:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-4aa6c9fdc2b78c0115a0b430405616dcb053630eef3a8c88bc203159365ed29b-merged.mount: Deactivated successfully.
Oct 11 04:27:40 compute-0 podman[98404]: 2025-10-11 04:27:40.171572114 +0000 UTC m=+0.756380204 container remove 8f30befe47d8f8c92d5e413d144d2565613b9b8b1be0c72a8c4e0f4092cd1975 (image=quay.io/ceph/ceph:v18, name=trusting_pasteur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Oct 11 04:27:40 compute-0 systemd[1]: libpod-conmon-8f30befe47d8f8c92d5e413d144d2565613b9b8b1be0c72a8c4e0f4092cd1975.scope: Deactivated successfully.
Oct 11 04:27:40 compute-0 sudo[98396]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:40 compute-0 optimistic_buck[98397]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:27:40 compute-0 optimistic_buck[98397]: --> relative data size: 1.0
Oct 11 04:27:40 compute-0 optimistic_buck[98397]: --> All data devices are unavailable
Oct 11 04:27:40 compute-0 systemd[1]: libpod-913e768cf20d7cfcda1c99ec47f57c90488344a6451c6eb01885799cdf52ae9f.scope: Deactivated successfully.
Oct 11 04:27:40 compute-0 systemd[1]: libpod-913e768cf20d7cfcda1c99ec47f57c90488344a6451c6eb01885799cdf52ae9f.scope: Consumed 1.030s CPU time.
Oct 11 04:27:40 compute-0 podman[98478]: 2025-10-11 04:27:40.397850084 +0000 UTC m=+0.039970007 container died 913e768cf20d7cfcda1c99ec47f57c90488344a6451c6eb01885799cdf52ae9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_buck, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:27:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-e7db1cdbe4414e933e156a435671a279d3cc80fc073e33035eb049acb1255bdd-merged.mount: Deactivated successfully.
Oct 11 04:27:40 compute-0 podman[98478]: 2025-10-11 04:27:40.462120796 +0000 UTC m=+0.104240699 container remove 913e768cf20d7cfcda1c99ec47f57c90488344a6451c6eb01885799cdf52ae9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_buck, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:27:40 compute-0 systemd[1]: libpod-conmon-913e768cf20d7cfcda1c99ec47f57c90488344a6451c6eb01885799cdf52ae9f.scope: Deactivated successfully.
Oct 11 04:27:40 compute-0 sudo[98218]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:40 compute-0 sudo[98536]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilbhemmblwasnxosdnztwvjbbuaxnxwx ; /usr/bin/python3'
Oct 11 04:27:40 compute-0 sudo[98536]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:27:40 compute-0 sudo[98499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:27:40 compute-0 sudo[98499]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:40 compute-0 sudo[98499]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:40 compute-0 sudo[98544]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:27:40 compute-0 sudo[98544]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:40 compute-0 sudo[98544]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:40 compute-0 sudo[98569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:27:40 compute-0 sudo[98569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:40 compute-0 sudo[98569]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:40 compute-0 python3[98542]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:27:40 compute-0 sudo[98594]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -- lvm list --format json
Oct 11 04:27:40 compute-0 sudo[98594]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:40 compute-0 podman[98596]: 2025-10-11 04:27:40.793248389 +0000 UTC m=+0.061604076 container create fce747fdc9308ddc5d48b7e9b9aefde6d46d00efc4116b873abd20a81625d785 (image=quay.io/ceph/ceph:v18, name=agitated_leakey, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:27:40 compute-0 systemd[1]: Started libpod-conmon-fce747fdc9308ddc5d48b7e9b9aefde6d46d00efc4116b873abd20a81625d785.scope.
Oct 11 04:27:40 compute-0 podman[98596]: 2025-10-11 04:27:40.763238301 +0000 UTC m=+0.031593988 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:27:40 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:27:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acce53c9224bd487b4b25b93d5d6780bc02876ebf78297c2a0b92940c4c8fe75/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acce53c9224bd487b4b25b93d5d6780bc02876ebf78297c2a0b92940c4c8fe75/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:40 compute-0 podman[98596]: 2025-10-11 04:27:40.896805451 +0000 UTC m=+0.165161138 container init fce747fdc9308ddc5d48b7e9b9aefde6d46d00efc4116b873abd20a81625d785 (image=quay.io/ceph/ceph:v18, name=agitated_leakey, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 11 04:27:40 compute-0 podman[98596]: 2025-10-11 04:27:40.901955239 +0000 UTC m=+0.170310896 container start fce747fdc9308ddc5d48b7e9b9aefde6d46d00efc4116b873abd20a81625d785 (image=quay.io/ceph/ceph:v18, name=agitated_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:27:40 compute-0 podman[98596]: 2025-10-11 04:27:40.905271762 +0000 UTC m=+0.173627469 container attach fce747fdc9308ddc5d48b7e9b9aefde6d46d00efc4116b873abd20a81625d785 (image=quay.io/ceph/ceph:v18, name=agitated_leakey, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:27:41 compute-0 podman[98679]: 2025-10-11 04:27:41.086277573 +0000 UTC m=+0.048925820 container create e4a3cba3ac9dd77cda547975cf1f1919c75b8bf0a2df182d64acc24c3a42816a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_roentgen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 11 04:27:41 compute-0 systemd[1]: Started libpod-conmon-e4a3cba3ac9dd77cda547975cf1f1919c75b8bf0a2df182d64acc24c3a42816a.scope.
Oct 11 04:27:41 compute-0 podman[98679]: 2025-10-11 04:27:41.060624974 +0000 UTC m=+0.023273311 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:27:41 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:27:41 compute-0 ceph-mon[74243]: pgmap v74: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:27:41 compute-0 podman[98679]: 2025-10-11 04:27:41.168853401 +0000 UTC m=+0.131501688 container init e4a3cba3ac9dd77cda547975cf1f1919c75b8bf0a2df182d64acc24c3a42816a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_roentgen, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 11 04:27:41 compute-0 podman[98679]: 2025-10-11 04:27:41.179418155 +0000 UTC m=+0.142066442 container start e4a3cba3ac9dd77cda547975cf1f1919c75b8bf0a2df182d64acc24c3a42816a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_roentgen, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 04:27:41 compute-0 loving_roentgen[98695]: 167 167
Oct 11 04:27:41 compute-0 systemd[1]: libpod-e4a3cba3ac9dd77cda547975cf1f1919c75b8bf0a2df182d64acc24c3a42816a.scope: Deactivated successfully.
Oct 11 04:27:41 compute-0 podman[98679]: 2025-10-11 04:27:41.183844005 +0000 UTC m=+0.146492282 container attach e4a3cba3ac9dd77cda547975cf1f1919c75b8bf0a2df182d64acc24c3a42816a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_roentgen, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS)
Oct 11 04:27:41 compute-0 podman[98679]: 2025-10-11 04:27:41.184313127 +0000 UTC m=+0.146961404 container died e4a3cba3ac9dd77cda547975cf1f1919c75b8bf0a2df182d64acc24c3a42816a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_roentgen, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 11 04:27:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-5585efdf78eb787487eb62e53a900e940405172eb34b191d462cfbdf4a47353e-merged.mount: Deactivated successfully.
Oct 11 04:27:41 compute-0 podman[98679]: 2025-10-11 04:27:41.227077823 +0000 UTC m=+0.189726080 container remove e4a3cba3ac9dd77cda547975cf1f1919c75b8bf0a2df182d64acc24c3a42816a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_roentgen, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 11 04:27:41 compute-0 systemd[1]: libpod-conmon-e4a3cba3ac9dd77cda547975cf1f1919c75b8bf0a2df182d64acc24c3a42816a.scope: Deactivated successfully.
Oct 11 04:27:41 compute-0 podman[98736]: 2025-10-11 04:27:41.391228514 +0000 UTC m=+0.043796173 container create ec0dba9ce4363bb35bad96731682ad0cd6227acc2874ce80ce28e4a9d8192666 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 11 04:27:41 compute-0 systemd[1]: Started libpod-conmon-ec0dba9ce4363bb35bad96731682ad0cd6227acc2874ce80ce28e4a9d8192666.scope.
Oct 11 04:27:41 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:27:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d264964f4f39dad1fb7d91438fc3f5058632f3a74b0db412e21582c0b347df6a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d264964f4f39dad1fb7d91438fc3f5058632f3a74b0db412e21582c0b347df6a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d264964f4f39dad1fb7d91438fc3f5058632f3a74b0db412e21582c0b347df6a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d264964f4f39dad1fb7d91438fc3f5058632f3a74b0db412e21582c0b347df6a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:41 compute-0 podman[98736]: 2025-10-11 04:27:41.374701132 +0000 UTC m=+0.027268811 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:27:41 compute-0 podman[98736]: 2025-10-11 04:27:41.480044107 +0000 UTC m=+0.132611796 container init ec0dba9ce4363bb35bad96731682ad0cd6227acc2874ce80ce28e4a9d8192666 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_clarke, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:27:41 compute-0 podman[98736]: 2025-10-11 04:27:41.491583374 +0000 UTC m=+0.144151063 container start ec0dba9ce4363bb35bad96731682ad0cd6227acc2874ce80ce28e4a9d8192666 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_clarke, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:27:41 compute-0 podman[98736]: 2025-10-11 04:27:41.495555213 +0000 UTC m=+0.148122962 container attach ec0dba9ce4363bb35bad96731682ad0cd6227acc2874ce80ce28e4a9d8192666 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_clarke, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 11 04:27:41 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0) v1
Oct 11 04:27:41 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/8771515' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Oct 11 04:27:41 compute-0 agitated_leakey[98635]: [client.openstack]
Oct 11 04:27:41 compute-0 agitated_leakey[98635]:         key = AQAF3OloAAAAABAA2VyWzcR4rbz4VVd/gkSHkQ==
Oct 11 04:27:41 compute-0 agitated_leakey[98635]:         caps mgr = "allow *"
Oct 11 04:27:41 compute-0 agitated_leakey[98635]:         caps mon = "profile rbd"
Oct 11 04:27:41 compute-0 agitated_leakey[98635]:         caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Oct 11 04:27:41 compute-0 systemd[1]: libpod-fce747fdc9308ddc5d48b7e9b9aefde6d46d00efc4116b873abd20a81625d785.scope: Deactivated successfully.
Oct 11 04:27:41 compute-0 podman[98596]: 2025-10-11 04:27:41.530158346 +0000 UTC m=+0.798514033 container died fce747fdc9308ddc5d48b7e9b9aefde6d46d00efc4116b873abd20a81625d785 (image=quay.io/ceph/ceph:v18, name=agitated_leakey, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Oct 11 04:27:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-acce53c9224bd487b4b25b93d5d6780bc02876ebf78297c2a0b92940c4c8fe75-merged.mount: Deactivated successfully.
Oct 11 04:27:41 compute-0 podman[98596]: 2025-10-11 04:27:41.581385183 +0000 UTC m=+0.849740870 container remove fce747fdc9308ddc5d48b7e9b9aefde6d46d00efc4116b873abd20a81625d785 (image=quay.io/ceph/ceph:v18, name=agitated_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:27:41 compute-0 systemd[1]: libpod-conmon-fce747fdc9308ddc5d48b7e9b9aefde6d46d00efc4116b873abd20a81625d785.scope: Deactivated successfully.
Oct 11 04:27:41 compute-0 sudo[98536]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:42 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v75: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:27:42 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/8771515' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Oct 11 04:27:42 compute-0 sad_clarke[98752]: {
Oct 11 04:27:42 compute-0 sad_clarke[98752]:     "0": [
Oct 11 04:27:42 compute-0 sad_clarke[98752]:         {
Oct 11 04:27:42 compute-0 sad_clarke[98752]:             "devices": [
Oct 11 04:27:42 compute-0 sad_clarke[98752]:                 "/dev/loop3"
Oct 11 04:27:42 compute-0 sad_clarke[98752]:             ],
Oct 11 04:27:42 compute-0 sad_clarke[98752]:             "lv_name": "ceph_lv0",
Oct 11 04:27:42 compute-0 sad_clarke[98752]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:27:42 compute-0 sad_clarke[98752]:             "lv_size": "21470642176",
Oct 11 04:27:42 compute-0 sad_clarke[98752]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29ed28f5-c2da-4c6f-bb64-dc7391248f4a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:27:42 compute-0 sad_clarke[98752]:             "lv_uuid": "SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf",
Oct 11 04:27:42 compute-0 sad_clarke[98752]:             "name": "ceph_lv0",
Oct 11 04:27:42 compute-0 sad_clarke[98752]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:27:42 compute-0 sad_clarke[98752]:             "tags": {
Oct 11 04:27:42 compute-0 sad_clarke[98752]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:27:42 compute-0 sad_clarke[98752]:                 "ceph.block_uuid": "SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf",
Oct 11 04:27:42 compute-0 sad_clarke[98752]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:27:42 compute-0 sad_clarke[98752]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:27:42 compute-0 sad_clarke[98752]:                 "ceph.cluster_name": "ceph",
Oct 11 04:27:42 compute-0 sad_clarke[98752]:                 "ceph.crush_device_class": "",
Oct 11 04:27:42 compute-0 sad_clarke[98752]:                 "ceph.encrypted": "0",
Oct 11 04:27:42 compute-0 sad_clarke[98752]:                 "ceph.osd_fsid": "29ed28f5-c2da-4c6f-bb64-dc7391248f4a",
Oct 11 04:27:42 compute-0 sad_clarke[98752]:                 "ceph.osd_id": "0",
Oct 11 04:27:42 compute-0 sad_clarke[98752]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:27:42 compute-0 sad_clarke[98752]:                 "ceph.type": "block",
Oct 11 04:27:42 compute-0 sad_clarke[98752]:                 "ceph.vdo": "0"
Oct 11 04:27:42 compute-0 sad_clarke[98752]:             },
Oct 11 04:27:42 compute-0 sad_clarke[98752]:             "type": "block",
Oct 11 04:27:42 compute-0 sad_clarke[98752]:             "vg_name": "ceph_vg0"
Oct 11 04:27:42 compute-0 sad_clarke[98752]:         }
Oct 11 04:27:42 compute-0 sad_clarke[98752]:     ],
Oct 11 04:27:42 compute-0 sad_clarke[98752]:     "1": [
Oct 11 04:27:42 compute-0 sad_clarke[98752]:         {
Oct 11 04:27:42 compute-0 sad_clarke[98752]:             "devices": [
Oct 11 04:27:42 compute-0 sad_clarke[98752]:                 "/dev/loop4"
Oct 11 04:27:42 compute-0 sad_clarke[98752]:             ],
Oct 11 04:27:42 compute-0 sad_clarke[98752]:             "lv_name": "ceph_lv1",
Oct 11 04:27:42 compute-0 sad_clarke[98752]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:27:42 compute-0 sad_clarke[98752]:             "lv_size": "21470642176",
Oct 11 04:27:42 compute-0 sad_clarke[98752]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=235849fc-4683-43e5-9b6a-a0d6f8d1cee8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:27:42 compute-0 sad_clarke[98752]:             "lv_uuid": "N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol",
Oct 11 04:27:42 compute-0 sad_clarke[98752]:             "name": "ceph_lv1",
Oct 11 04:27:42 compute-0 sad_clarke[98752]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:27:42 compute-0 sad_clarke[98752]:             "tags": {
Oct 11 04:27:42 compute-0 sad_clarke[98752]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:27:42 compute-0 sad_clarke[98752]:                 "ceph.block_uuid": "N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol",
Oct 11 04:27:42 compute-0 sad_clarke[98752]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:27:42 compute-0 sad_clarke[98752]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:27:42 compute-0 sad_clarke[98752]:                 "ceph.cluster_name": "ceph",
Oct 11 04:27:42 compute-0 sad_clarke[98752]:                 "ceph.crush_device_class": "",
Oct 11 04:27:42 compute-0 sad_clarke[98752]:                 "ceph.encrypted": "0",
Oct 11 04:27:42 compute-0 sad_clarke[98752]:                 "ceph.osd_fsid": "235849fc-4683-43e5-9b6a-a0d6f8d1cee8",
Oct 11 04:27:42 compute-0 sad_clarke[98752]:                 "ceph.osd_id": "1",
Oct 11 04:27:42 compute-0 sad_clarke[98752]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:27:42 compute-0 sad_clarke[98752]:                 "ceph.type": "block",
Oct 11 04:27:42 compute-0 sad_clarke[98752]:                 "ceph.vdo": "0"
Oct 11 04:27:42 compute-0 sad_clarke[98752]:             },
Oct 11 04:27:42 compute-0 sad_clarke[98752]:             "type": "block",
Oct 11 04:27:42 compute-0 sad_clarke[98752]:             "vg_name": "ceph_vg1"
Oct 11 04:27:42 compute-0 sad_clarke[98752]:         }
Oct 11 04:27:42 compute-0 sad_clarke[98752]:     ],
Oct 11 04:27:42 compute-0 sad_clarke[98752]:     "2": [
Oct 11 04:27:42 compute-0 sad_clarke[98752]:         {
Oct 11 04:27:42 compute-0 sad_clarke[98752]:             "devices": [
Oct 11 04:27:42 compute-0 sad_clarke[98752]:                 "/dev/loop5"
Oct 11 04:27:42 compute-0 sad_clarke[98752]:             ],
Oct 11 04:27:42 compute-0 sad_clarke[98752]:             "lv_name": "ceph_lv2",
Oct 11 04:27:42 compute-0 sad_clarke[98752]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:27:42 compute-0 sad_clarke[98752]:             "lv_size": "21470642176",
Oct 11 04:27:42 compute-0 sad_clarke[98752]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=30486846-2cc7-4787-8e36-0fb42ef328c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:27:42 compute-0 sad_clarke[98752]:             "lv_uuid": "HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a",
Oct 11 04:27:42 compute-0 sad_clarke[98752]:             "name": "ceph_lv2",
Oct 11 04:27:42 compute-0 sad_clarke[98752]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:27:42 compute-0 sad_clarke[98752]:             "tags": {
Oct 11 04:27:42 compute-0 sad_clarke[98752]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:27:42 compute-0 sad_clarke[98752]:                 "ceph.block_uuid": "HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a",
Oct 11 04:27:42 compute-0 sad_clarke[98752]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:27:42 compute-0 sad_clarke[98752]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:27:42 compute-0 sad_clarke[98752]:                 "ceph.cluster_name": "ceph",
Oct 11 04:27:42 compute-0 sad_clarke[98752]:                 "ceph.crush_device_class": "",
Oct 11 04:27:42 compute-0 sad_clarke[98752]:                 "ceph.encrypted": "0",
Oct 11 04:27:42 compute-0 sad_clarke[98752]:                 "ceph.osd_fsid": "30486846-2cc7-4787-8e36-0fb42ef328c5",
Oct 11 04:27:42 compute-0 sad_clarke[98752]:                 "ceph.osd_id": "2",
Oct 11 04:27:42 compute-0 sad_clarke[98752]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:27:42 compute-0 sad_clarke[98752]:                 "ceph.type": "block",
Oct 11 04:27:42 compute-0 sad_clarke[98752]:                 "ceph.vdo": "0"
Oct 11 04:27:42 compute-0 sad_clarke[98752]:             },
Oct 11 04:27:42 compute-0 sad_clarke[98752]:             "type": "block",
Oct 11 04:27:42 compute-0 sad_clarke[98752]:             "vg_name": "ceph_vg2"
Oct 11 04:27:42 compute-0 sad_clarke[98752]:         }
Oct 11 04:27:42 compute-0 sad_clarke[98752]:     ]
Oct 11 04:27:42 compute-0 sad_clarke[98752]: }
Oct 11 04:27:42 compute-0 systemd[1]: libpod-ec0dba9ce4363bb35bad96731682ad0cd6227acc2874ce80ce28e4a9d8192666.scope: Deactivated successfully.
Oct 11 04:27:42 compute-0 conmon[98752]: conmon ec0dba9ce4363bb35bad <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ec0dba9ce4363bb35bad96731682ad0cd6227acc2874ce80ce28e4a9d8192666.scope/container/memory.events
Oct 11 04:27:42 compute-0 podman[98736]: 2025-10-11 04:27:42.255393412 +0000 UTC m=+0.907961131 container died ec0dba9ce4363bb35bad96731682ad0cd6227acc2874ce80ce28e4a9d8192666 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_clarke, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 11 04:27:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-d264964f4f39dad1fb7d91438fc3f5058632f3a74b0db412e21582c0b347df6a-merged.mount: Deactivated successfully.
Oct 11 04:27:42 compute-0 podman[98736]: 2025-10-11 04:27:42.324922445 +0000 UTC m=+0.977490114 container remove ec0dba9ce4363bb35bad96731682ad0cd6227acc2874ce80ce28e4a9d8192666 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_clarke, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:27:42 compute-0 systemd[1]: libpod-conmon-ec0dba9ce4363bb35bad96731682ad0cd6227acc2874ce80ce28e4a9d8192666.scope: Deactivated successfully.
Oct 11 04:27:42 compute-0 sudo[98594]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:42 compute-0 sudo[98789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:27:42 compute-0 sudo[98789]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:42 compute-0 sudo[98789]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:42 compute-0 sudo[98814]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:27:42 compute-0 sudo[98814]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:42 compute-0 sudo[98814]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:42 compute-0 sudo[98839]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:27:42 compute-0 sudo[98839]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:42 compute-0 sudo[98839]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:42 compute-0 sudo[98905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -- raw list --format json
Oct 11 04:27:42 compute-0 sudo[98905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:42 compute-0 sudo[99061]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wltpkslrmevxcyfrublypdrpociggjlk ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1760156862.5568478-33267-69055870064575/async_wrapper.py j470280230182 30 /home/zuul/.ansible/tmp/ansible-tmp-1760156862.5568478-33267-69055870064575/AnsiballZ_command.py _'
Oct 11 04:27:42 compute-0 sudo[99061]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:27:43 compute-0 podman[99079]: 2025-10-11 04:27:43.041122667 +0000 UTC m=+0.050744366 container create 6c2067c2fd6fc62ccac3fc6559932620ffbdc86ae2e342f8ddb0a428788fca13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_stonebraker, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 11 04:27:43 compute-0 ansible-async_wrapper.py[99064]: Invoked with j470280230182 30 /home/zuul/.ansible/tmp/ansible-tmp-1760156862.5568478-33267-69055870064575/AnsiballZ_command.py _
Oct 11 04:27:43 compute-0 ansible-async_wrapper.py[99097]: Starting module and watcher
Oct 11 04:27:43 compute-0 ansible-async_wrapper.py[99097]: Start watching 99098 (30)
Oct 11 04:27:43 compute-0 ansible-async_wrapper.py[99098]: Start module (99098)
Oct 11 04:27:43 compute-0 ansible-async_wrapper.py[99064]: Return async_wrapper task started.
Oct 11 04:27:43 compute-0 systemd[1]: Started libpod-conmon-6c2067c2fd6fc62ccac3fc6559932620ffbdc86ae2e342f8ddb0a428788fca13.scope.
Oct 11 04:27:43 compute-0 sudo[99061]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:43 compute-0 podman[99079]: 2025-10-11 04:27:43.018767099 +0000 UTC m=+0.028388868 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:27:43 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:27:43 compute-0 podman[99079]: 2025-10-11 04:27:43.137229472 +0000 UTC m=+0.146851151 container init 6c2067c2fd6fc62ccac3fc6559932620ffbdc86ae2e342f8ddb0a428788fca13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_stonebraker, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 11 04:27:43 compute-0 podman[99079]: 2025-10-11 04:27:43.144821911 +0000 UTC m=+0.154443630 container start 6c2067c2fd6fc62ccac3fc6559932620ffbdc86ae2e342f8ddb0a428788fca13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 11 04:27:43 compute-0 zen_stonebraker[99100]: 167 167
Oct 11 04:27:43 compute-0 systemd[1]: libpod-6c2067c2fd6fc62ccac3fc6559932620ffbdc86ae2e342f8ddb0a428788fca13.scope: Deactivated successfully.
Oct 11 04:27:43 compute-0 podman[99079]: 2025-10-11 04:27:43.150942124 +0000 UTC m=+0.160563823 container attach 6c2067c2fd6fc62ccac3fc6559932620ffbdc86ae2e342f8ddb0a428788fca13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:27:43 compute-0 podman[99079]: 2025-10-11 04:27:43.151133319 +0000 UTC m=+0.160754998 container died 6c2067c2fd6fc62ccac3fc6559932620ffbdc86ae2e342f8ddb0a428788fca13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_stonebraker, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Oct 11 04:27:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-aa2a0766e9821bb3084a3f6453661f58b9fbf0a817810bc2e5405df0981ea53a-merged.mount: Deactivated successfully.
Oct 11 04:27:43 compute-0 ceph-mon[74243]: pgmap v75: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:27:43 compute-0 podman[99079]: 2025-10-11 04:27:43.191505575 +0000 UTC m=+0.201127264 container remove 6c2067c2fd6fc62ccac3fc6559932620ffbdc86ae2e342f8ddb0a428788fca13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_stonebraker, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:27:43 compute-0 systemd[1]: libpod-conmon-6c2067c2fd6fc62ccac3fc6559932620ffbdc86ae2e342f8ddb0a428788fca13.scope: Deactivated successfully.
Oct 11 04:27:43 compute-0 python3[99099]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:27:43 compute-0 podman[99118]: 2025-10-11 04:27:43.285500378 +0000 UTC m=+0.041984118 container create 57169c1ae0537dca2ec60cc450d60eaa9ea15cfbb45ab30706b809e0268b978e (image=quay.io/ceph/ceph:v18, name=thirsty_fermat, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True)
Oct 11 04:27:43 compute-0 systemd[1]: Started libpod-conmon-57169c1ae0537dca2ec60cc450d60eaa9ea15cfbb45ab30706b809e0268b978e.scope.
Oct 11 04:27:43 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:27:43 compute-0 podman[99138]: 2025-10-11 04:27:43.347796321 +0000 UTC m=+0.033532387 container create af334ed0dc89680f4c77a6523098d1477c8b1f8827271a1b4f902083c980c0c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_sutherland, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 11 04:27:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50f490523b03dd523f5b1d8bb7cb880dc688391f6fc3b648dc819b06d81c0e28/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50f490523b03dd523f5b1d8bb7cb880dc688391f6fc3b648dc819b06d81c0e28/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:43 compute-0 podman[99118]: 2025-10-11 04:27:43.268867633 +0000 UTC m=+0.025351373 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:27:43 compute-0 podman[99118]: 2025-10-11 04:27:43.371125572 +0000 UTC m=+0.127609352 container init 57169c1ae0537dca2ec60cc450d60eaa9ea15cfbb45ab30706b809e0268b978e (image=quay.io/ceph/ceph:v18, name=thirsty_fermat, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 11 04:27:43 compute-0 podman[99118]: 2025-10-11 04:27:43.386074695 +0000 UTC m=+0.142558435 container start 57169c1ae0537dca2ec60cc450d60eaa9ea15cfbb45ab30706b809e0268b978e (image=quay.io/ceph/ceph:v18, name=thirsty_fermat, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:27:43 compute-0 podman[99118]: 2025-10-11 04:27:43.389361286 +0000 UTC m=+0.145845056 container attach 57169c1ae0537dca2ec60cc450d60eaa9ea15cfbb45ab30706b809e0268b978e (image=quay.io/ceph/ceph:v18, name=thirsty_fermat, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:27:43 compute-0 systemd[1]: Started libpod-conmon-af334ed0dc89680f4c77a6523098d1477c8b1f8827271a1b4f902083c980c0c1.scope.
Oct 11 04:27:43 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:27:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4337fd5965d0537174e7b739f779a3309333098b0d78c6b015537e4971964743/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4337fd5965d0537174e7b739f779a3309333098b0d78c6b015537e4971964743/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4337fd5965d0537174e7b739f779a3309333098b0d78c6b015537e4971964743/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4337fd5965d0537174e7b739f779a3309333098b0d78c6b015537e4971964743/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:43 compute-0 podman[99138]: 2025-10-11 04:27:43.333425832 +0000 UTC m=+0.019161918 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:27:43 compute-0 podman[99138]: 2025-10-11 04:27:43.445786023 +0000 UTC m=+0.131522109 container init af334ed0dc89680f4c77a6523098d1477c8b1f8827271a1b4f902083c980c0c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Oct 11 04:27:43 compute-0 podman[99138]: 2025-10-11 04:27:43.453467164 +0000 UTC m=+0.139203230 container start af334ed0dc89680f4c77a6523098d1477c8b1f8827271a1b4f902083c980c0c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_sutherland, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 11 04:27:43 compute-0 podman[99138]: 2025-10-11 04:27:43.456581442 +0000 UTC m=+0.142317568 container attach af334ed0dc89680f4c77a6523098d1477c8b1f8827271a1b4f902083c980c0c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_sutherland, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 11 04:27:43 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.14258 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 11 04:27:43 compute-0 thirsty_fermat[99148]: 
Oct 11 04:27:43 compute-0 thirsty_fermat[99148]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Oct 11 04:27:43 compute-0 systemd[1]: libpod-57169c1ae0537dca2ec60cc450d60eaa9ea15cfbb45ab30706b809e0268b978e.scope: Deactivated successfully.
Oct 11 04:27:43 compute-0 podman[99118]: 2025-10-11 04:27:43.941264273 +0000 UTC m=+0.697748043 container died 57169c1ae0537dca2ec60cc450d60eaa9ea15cfbb45ab30706b809e0268b978e (image=quay.io/ceph/ceph:v18, name=thirsty_fermat, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 11 04:27:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-50f490523b03dd523f5b1d8bb7cb880dc688391f6fc3b648dc819b06d81c0e28-merged.mount: Deactivated successfully.
Oct 11 04:27:44 compute-0 podman[99118]: 2025-10-11 04:27:44.009170295 +0000 UTC m=+0.765654065 container remove 57169c1ae0537dca2ec60cc450d60eaa9ea15cfbb45ab30706b809e0268b978e (image=quay.io/ceph/ceph:v18, name=thirsty_fermat, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 04:27:44 compute-0 systemd[1]: libpod-conmon-57169c1ae0537dca2ec60cc450d60eaa9ea15cfbb45ab30706b809e0268b978e.scope: Deactivated successfully.
Oct 11 04:27:44 compute-0 ansible-async_wrapper.py[99098]: Module complete (99098)
Oct 11 04:27:44 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v76: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:27:44 compute-0 sudo[99255]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpjcocgxhdejxpdaojqmhnybjcvxrjmk ; /usr/bin/python3'
Oct 11 04:27:44 compute-0 sudo[99255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:27:44 compute-0 python3[99260]: ansible-ansible.legacy.async_status Invoked with jid=j470280230182.99064 mode=status _async_dir=/root/.ansible_async
Oct 11 04:27:44 compute-0 sudo[99255]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:44 compute-0 musing_sutherland[99158]: {
Oct 11 04:27:44 compute-0 musing_sutherland[99158]:     "235849fc-4683-43e5-9b6a-a0d6f8d1cee8": {
Oct 11 04:27:44 compute-0 musing_sutherland[99158]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:27:44 compute-0 musing_sutherland[99158]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:27:44 compute-0 musing_sutherland[99158]:         "osd_id": 1,
Oct 11 04:27:44 compute-0 musing_sutherland[99158]:         "osd_uuid": "235849fc-4683-43e5-9b6a-a0d6f8d1cee8",
Oct 11 04:27:44 compute-0 musing_sutherland[99158]:         "type": "bluestore"
Oct 11 04:27:44 compute-0 musing_sutherland[99158]:     },
Oct 11 04:27:44 compute-0 musing_sutherland[99158]:     "29ed28f5-c2da-4c6f-bb64-dc7391248f4a": {
Oct 11 04:27:44 compute-0 musing_sutherland[99158]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:27:44 compute-0 musing_sutherland[99158]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:27:44 compute-0 musing_sutherland[99158]:         "osd_id": 0,
Oct 11 04:27:44 compute-0 musing_sutherland[99158]:         "osd_uuid": "29ed28f5-c2da-4c6f-bb64-dc7391248f4a",
Oct 11 04:27:44 compute-0 musing_sutherland[99158]:         "type": "bluestore"
Oct 11 04:27:44 compute-0 musing_sutherland[99158]:     },
Oct 11 04:27:44 compute-0 musing_sutherland[99158]:     "30486846-2cc7-4787-8e36-0fb42ef328c5": {
Oct 11 04:27:44 compute-0 musing_sutherland[99158]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:27:44 compute-0 musing_sutherland[99158]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:27:44 compute-0 musing_sutherland[99158]:         "osd_id": 2,
Oct 11 04:27:44 compute-0 musing_sutherland[99158]:         "osd_uuid": "30486846-2cc7-4787-8e36-0fb42ef328c5",
Oct 11 04:27:44 compute-0 musing_sutherland[99158]:         "type": "bluestore"
Oct 11 04:27:44 compute-0 musing_sutherland[99158]:     }
Oct 11 04:27:44 compute-0 musing_sutherland[99158]: }
Oct 11 04:27:44 compute-0 systemd[1]: libpod-af334ed0dc89680f4c77a6523098d1477c8b1f8827271a1b4f902083c980c0c1.scope: Deactivated successfully.
Oct 11 04:27:44 compute-0 systemd[1]: libpod-af334ed0dc89680f4c77a6523098d1477c8b1f8827271a1b4f902083c980c0c1.scope: Consumed 1.054s CPU time.
Oct 11 04:27:44 compute-0 podman[99138]: 2025-10-11 04:27:44.507981688 +0000 UTC m=+1.193717794 container died af334ed0dc89680f4c77a6523098d1477c8b1f8827271a1b4f902083c980c0c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_sutherland, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:27:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-4337fd5965d0537174e7b739f779a3309333098b0d78c6b015537e4971964743-merged.mount: Deactivated successfully.
Oct 11 04:27:44 compute-0 sudo[99331]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khnlvbvurtllfwuranrehjufsjrwjbjy ; /usr/bin/python3'
Oct 11 04:27:44 compute-0 sudo[99331]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:27:44 compute-0 podman[99138]: 2025-10-11 04:27:44.583672745 +0000 UTC m=+1.269408851 container remove af334ed0dc89680f4c77a6523098d1477c8b1f8827271a1b4f902083c980c0c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_sutherland, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 11 04:27:44 compute-0 systemd[1]: libpod-conmon-af334ed0dc89680f4c77a6523098d1477c8b1f8827271a1b4f902083c980c0c1.scope: Deactivated successfully.
Oct 11 04:27:44 compute-0 sudo[98905]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:44 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:27:44 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:44 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:27:44 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:44 compute-0 ceph-mgr[74542]: [progress INFO root] update: starting ev 933b04f4-a124-4774-9b27-5c97cefba307 (Updating rgw.rgw deployment (+1 -> 1))
Oct 11 04:27:44 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.xojlng", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0) v1
Oct 11 04:27:44 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.xojlng", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct 11 04:27:44 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.xojlng", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct 11 04:27:44 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0) v1
Oct 11 04:27:44 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:44 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:27:44 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:27:44 compute-0 ceph-mgr[74542]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.xojlng on compute-0
Oct 11 04:27:44 compute-0 ceph-mgr[74542]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.xojlng on compute-0
Oct 11 04:27:44 compute-0 python3[99334]: ansible-ansible.legacy.async_status Invoked with jid=j470280230182.99064 mode=cleanup _async_dir=/root/.ansible_async
Oct 11 04:27:44 compute-0 sudo[99331]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:44 compute-0 sudo[99335]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:27:44 compute-0 sudo[99335]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:44 compute-0 sudo[99335]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:44 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:27:44 compute-0 sudo[99360]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:27:44 compute-0 sudo[99360]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:44 compute-0 sudo[99360]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:44 compute-0 sudo[99385]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:27:44 compute-0 sudo[99385]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:44 compute-0 sudo[99385]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:44 compute-0 sudo[99410]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a
Oct 11 04:27:44 compute-0 sudo[99410]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:45 compute-0 sudo[99471]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wuphwswoobpcbsybanxxarzhhzetjynx ; /usr/bin/python3'
Oct 11 04:27:45 compute-0 sudo[99471]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:27:45 compute-0 ceph-mon[74243]: from='client.14258 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 11 04:27:45 compute-0 ceph-mon[74243]: pgmap v76: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:27:45 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:45 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:45 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.xojlng", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct 11 04:27:45 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.xojlng", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct 11 04:27:45 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:45 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:27:45 compute-0 podman[99503]: 2025-10-11 04:27:45.273674822 +0000 UTC m=+0.035300991 container create 80e08dea58f7a63f3d2e545fc308ed74ac0f8e5fcbebfbbe62a43896e367e188 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_booth, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:27:45 compute-0 python3[99477]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:27:45 compute-0 systemd[1]: Started libpod-conmon-80e08dea58f7a63f3d2e545fc308ed74ac0f8e5fcbebfbbe62a43896e367e188.scope.
Oct 11 04:27:45 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:27:45 compute-0 podman[99503]: 2025-10-11 04:27:45.351170224 +0000 UTC m=+0.112796383 container init 80e08dea58f7a63f3d2e545fc308ed74ac0f8e5fcbebfbbe62a43896e367e188 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_booth, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:27:45 compute-0 podman[99503]: 2025-10-11 04:27:45.258509284 +0000 UTC m=+0.020135453 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:27:45 compute-0 podman[99503]: 2025-10-11 04:27:45.36426065 +0000 UTC m=+0.125886819 container start 80e08dea58f7a63f3d2e545fc308ed74ac0f8e5fcbebfbbe62a43896e367e188 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_booth, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 11 04:27:45 compute-0 podman[99503]: 2025-10-11 04:27:45.368045364 +0000 UTC m=+0.129671523 container attach 80e08dea58f7a63f3d2e545fc308ed74ac0f8e5fcbebfbbe62a43896e367e188 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:27:45 compute-0 sweet_booth[99520]: 167 167
Oct 11 04:27:45 compute-0 systemd[1]: libpod-80e08dea58f7a63f3d2e545fc308ed74ac0f8e5fcbebfbbe62a43896e367e188.scope: Deactivated successfully.
Oct 11 04:27:45 compute-0 podman[99519]: 2025-10-11 04:27:45.373250004 +0000 UTC m=+0.057690869 container create ec27a1561b0c1829882d191ad4677e1ccfe6ee92f36fbe46a735e9be1e58d75a (image=quay.io/ceph/ceph:v18, name=beautiful_saha, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:27:45 compute-0 podman[99503]: 2025-10-11 04:27:45.388397912 +0000 UTC m=+0.150024061 container died 80e08dea58f7a63f3d2e545fc308ed74ac0f8e5fcbebfbbe62a43896e367e188 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_booth, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 11 04:27:45 compute-0 systemd[1]: Started libpod-conmon-ec27a1561b0c1829882d191ad4677e1ccfe6ee92f36fbe46a735e9be1e58d75a.scope.
Oct 11 04:27:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-fbf0bab12272bcbc428212cb6a9c1c5d3f4ef118289dd92af3af88b14119c66d-merged.mount: Deactivated successfully.
Oct 11 04:27:45 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:27:45 compute-0 podman[99503]: 2025-10-11 04:27:45.435524146 +0000 UTC m=+0.197150305 container remove 80e08dea58f7a63f3d2e545fc308ed74ac0f8e5fcbebfbbe62a43896e367e188 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_booth, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 11 04:27:45 compute-0 podman[99519]: 2025-10-11 04:27:45.341927733 +0000 UTC m=+0.026368638 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:27:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87242ff3d923332bf09e61d187653029823df7e65e91c93355b447a8deb13e09/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87242ff3d923332bf09e61d187653029823df7e65e91c93355b447a8deb13e09/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:45 compute-0 systemd[1]: libpod-conmon-80e08dea58f7a63f3d2e545fc308ed74ac0f8e5fcbebfbbe62a43896e367e188.scope: Deactivated successfully.
Oct 11 04:27:45 compute-0 podman[99519]: 2025-10-11 04:27:45.461439252 +0000 UTC m=+0.145880147 container init ec27a1561b0c1829882d191ad4677e1ccfe6ee92f36fbe46a735e9be1e58d75a (image=quay.io/ceph/ceph:v18, name=beautiful_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Oct 11 04:27:45 compute-0 podman[99519]: 2025-10-11 04:27:45.467492743 +0000 UTC m=+0.151933608 container start ec27a1561b0c1829882d191ad4677e1ccfe6ee92f36fbe46a735e9be1e58d75a (image=quay.io/ceph/ceph:v18, name=beautiful_saha, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 11 04:27:45 compute-0 podman[99519]: 2025-10-11 04:27:45.472532869 +0000 UTC m=+0.156973774 container attach ec27a1561b0c1829882d191ad4677e1ccfe6ee92f36fbe46a735e9be1e58d75a (image=quay.io/ceph/ceph:v18, name=beautiful_saha, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 11 04:27:45 compute-0 systemd[1]: Reloading.
Oct 11 04:27:45 compute-0 systemd-rc-local-generator[99584]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:27:45 compute-0 systemd-sysv-generator[99588]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:27:45 compute-0 systemd[1]: Reloading.
Oct 11 04:27:45 compute-0 systemd-sysv-generator[99644]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:27:45 compute-0 systemd-rc-local-generator[99640]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:27:46 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.14260 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 11 04:27:46 compute-0 beautiful_saha[99547]: 
Oct 11 04:27:46 compute-0 beautiful_saha[99547]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Oct 11 04:27:46 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v77: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:27:46 compute-0 podman[99519]: 2025-10-11 04:27:46.049978348 +0000 UTC m=+0.734419263 container died ec27a1561b0c1829882d191ad4677e1ccfe6ee92f36fbe46a735e9be1e58d75a (image=quay.io/ceph/ceph:v18, name=beautiful_saha, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 11 04:27:46 compute-0 systemd[1]: libpod-ec27a1561b0c1829882d191ad4677e1ccfe6ee92f36fbe46a735e9be1e58d75a.scope: Deactivated successfully.
Oct 11 04:27:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-87242ff3d923332bf09e61d187653029823df7e65e91c93355b447a8deb13e09-merged.mount: Deactivated successfully.
Oct 11 04:27:46 compute-0 systemd[1]: Starting Ceph rgw.rgw.compute-0.xojlng for 166d0489-2ae7-59eb-961c-c1b5cda4b45a...
Oct 11 04:27:46 compute-0 podman[99519]: 2025-10-11 04:27:46.118970007 +0000 UTC m=+0.803410902 container remove ec27a1561b0c1829882d191ad4677e1ccfe6ee92f36fbe46a735e9be1e58d75a (image=quay.io/ceph/ceph:v18, name=beautiful_saha, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 11 04:27:46 compute-0 sudo[99471]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:46 compute-0 systemd[1]: libpod-conmon-ec27a1561b0c1829882d191ad4677e1ccfe6ee92f36fbe46a735e9be1e58d75a.scope: Deactivated successfully.
Oct 11 04:27:46 compute-0 ceph-mon[74243]: Deploying daemon rgw.rgw.compute-0.xojlng on compute-0
Oct 11 04:27:46 compute-0 podman[99713]: 2025-10-11 04:27:46.431703172 +0000 UTC m=+0.063952675 container create 468509ade86d4e0bedbff6d56a11fbfb1873cb4e39438031a78beabf7ed0ae46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-rgw-rgw-compute-0-xojlng, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 11 04:27:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85ac1d74484e958d8dbd3ebc34518a32463fc51941c7c0886915f3b72bc054fa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85ac1d74484e958d8dbd3ebc34518a32463fc51941c7c0886915f3b72bc054fa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85ac1d74484e958d8dbd3ebc34518a32463fc51941c7c0886915f3b72bc054fa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85ac1d74484e958d8dbd3ebc34518a32463fc51941c7c0886915f3b72bc054fa/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.xojlng supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:46 compute-0 podman[99713]: 2025-10-11 04:27:46.404061087 +0000 UTC m=+0.036310590 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:27:46 compute-0 podman[99713]: 2025-10-11 04:27:46.507803956 +0000 UTC m=+0.140053519 container init 468509ade86d4e0bedbff6d56a11fbfb1873cb4e39438031a78beabf7ed0ae46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-rgw-rgw-compute-0-xojlng, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:27:46 compute-0 podman[99713]: 2025-10-11 04:27:46.517461846 +0000 UTC m=+0.149711359 container start 468509ade86d4e0bedbff6d56a11fbfb1873cb4e39438031a78beabf7ed0ae46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-rgw-rgw-compute-0-xojlng, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:27:46 compute-0 bash[99713]: 468509ade86d4e0bedbff6d56a11fbfb1873cb4e39438031a78beabf7ed0ae46
Oct 11 04:27:46 compute-0 systemd[1]: Started Ceph rgw.rgw.compute-0.xojlng for 166d0489-2ae7-59eb-961c-c1b5cda4b45a.
Oct 11 04:27:46 compute-0 sudo[99410]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:46 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:27:46 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:46 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:27:46 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:46 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Oct 11 04:27:46 compute-0 radosgw[99732]: deferred set uid:gid to 167:167 (ceph:ceph)
Oct 11 04:27:46 compute-0 radosgw[99732]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process radosgw, pid 2
Oct 11 04:27:46 compute-0 radosgw[99732]: framework: beast
Oct 11 04:27:46 compute-0 radosgw[99732]: framework conf key: endpoint, val: 192.168.122.100:8082
Oct 11 04:27:46 compute-0 radosgw[99732]: init_numa not setting numa affinity
Oct 11 04:27:46 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:46 compute-0 ceph-mgr[74542]: [progress INFO root] complete: finished ev 933b04f4-a124-4774-9b27-5c97cefba307 (Updating rgw.rgw deployment (+1 -> 1))
Oct 11 04:27:46 compute-0 ceph-mgr[74542]: [progress INFO root] Completed event 933b04f4-a124-4774-9b27-5c97cefba307 (Updating rgw.rgw deployment (+1 -> 1)) in 2 seconds
Oct 11 04:27:46 compute-0 ceph-mgr[74542]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0
Oct 11 04:27:46 compute-0 ceph-mgr[74542]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Oct 11 04:27:46 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Oct 11 04:27:46 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:46 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Oct 11 04:27:46 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:46 compute-0 ceph-mgr[74542]: [progress INFO root] update: starting ev 907aec57-6444-41a5-b644-5dd6071133ac (Updating mds.cephfs deployment (+1 -> 1))
Oct 11 04:27:46 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.jbpltj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) v1
Oct 11 04:27:46 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.jbpltj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct 11 04:27:46 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.jbpltj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct 11 04:27:46 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:27:46 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:27:46 compute-0 ceph-mgr[74542]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.jbpltj on compute-0
Oct 11 04:27:46 compute-0 ceph-mgr[74542]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.jbpltj on compute-0
Oct 11 04:27:46 compute-0 sudo[99794]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:27:46 compute-0 sudo[99794]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:46 compute-0 sudo[99794]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:46 compute-0 sudo[99863]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-byrnbnvoaljdtbxgnazpvtfmtdcgzwbh ; /usr/bin/python3'
Oct 11 04:27:46 compute-0 sudo[99863]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:27:46 compute-0 sudo[99820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:27:46 compute-0 sudo[99820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:46 compute-0 sudo[99820]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:46 compute-0 sudo[99870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:27:46 compute-0 sudo[99870]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:46 compute-0 sudo[99870]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:46 compute-0 sudo[99895]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a
Oct 11 04:27:46 compute-0 sudo[99895]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:46 compute-0 python3[99867]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:27:47 compute-0 podman[99920]: 2025-10-11 04:27:47.068960014 +0000 UTC m=+0.081193512 container create ab1e7d74714d9b14ee1611122f9fb3f24da1dadd678f0cf10592945af6e907af (image=quay.io/ceph/ceph:v18, name=gracious_satoshi, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:27:47 compute-0 podman[99920]: 2025-10-11 04:27:47.030687836 +0000 UTC m=+0.042921404 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:27:47 compute-0 systemd[1]: Started libpod-conmon-ab1e7d74714d9b14ee1611122f9fb3f24da1dadd678f0cf10592945af6e907af.scope.
Oct 11 04:27:47 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:27:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93215fd8bbd113d1baa6550cc3081c278339944a6ae5cf5b8c8303d093875b5c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93215fd8bbd113d1baa6550cc3081c278339944a6ae5cf5b8c8303d093875b5c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:47 compute-0 podman[99920]: 2025-10-11 04:27:47.182832084 +0000 UTC m=+0.195065582 container init ab1e7d74714d9b14ee1611122f9fb3f24da1dadd678f0cf10592945af6e907af (image=quay.io/ceph/ceph:v18, name=gracious_satoshi, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:27:47 compute-0 podman[99920]: 2025-10-11 04:27:47.193842726 +0000 UTC m=+0.206076204 container start ab1e7d74714d9b14ee1611122f9fb3f24da1dadd678f0cf10592945af6e907af (image=quay.io/ceph/ceph:v18, name=gracious_satoshi, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:27:47 compute-0 podman[99920]: 2025-10-11 04:27:47.197493857 +0000 UTC m=+0.209727335 container attach ab1e7d74714d9b14ee1611122f9fb3f24da1dadd678f0cf10592945af6e907af (image=quay.io/ceph/ceph:v18, name=gracious_satoshi, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:27:47 compute-0 ceph-mon[74243]: from='client.14260 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 11 04:27:47 compute-0 ceph-mon[74243]: pgmap v77: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:27:47 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:47 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:47 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:47 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:47 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:47 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.jbpltj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct 11 04:27:47 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.jbpltj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct 11 04:27:47 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:27:47 compute-0 podman[99982]: 2025-10-11 04:27:47.38902175 +0000 UTC m=+0.053782483 container create b2b480e8dd84a8fa2d1fb1315e3506a683bb4190156d91604076a12e4b0a7d49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_chandrasekhar, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:27:47 compute-0 systemd[1]: Started libpod-conmon-b2b480e8dd84a8fa2d1fb1315e3506a683bb4190156d91604076a12e4b0a7d49.scope.
Oct 11 04:27:47 compute-0 podman[99982]: 2025-10-11 04:27:47.361232082 +0000 UTC m=+0.025992855 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:27:47 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:27:47 compute-0 podman[99982]: 2025-10-11 04:27:47.480321821 +0000 UTC m=+0.145082604 container init b2b480e8dd84a8fa2d1fb1315e3506a683bb4190156d91604076a12e4b0a7d49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_chandrasekhar, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 11 04:27:47 compute-0 podman[99982]: 2025-10-11 04:27:47.489841807 +0000 UTC m=+0.154602540 container start b2b480e8dd84a8fa2d1fb1315e3506a683bb4190156d91604076a12e4b0a7d49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_chandrasekhar, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 11 04:27:47 compute-0 podman[99982]: 2025-10-11 04:27:47.493649371 +0000 UTC m=+0.158410164 container attach b2b480e8dd84a8fa2d1fb1315e3506a683bb4190156d91604076a12e4b0a7d49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_chandrasekhar, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 11 04:27:47 compute-0 beautiful_chandrasekhar[99999]: 167 167
Oct 11 04:27:47 compute-0 systemd[1]: libpod-b2b480e8dd84a8fa2d1fb1315e3506a683bb4190156d91604076a12e4b0a7d49.scope: Deactivated successfully.
Oct 11 04:27:47 compute-0 conmon[99999]: conmon b2b480e8dd84a8fa2d1f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b2b480e8dd84a8fa2d1fb1315e3506a683bb4190156d91604076a12e4b0a7d49.scope/container/memory.events
Oct 11 04:27:47 compute-0 podman[99982]: 2025-10-11 04:27:47.49765482 +0000 UTC m=+0.162415543 container died b2b480e8dd84a8fa2d1fb1315e3506a683bb4190156d91604076a12e4b0a7d49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_chandrasekhar, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:27:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-0a3494798cf6002edc3bf1e20e787301662687731b841b2cbcb4db5d8f77c293-merged.mount: Deactivated successfully.
Oct 11 04:27:47 compute-0 podman[99982]: 2025-10-11 04:27:47.55417233 +0000 UTC m=+0.218933053 container remove b2b480e8dd84a8fa2d1fb1315e3506a683bb4190156d91604076a12e4b0a7d49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_chandrasekhar, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2)
Oct 11 04:27:47 compute-0 systemd[1]: libpod-conmon-b2b480e8dd84a8fa2d1fb1315e3506a683bb4190156d91604076a12e4b0a7d49.scope: Deactivated successfully.
Oct 11 04:27:47 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Oct 11 04:27:47 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e32 e32: 3 total, 3 up, 3 in
Oct 11 04:27:47 compute-0 systemd[1]: Reloading.
Oct 11 04:27:47 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 3 up, 3 in
Oct 11 04:27:47 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0) v1
Oct 11 04:27:47 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3354299006' entity='client.rgw.rgw.compute-0.xojlng' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Oct 11 04:27:47 compute-0 systemd-rc-local-generator[100062]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:27:47 compute-0 systemd-sysv-generator[100067]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:27:47 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.14265 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 11 04:27:47 compute-0 gracious_satoshi[99943]: 
Oct 11 04:27:47 compute-0 gracious_satoshi[99943]: [{"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_frontend_port": 8082}}]
Oct 11 04:27:47 compute-0 podman[99920]: 2025-10-11 04:27:47.874753099 +0000 UTC m=+0.886986607 container died ab1e7d74714d9b14ee1611122f9fb3f24da1dadd678f0cf10592945af6e907af (image=quay.io/ceph/ceph:v18, name=gracious_satoshi, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:27:47 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 32 pg[8.0( empty local-lis/les=0/0 n=0 ec=32/32 lis/c=0/0 les/c/f=0/0/0 sis=32) [1] r=0 lpr=32 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:27:47 compute-0 systemd[1]: libpod-ab1e7d74714d9b14ee1611122f9fb3f24da1dadd678f0cf10592945af6e907af.scope: Deactivated successfully.
Oct 11 04:27:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-93215fd8bbd113d1baa6550cc3081c278339944a6ae5cf5b8c8303d093875b5c-merged.mount: Deactivated successfully.
Oct 11 04:27:47 compute-0 podman[99920]: 2025-10-11 04:27:47.961625621 +0000 UTC m=+0.973859109 container remove ab1e7d74714d9b14ee1611122f9fb3f24da1dadd678f0cf10592945af6e907af (image=quay.io/ceph/ceph:v18, name=gracious_satoshi, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 11 04:27:47 compute-0 systemd[1]: libpod-conmon-ab1e7d74714d9b14ee1611122f9fb3f24da1dadd678f0cf10592945af6e907af.scope: Deactivated successfully.
Oct 11 04:27:47 compute-0 systemd[1]: Reloading.
Oct 11 04:27:47 compute-0 sudo[99863]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:48 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v79: 8 pgs: 1 unknown, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:27:48 compute-0 systemd-sysv-generator[100117]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:27:48 compute-0 systemd-rc-local-generator[100110]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:27:48 compute-0 ansible-async_wrapper.py[99097]: Done in kid B.
Oct 11 04:27:48 compute-0 ceph-mon[74243]: Saving service rgw.rgw spec with placement compute-0
Oct 11 04:27:48 compute-0 ceph-mon[74243]: Deploying daemon mds.cephfs.compute-0.jbpltj on compute-0
Oct 11 04:27:48 compute-0 ceph-mon[74243]: osdmap e32: 3 total, 3 up, 3 in
Oct 11 04:27:48 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/3354299006' entity='client.rgw.rgw.compute-0.xojlng' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Oct 11 04:27:48 compute-0 systemd[1]: Starting Ceph mds.cephfs.compute-0.jbpltj for 166d0489-2ae7-59eb-961c-c1b5cda4b45a...
Oct 11 04:27:48 compute-0 podman[100177]: 2025-10-11 04:27:48.575310678 +0000 UTC m=+0.073049229 container create 09dc062a447a63cf75cf20247e5a64ee60183024546dec8357770bdd9474fe97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mds-cephfs-compute-0-jbpltj, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:27:48 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Oct 11 04:27:48 compute-0 ceph-mon[74243]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 11 04:27:48 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3354299006' entity='client.rgw.rgw.compute-0.xojlng' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Oct 11 04:27:48 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e33 e33: 3 total, 3 up, 3 in
Oct 11 04:27:48 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 3 up, 3 in
Oct 11 04:27:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/605dab44d703bc018f6fee10b623818e290f9b3c55bcf305b428b4d4e6f0784b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/605dab44d703bc018f6fee10b623818e290f9b3c55bcf305b428b4d4e6f0784b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/605dab44d703bc018f6fee10b623818e290f9b3c55bcf305b428b4d4e6f0784b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/605dab44d703bc018f6fee10b623818e290f9b3c55bcf305b428b4d4e6f0784b/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.jbpltj supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:48 compute-0 podman[100177]: 2025-10-11 04:27:48.545250574 +0000 UTC m=+0.042989145 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:27:48 compute-0 podman[100177]: 2025-10-11 04:27:48.666503856 +0000 UTC m=+0.164242467 container init 09dc062a447a63cf75cf20247e5a64ee60183024546dec8357770bdd9474fe97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mds-cephfs-compute-0-jbpltj, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 11 04:27:48 compute-0 podman[100177]: 2025-10-11 04:27:48.679233032 +0000 UTC m=+0.176971593 container start 09dc062a447a63cf75cf20247e5a64ee60183024546dec8357770bdd9474fe97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mds-cephfs-compute-0-jbpltj, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:27:48 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 33 pg[8.0( empty local-lis/les=32/33 n=0 ec=32/32 lis/c=0/0 les/c/f=0/0/0 sis=32) [1] r=0 lpr=32 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:27:48 compute-0 bash[100177]: 09dc062a447a63cf75cf20247e5a64ee60183024546dec8357770bdd9474fe97
Oct 11 04:27:48 compute-0 systemd[1]: Started Ceph mds.cephfs.compute-0.jbpltj for 166d0489-2ae7-59eb-961c-c1b5cda4b45a.
Oct 11 04:27:48 compute-0 sudo[99895]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:48 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:27:48 compute-0 ceph-mds[100196]: set uid:gid to 167:167 (ceph:ceph)
Oct 11 04:27:48 compute-0 ceph-mds[100196]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mds, pid 2
Oct 11 04:27:48 compute-0 ceph-mds[100196]: main not setting numa affinity
Oct 11 04:27:48 compute-0 ceph-mds[100196]: pidfile_write: ignore empty --pid-file
Oct 11 04:27:48 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mds-cephfs-compute-0-jbpltj[100192]: starting mds.cephfs.compute-0.jbpltj at 
Oct 11 04:27:48 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:48 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:27:48 compute-0 ceph-mds[100196]: mds.cephfs.compute-0.jbpltj Updating MDS map to version 2 from mon.0
Oct 11 04:27:48 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:48 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Oct 11 04:27:48 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:48 compute-0 ceph-mgr[74542]: [progress INFO root] complete: finished ev 907aec57-6444-41a5-b644-5dd6071133ac (Updating mds.cephfs deployment (+1 -> 1))
Oct 11 04:27:48 compute-0 ceph-mgr[74542]: [progress INFO root] Completed event 907aec57-6444-41a5-b644-5dd6071133ac (Updating mds.cephfs deployment (+1 -> 1)) in 2 seconds
Oct 11 04:27:48 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0) v1
Oct 11 04:27:48 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:48 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Oct 11 04:27:48 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:48 compute-0 sudo[100217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:27:48 compute-0 sudo[100217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:48 compute-0 sudo[100217]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:48 compute-0 sudo[100286]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hluoxnncschwgfgcusdxvsmuwjwcojlo ; /usr/bin/python3'
Oct 11 04:27:48 compute-0 sudo[100286]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:27:48 compute-0 sudo[100248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 04:27:48 compute-0 sudo[100248]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:49 compute-0 sudo[100248]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:49 compute-0 sudo[100293]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:27:49 compute-0 sudo[100293]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:49 compute-0 sudo[100293]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:49 compute-0 python3[100291]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:27:49 compute-0 sudo[100318]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:27:49 compute-0 sudo[100318]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:49 compute-0 sudo[100318]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:49 compute-0 ceph-mon[74243]: from='client.14265 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 11 04:27:49 compute-0 ceph-mon[74243]: pgmap v79: 8 pgs: 1 unknown, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:27:49 compute-0 ceph-mon[74243]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 11 04:27:49 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/3354299006' entity='client.rgw.rgw.compute-0.xojlng' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Oct 11 04:27:49 compute-0 ceph-mon[74243]: osdmap e33: 3 total, 3 up, 3 in
Oct 11 04:27:49 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:49 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:49 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:49 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:49 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:49 compute-0 podman[100341]: 2025-10-11 04:27:49.241320782 +0000 UTC m=+0.050390839 container create a45ac052ba3dcd952848286ffa556d14a441a75167794c0d1add25fa94e9cbf5 (image=quay.io/ceph/ceph:v18, name=gracious_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 11 04:27:49 compute-0 systemd[1]: Started libpod-conmon-a45ac052ba3dcd952848286ffa556d14a441a75167794c0d1add25fa94e9cbf5.scope.
Oct 11 04:27:49 compute-0 sudo[100347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:27:49 compute-0 sudo[100347]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:49 compute-0 sudo[100347]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:49 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:27:49 compute-0 podman[100341]: 2025-10-11 04:27:49.223531431 +0000 UTC m=+0.032601488 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:27:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f4885569e52f6a687685358a36cb539f920894ccff66382a1411d0554cddefd/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f4885569e52f6a687685358a36cb539f920894ccff66382a1411d0554cddefd/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:49 compute-0 podman[100341]: 2025-10-11 04:27:49.338088558 +0000 UTC m=+0.147158615 container init a45ac052ba3dcd952848286ffa556d14a441a75167794c0d1add25fa94e9cbf5 (image=quay.io/ceph/ceph:v18, name=gracious_mclaren, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 11 04:27:49 compute-0 podman[100341]: 2025-10-11 04:27:49.353698815 +0000 UTC m=+0.162768872 container start a45ac052ba3dcd952848286ffa556d14a441a75167794c0d1add25fa94e9cbf5 (image=quay.io/ceph/ceph:v18, name=gracious_mclaren, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:27:49 compute-0 podman[100341]: 2025-10-11 04:27:49.356877994 +0000 UTC m=+0.165948051 container attach a45ac052ba3dcd952848286ffa556d14a441a75167794c0d1add25fa94e9cbf5 (image=quay.io/ceph/ceph:v18, name=gracious_mclaren, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 11 04:27:49 compute-0 sudo[100386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 11 04:27:49 compute-0 sudo[100386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:49 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Oct 11 04:27:49 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e34 e34: 3 total, 3 up, 3 in
Oct 11 04:27:49 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 3 up, 3 in
Oct 11 04:27:49 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0) v1
Oct 11 04:27:49 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3354299006' entity='client.rgw.rgw.compute-0.xojlng' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct 11 04:27:49 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).mds e3 new map
Oct 11 04:27:49 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).mds e3 print_map
                                           e3
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-10-11T04:27:33.571866+0000
                                           modified        2025-10-11T04:27:33.571921+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.jbpltj{-1:14267} state up:standby seq 1 addr [v2:192.168.122.100:6814/207512141,v1:192.168.122.100:6815/207512141] compat {c=[1],r=[1],i=[7ff]}]
Oct 11 04:27:49 compute-0 ceph-mds[100196]: mds.cephfs.compute-0.jbpltj Updating MDS map to version 3 from mon.0
Oct 11 04:27:49 compute-0 ceph-mds[100196]: mds.cephfs.compute-0.jbpltj Monitors have assigned me to become a standby.
Oct 11 04:27:49 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/207512141,v1:192.168.122.100:6815/207512141] up:boot
Oct 11 04:27:49 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.100:6814/207512141,v1:192.168.122.100:6815/207512141] as mds.0
Oct 11 04:27:49 compute-0 ceph-mon[74243]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.jbpltj assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Oct 11 04:27:49 compute-0 ceph-mon[74243]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Oct 11 04:27:49 compute-0 ceph-mon[74243]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Oct 11 04:27:49 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
Oct 11 04:27:49 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.jbpltj"} v 0) v1
Oct 11 04:27:49 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.jbpltj"}]: dispatch
Oct 11 04:27:49 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).mds e3 all = 0
Oct 11 04:27:49 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).mds e4 new map
Oct 11 04:27:49 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).mds e4 print_map
                                           e4
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        4
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-10-11T04:27:33.571866+0000
                                           modified        2025-10-11T04:27:49.757171+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=14267}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           [mds.cephfs.compute-0.jbpltj{0:14267} state up:creating seq 1 addr [v2:192.168.122.100:6814/207512141,v1:192.168.122.100:6815/207512141] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
Oct 11 04:27:49 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.jbpltj=up:creating}
Oct 11 04:27:49 compute-0 ceph-mds[100196]: mds.cephfs.compute-0.jbpltj Updating MDS map to version 4 from mon.0
Oct 11 04:27:49 compute-0 ceph-mds[100196]: mds.0.4 handle_mds_map i am now mds.0.4
Oct 11 04:27:49 compute-0 ceph-mds[100196]: mds.0.4 handle_mds_map state change up:standby --> up:creating
Oct 11 04:27:49 compute-0 ceph-mds[100196]: mds.0.cache creating system inode with ino:0x1
Oct 11 04:27:49 compute-0 ceph-mds[100196]: mds.0.cache creating system inode with ino:0x100
Oct 11 04:27:49 compute-0 ceph-mds[100196]: mds.0.cache creating system inode with ino:0x600
Oct 11 04:27:49 compute-0 ceph-mds[100196]: mds.0.cache creating system inode with ino:0x601
Oct 11 04:27:49 compute-0 ceph-mds[100196]: mds.0.cache creating system inode with ino:0x602
Oct 11 04:27:49 compute-0 ceph-mds[100196]: mds.0.cache creating system inode with ino:0x603
Oct 11 04:27:49 compute-0 ceph-mds[100196]: mds.0.cache creating system inode with ino:0x604
Oct 11 04:27:49 compute-0 ceph-mds[100196]: mds.0.cache creating system inode with ino:0x605
Oct 11 04:27:49 compute-0 ceph-mds[100196]: mds.0.cache creating system inode with ino:0x606
Oct 11 04:27:49 compute-0 ceph-mds[100196]: mds.0.cache creating system inode with ino:0x607
Oct 11 04:27:49 compute-0 ceph-mds[100196]: mds.0.cache creating system inode with ino:0x608
Oct 11 04:27:49 compute-0 ceph-mds[100196]: mds.0.cache creating system inode with ino:0x609
Oct 11 04:27:49 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e34 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:27:49 compute-0 ceph-mds[100196]: mds.0.4 creating_done
Oct 11 04:27:49 compute-0 ceph-mon[74243]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.jbpltj is now active in filesystem cephfs as rank 0
Oct 11 04:27:49 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 34 pg[9.0( empty local-lis/les=0/0 n=0 ec=34/34 lis/c=0/0 les/c/f=0/0/0 sis=34) [1] r=0 lpr=34 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:27:49 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.14269 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 11 04:27:49 compute-0 gracious_mclaren[100382]: 
Oct 11 04:27:49 compute-0 gracious_mclaren[100382]: [{"container_id": "3e54e7d387a6", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "0.60%", "created": "2025-10-11T04:26:23.647869Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "events": ["2025-10-11T04:26:23.709382Z daemon:crash.compute-0 [INFO] \"Deployed crash.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-11T04:27:38.122591Z", "memory_usage": 11618222, "ports": [], "service_name": "crash", "started": "2025-10-11T04:26:23.505454Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a@crash.compute-0", "version": "18.2.7"}, {"daemon_id": "cephfs.compute-0.jbpltj", "daemon_name": "mds.cephfs.compute-0.jbpltj", "daemon_type": "mds", "events": ["2025-10-11T04:27:48.753298Z daemon:mds.cephfs.compute-0.jbpltj [INFO] \"Deployed mds.cephfs.compute-0.jbpltj on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "ports": [], "service_name": "mds.cephfs", "status": 2, "status_desc": "starting"}, {"container_id": "bf5fc967f473", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "30.99%", "created": "2025-10-11T04:25:12.426463Z", "daemon_id": "compute-0.phooxi", "daemon_name": "mgr.compute-0.phooxi", "daemon_type": "mgr", "events": ["2025-10-11T04:27:17.153969Z daemon:mgr.compute-0.phooxi [INFO] \"Reconfigured mgr.compute-0.phooxi on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-11T04:27:38.122533Z", "memory_usage": 547251814, "ports": [9283, 8765], "service_name": "mgr", "started": "2025-10-11T04:25:12.282787Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a@mgr.compute-0.phooxi", "version": "18.2.7"}, {"container_id": "9e6c9a4e99dc", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "2.27%", "created": "2025-10-11T04:25:06.155287Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "events": ["2025-10-11T04:27:16.266018Z daemon:mon.compute-0 [INFO] \"Reconfigured mon.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-11T04:27:38.122450Z", "memory_request": 2147483648, "memory_usage": 37287362, "ports": [], "service_name": "mon", "started": "2025-10-11T04:25:09.558775Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a@mon.compute-0", "version": 
"18.2.7"}, {"container_id": "b9a5ba754576", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "2.07%", "created": "2025-10-11T04:26:49.371509Z", "daemon_id": "0", "daemon_name": "osd.0", "daemon_type": "osd", "events": ["2025-10-11T04:26:49.424958Z daemon:osd.0 [INFO] \"Deployed osd.0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-11T04:27:38.122649Z", "memory_request": 4294967296, "memory_usage": 55637442, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-10-11T04:26:49.267987Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a@osd.0", "version": "18.2.7"}, {"container_id": "f8a086cc51d4", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "2.37%", "created": "2025-10-11T04:26:54.422635Z", "daemon_id": "1", "daemon_name": "osd.1", "daemon_type": "osd", "events": ["2025-10-11T04:26:54.568500Z daemon:osd.1 [INFO] \"Deployed osd.1 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-11T04:27:38.122705Z", "memory_request": 4294967296, "memory_usage": 57283706, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-10-11T04:26:54.279374Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a@osd.1", "version": "18.2.7"}, {"container_id": "59da2b9b4ac8", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "2.67%", "created": "2025-10-11T04:26:59.842770Z", "daemon_id": "2", "daemon_name": "osd.2", "daemon_type": "osd", "events": ["2025-10-11T04:26:59.954432Z daemon:osd.2 [INFO] \"Deployed osd.2 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-11T04:27:38.122760Z", "memory_request": 4294967296, "memory_usage": 56518246, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-10-11T04:26:59.663087Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a@osd.2", "version": "18.2.7"}, {"daemon_id": "rgw.compute-0.xojlng", "daemon_name": "rgw.rgw.compute-0.xojlng", "daemon_type": "rgw", "events": ["2025-10-11T04:27:46.593805Z daemon:rgw.rgw.compute-0.xojlng [INFO] \"Deployed rgw.rgw.compute-0.xojlng on host 'compute-0'\""], "hostname": "compute-0", "ip": "192.168.122.100", "is_active": false, "ports": [8082], "service_name": 
"rgw.rgw", "status": 2, "status_desc": "starting"}]
Oct 11 04:27:49 compute-0 systemd[1]: libpod-a45ac052ba3dcd952848286ffa556d14a441a75167794c0d1add25fa94e9cbf5.scope: Deactivated successfully.
Oct 11 04:27:49 compute-0 podman[100341]: 2025-10-11 04:27:49.982726193 +0000 UTC m=+0.791796260 container died a45ac052ba3dcd952848286ffa556d14a441a75167794c0d1add25fa94e9cbf5 (image=quay.io/ceph/ceph:v18, name=gracious_mclaren, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:27:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-0f4885569e52f6a687685358a36cb539f920894ccff66382a1411d0554cddefd-merged.mount: Deactivated successfully.
Oct 11 04:27:50 compute-0 podman[100341]: 2025-10-11 04:27:50.041829787 +0000 UTC m=+0.850899834 container remove a45ac052ba3dcd952848286ffa556d14a441a75167794c0d1add25fa94e9cbf5 (image=quay.io/ceph/ceph:v18, name=gracious_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 11 04:27:50 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v82: 9 pgs: 2 unknown, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:27:50 compute-0 systemd[1]: libpod-conmon-a45ac052ba3dcd952848286ffa556d14a441a75167794c0d1add25fa94e9cbf5.scope: Deactivated successfully.
Oct 11 04:27:50 compute-0 sudo[100286]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:50 compute-0 podman[100527]: 2025-10-11 04:27:50.121782857 +0000 UTC m=+0.090683457 container exec 9e6c9a4e99dcfe74f2acde1b3251aae6bd833ab8c3a16ed08813556f7d4dbff5 (image=quay.io/ceph/ceph:v18, name=ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:27:50 compute-0 podman[100527]: 2025-10-11 04:27:50.242847955 +0000 UTC m=+0.211748515 container exec_died 9e6c9a4e99dcfe74f2acde1b3251aae6bd833ab8c3a16ed08813556f7d4dbff5 (image=quay.io/ceph/ceph:v18, name=ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 11 04:27:50 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Oct 11 04:27:50 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3354299006' entity='client.rgw.rgw.compute-0.xojlng' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Oct 11 04:27:50 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e35 e35: 3 total, 3 up, 3 in
Oct 11 04:27:50 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 3 up, 3 in
Oct 11 04:27:50 compute-0 ceph-mon[74243]: osdmap e34: 3 total, 3 up, 3 in
Oct 11 04:27:50 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/3354299006' entity='client.rgw.rgw.compute-0.xojlng' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct 11 04:27:50 compute-0 ceph-mon[74243]: mds.? [v2:192.168.122.100:6814/207512141,v1:192.168.122.100:6815/207512141] up:boot
Oct 11 04:27:50 compute-0 ceph-mon[74243]: daemon mds.cephfs.compute-0.jbpltj assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Oct 11 04:27:50 compute-0 ceph-mon[74243]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Oct 11 04:27:50 compute-0 ceph-mon[74243]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Oct 11 04:27:50 compute-0 ceph-mon[74243]: fsmap cephfs:0 1 up:standby
Oct 11 04:27:50 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.jbpltj"}]: dispatch
Oct 11 04:27:50 compute-0 ceph-mon[74243]: fsmap cephfs:1 {0=cephfs.compute-0.jbpltj=up:creating}
Oct 11 04:27:50 compute-0 ceph-mon[74243]: daemon mds.cephfs.compute-0.jbpltj is now active in filesystem cephfs as rank 0
Oct 11 04:27:50 compute-0 ceph-mon[74243]: from='client.14269 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 11 04:27:50 compute-0 ceph-mon[74243]: pgmap v82: 9 pgs: 2 unknown, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:27:50 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 35 pg[9.0( empty local-lis/les=34/35 n=0 ec=34/34 lis/c=0/0 les/c/f=0/0/0 sis=34) [1] r=0 lpr=34 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:27:50 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).mds e5 new map
Oct 11 04:27:50 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).mds e5 print_map
                                           e5
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-10-11T04:27:33.571866+0000
                                           modified        2025-10-11T04:27:50.766559+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=14267}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           [mds.cephfs.compute-0.jbpltj{0:14267} state up:active seq 2 join_fscid=1 addr [v2:192.168.122.100:6814/207512141,v1:192.168.122.100:6815/207512141] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
Oct 11 04:27:50 compute-0 ceph-mds[100196]: mds.cephfs.compute-0.jbpltj Updating MDS map to version 5 from mon.0
Oct 11 04:27:50 compute-0 ceph-mds[100196]: mds.0.4 handle_mds_map i am now mds.0.4
Oct 11 04:27:50 compute-0 ceph-mds[100196]: mds.0.4 handle_mds_map state change up:creating --> up:active
Oct 11 04:27:50 compute-0 ceph-mds[100196]: mds.0.4 recovery_done -- successful recovery!
Oct 11 04:27:50 compute-0 ceph-mds[100196]: mds.0.4 active_start
Oct 11 04:27:50 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/207512141,v1:192.168.122.100:6815/207512141] up:active
Oct 11 04:27:50 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.jbpltj=up:active}
Oct 11 04:27:50 compute-0 sudo[100679]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbswbzyenhffvyyhuykqjqxhhidukhxo ; /usr/bin/python3'
Oct 11 04:27:50 compute-0 sudo[100679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:27:51 compute-0 python3[100688]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:27:51 compute-0 ceph-mgr[74542]: [progress INFO root] Writing back 5 completed events
Oct 11 04:27:51 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct 11 04:27:51 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:51 compute-0 sudo[100386]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:51 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:27:51 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:51 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:27:51 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:51 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:27:51 compute-0 podman[100714]: 2025-10-11 04:27:51.22627222 +0000 UTC m=+0.073142653 container create b6e9e34415297d6185f85637162a3a4df32e2573933f431a2a13dfe6204d0e2a (image=quay.io/ceph/ceph:v18, name=dazzling_booth, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:27:51 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:27:51 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:27:51 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:27:51 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:27:51 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:51 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev 4c1ac0ff-cad7-406b-b725-20216634c4d7 does not exist
Oct 11 04:27:51 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev d18a832a-5047-4499-ba8a-a8d6a4651dd2 does not exist
Oct 11 04:27:51 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev 6dbf5f47-b66b-42bf-826c-fe7fd4d7f98c does not exist
Oct 11 04:27:51 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:27:51 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:27:51 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:27:51 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:27:51 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:27:51 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:27:51 compute-0 systemd[1]: Started libpod-conmon-b6e9e34415297d6185f85637162a3a4df32e2573933f431a2a13dfe6204d0e2a.scope.
Oct 11 04:27:51 compute-0 podman[100714]: 2025-10-11 04:27:51.190818362 +0000 UTC m=+0.037688835 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:27:51 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:27:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32767a72c3fe18b3651815103b6f857e5edbba1665a19202907d09e897b7aeba/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32767a72c3fe18b3651815103b6f857e5edbba1665a19202907d09e897b7aeba/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:51 compute-0 podman[100714]: 2025-10-11 04:27:51.324109653 +0000 UTC m=+0.170980176 container init b6e9e34415297d6185f85637162a3a4df32e2573933f431a2a13dfe6204d0e2a (image=quay.io/ceph/ceph:v18, name=dazzling_booth, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:27:51 compute-0 sudo[100730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:27:51 compute-0 sudo[100730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:51 compute-0 sudo[100730]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:51 compute-0 podman[100714]: 2025-10-11 04:27:51.336598222 +0000 UTC m=+0.183468695 container start b6e9e34415297d6185f85637162a3a4df32e2573933f431a2a13dfe6204d0e2a (image=quay.io/ceph/ceph:v18, name=dazzling_booth, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 11 04:27:51 compute-0 podman[100714]: 2025-10-11 04:27:51.340892538 +0000 UTC m=+0.187763001 container attach b6e9e34415297d6185f85637162a3a4df32e2573933f431a2a13dfe6204d0e2a (image=quay.io/ceph/ceph:v18, name=dazzling_booth, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 11 04:27:51 compute-0 sudo[100759]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:27:51 compute-0 sudo[100759]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:51 compute-0 sudo[100759]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:51 compute-0 sudo[100784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:27:51 compute-0 sudo[100784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:51 compute-0 sudo[100784]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:51 compute-0 sudo[100809]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 04:27:51 compute-0 sudo[100809]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:51 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Oct 11 04:27:51 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Oct 11 04:27:51 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Oct 11 04:27:51 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Oct 11 04:27:51 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3354299006' entity='client.rgw.rgw.compute-0.xojlng' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct 11 04:27:51 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/3354299006' entity='client.rgw.rgw.compute-0.xojlng' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Oct 11 04:27:51 compute-0 ceph-mon[74243]: osdmap e35: 3 total, 3 up, 3 in
Oct 11 04:27:51 compute-0 ceph-mon[74243]: mds.? [v2:192.168.122.100:6814/207512141,v1:192.168.122.100:6815/207512141] up:active
Oct 11 04:27:51 compute-0 ceph-mon[74243]: fsmap cephfs:1 {0=cephfs.compute-0.jbpltj=up:active}
Oct 11 04:27:51 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:51 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:51 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:51 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:27:51 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:27:51 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:51 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:27:51 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:27:51 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:27:51 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 36 pg[10.0( empty local-lis/les=0/0 n=0 ec=36/36 lis/c=0/0 les/c/f=0/0/0 sis=36) [2] r=0 lpr=36 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:27:51 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Oct 11 04:27:51 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/458379048' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 11 04:27:51 compute-0 dazzling_booth[100740]: 
Oct 11 04:27:51 compute-0 dazzling_booth[100740]: {"fsid":"166d0489-2ae7-59eb-961c-c1b5cda4b45a","health":{"status":"HEALTH_WARN","checks":{"POOL_APP_NOT_ENABLED":{"severity":"HEALTH_WARN","summary":{"message":"1 pool(s) do not have an application enabled","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":162,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":36,"num_osds":3,"num_up_osds":3,"osd_up_since":1760156827,"num_in_osds":3,"osd_in_since":1760156798,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":7},{"state_name":"unknown","count":2}],"num_pgs":9,"num_pools":9,"num_objects":2,"data_bytes":459280,"bytes_used":83845120,"bytes_avail":64328081408,"bytes_total":64411926528,"unknown_pgs_ratio":0.2222222238779068},"fsmap":{"epoch":5,"id":1,"up":1,"in":1,"max":1,"by_rank":[{"filesystem_id":1,"rank":0,"name":"cephfs.compute-0.jbpltj","status":"up:active","gid":14267}],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-10-11T04:26:58.031384+0000","services":{"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
Oct 11 04:27:51 compute-0 systemd[1]: libpod-b6e9e34415297d6185f85637162a3a4df32e2573933f431a2a13dfe6204d0e2a.scope: Deactivated successfully.
Oct 11 04:27:51 compute-0 podman[100879]: 2025-10-11 04:27:51.939684048 +0000 UTC m=+0.041569491 container died b6e9e34415297d6185f85637162a3a4df32e2573933f431a2a13dfe6204d0e2a (image=quay.io/ceph/ceph:v18, name=dazzling_booth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Oct 11 04:27:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-32767a72c3fe18b3651815103b6f857e5edbba1665a19202907d09e897b7aeba-merged.mount: Deactivated successfully.
Oct 11 04:27:52 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v85: 10 pgs: 1 unknown, 9 active+clean; 452 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 4.5 KiB/s wr, 12 op/s
Oct 11 04:27:52 compute-0 podman[100895]: 2025-10-11 04:27:52.050214405 +0000 UTC m=+0.095471885 container remove b6e9e34415297d6185f85637162a3a4df32e2573933f431a2a13dfe6204d0e2a (image=quay.io/ceph/ceph:v18, name=dazzling_booth, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:27:52 compute-0 systemd[1]: libpod-conmon-b6e9e34415297d6185f85637162a3a4df32e2573933f431a2a13dfe6204d0e2a.scope: Deactivated successfully.
Oct 11 04:27:52 compute-0 podman[100901]: 2025-10-11 04:27:52.070963399 +0000 UTC m=+0.084095274 container create 6de0404b21ad17aa82a12ea4d035023e6b23ee36f6396a35f8d02fe71d06cd02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_tesla, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:27:52 compute-0 sudo[100679]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:52 compute-0 systemd[1]: Started libpod-conmon-6de0404b21ad17aa82a12ea4d035023e6b23ee36f6396a35f8d02fe71d06cd02.scope.
Oct 11 04:27:52 compute-0 podman[100901]: 2025-10-11 04:27:52.028803785 +0000 UTC m=+0.041935740 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:27:52 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:27:52 compute-0 podman[100901]: 2025-10-11 04:27:52.160034764 +0000 UTC m=+0.173166659 container init 6de0404b21ad17aa82a12ea4d035023e6b23ee36f6396a35f8d02fe71d06cd02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_tesla, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:27:52 compute-0 podman[100901]: 2025-10-11 04:27:52.168464803 +0000 UTC m=+0.181596698 container start 6de0404b21ad17aa82a12ea4d035023e6b23ee36f6396a35f8d02fe71d06cd02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:27:52 compute-0 podman[100901]: 2025-10-11 04:27:52.172087942 +0000 UTC m=+0.185219817 container attach 6de0404b21ad17aa82a12ea4d035023e6b23ee36f6396a35f8d02fe71d06cd02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_tesla, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 11 04:27:52 compute-0 stoic_tesla[100927]: 167 167
Oct 11 04:27:52 compute-0 systemd[1]: libpod-6de0404b21ad17aa82a12ea4d035023e6b23ee36f6396a35f8d02fe71d06cd02.scope: Deactivated successfully.
Oct 11 04:27:52 compute-0 podman[100901]: 2025-10-11 04:27:52.175974689 +0000 UTC m=+0.189106614 container died 6de0404b21ad17aa82a12ea4d035023e6b23ee36f6396a35f8d02fe71d06cd02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_tesla, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 11 04:27:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-2fa9b20b4968fc608c5e4b24474563dee7f2a01ea742b802e2b34e268a6ca650-merged.mount: Deactivated successfully.
Oct 11 04:27:52 compute-0 podman[100901]: 2025-10-11 04:27:52.223568207 +0000 UTC m=+0.236700112 container remove 6de0404b21ad17aa82a12ea4d035023e6b23ee36f6396a35f8d02fe71d06cd02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_tesla, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:27:52 compute-0 systemd[1]: libpod-conmon-6de0404b21ad17aa82a12ea4d035023e6b23ee36f6396a35f8d02fe71d06cd02.scope: Deactivated successfully.
Oct 11 04:27:52 compute-0 podman[100951]: 2025-10-11 04:27:52.465976481 +0000 UTC m=+0.075572643 container create 66270fa0565ea1e99ee439b9abb84ed4baa06564da268612ffff54475c8d412c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_visvesvaraya, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:27:52 compute-0 podman[100951]: 2025-10-11 04:27:52.434253765 +0000 UTC m=+0.043849957 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:27:52 compute-0 systemd[1]: Started libpod-conmon-66270fa0565ea1e99ee439b9abb84ed4baa06564da268612ffff54475c8d412c.scope.
Oct 11 04:27:52 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:27:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cfcde513ad363065b5b50e86a58ccc09162e07c906075c01c64a7feae93784b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cfcde513ad363065b5b50e86a58ccc09162e07c906075c01c64a7feae93784b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cfcde513ad363065b5b50e86a58ccc09162e07c906075c01c64a7feae93784b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cfcde513ad363065b5b50e86a58ccc09162e07c906075c01c64a7feae93784b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cfcde513ad363065b5b50e86a58ccc09162e07c906075c01c64a7feae93784b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:52 compute-0 podman[100951]: 2025-10-11 04:27:52.591837087 +0000 UTC m=+0.201433309 container init 66270fa0565ea1e99ee439b9abb84ed4baa06564da268612ffff54475c8d412c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_visvesvaraya, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:27:52 compute-0 podman[100951]: 2025-10-11 04:27:52.60932344 +0000 UTC m=+0.218919562 container start 66270fa0565ea1e99ee439b9abb84ed4baa06564da268612ffff54475c8d412c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_visvesvaraya, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 11 04:27:52 compute-0 podman[100951]: 2025-10-11 04:27:52.613013081 +0000 UTC m=+0.222609323 container attach 66270fa0565ea1e99ee439b9abb84ed4baa06564da268612ffff54475c8d412c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_visvesvaraya, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 11 04:27:52 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Oct 11 04:27:52 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3354299006' entity='client.rgw.rgw.compute-0.xojlng' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct 11 04:27:52 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Oct 11 04:27:52 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Oct 11 04:27:52 compute-0 ceph-mon[74243]: osdmap e36: 3 total, 3 up, 3 in
Oct 11 04:27:52 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/3354299006' entity='client.rgw.rgw.compute-0.xojlng' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct 11 04:27:52 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/458379048' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 11 04:27:52 compute-0 ceph-mon[74243]: pgmap v85: 10 pgs: 1 unknown, 9 active+clean; 452 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 4.5 KiB/s wr, 12 op/s
Oct 11 04:27:52 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/3354299006' entity='client.rgw.rgw.compute-0.xojlng' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct 11 04:27:52 compute-0 ceph-mon[74243]: osdmap e37: 3 total, 3 up, 3 in
Oct 11 04:27:52 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 37 pg[10.0( empty local-lis/les=36/37 n=0 ec=36/36 lis/c=0/0 les/c/f=0/0/0 sis=36) [2] r=0 lpr=36 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:27:52 compute-0 sudo[101009]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwcpfvwrbtnnsyaoiyusqrwenrxrvhtv ; /usr/bin/python3'
Oct 11 04:27:52 compute-0 sudo[101009]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:27:53 compute-0 python3[101011]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:27:53 compute-0 podman[101012]: 2025-10-11 04:27:53.139248354 +0000 UTC m=+0.058345616 container create 981c8694f9c3b446bdc3d00fc84d2dedfbe79d72ea133d021c6db66a1b5e1bc5 (image=quay.io/ceph/ceph:v18, name=optimistic_newton, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 11 04:27:53 compute-0 systemd[1]: Started libpod-conmon-981c8694f9c3b446bdc3d00fc84d2dedfbe79d72ea133d021c6db66a1b5e1bc5.scope.
Oct 11 04:27:53 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:27:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5169d4b43e35b87383cf6f4cd977098b9c2094deecb104b19053cd1014e0f50d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5169d4b43e35b87383cf6f4cd977098b9c2094deecb104b19053cd1014e0f50d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:53 compute-0 podman[101012]: 2025-10-11 04:27:53.107810376 +0000 UTC m=+0.026907678 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:27:53 compute-0 podman[101012]: 2025-10-11 04:27:53.214904968 +0000 UTC m=+0.134002270 container init 981c8694f9c3b446bdc3d00fc84d2dedfbe79d72ea133d021c6db66a1b5e1bc5 (image=quay.io/ceph/ceph:v18, name=optimistic_newton, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:27:53 compute-0 podman[101012]: 2025-10-11 04:27:53.219632955 +0000 UTC m=+0.138730207 container start 981c8694f9c3b446bdc3d00fc84d2dedfbe79d72ea133d021c6db66a1b5e1bc5 (image=quay.io/ceph/ceph:v18, name=optimistic_newton, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:27:53 compute-0 podman[101012]: 2025-10-11 04:27:53.222635189 +0000 UTC m=+0.141732491 container attach 981c8694f9c3b446bdc3d00fc84d2dedfbe79d72ea133d021c6db66a1b5e1bc5 (image=quay.io/ceph/ceph:v18, name=optimistic_newton, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 11 04:27:53 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Oct 11 04:27:53 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Oct 11 04:27:53 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Oct 11 04:27:53 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Oct 11 04:27:53 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4062505537' entity='client.rgw.rgw.compute-0.xojlng' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct 11 04:27:53 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 38 pg[11.0( empty local-lis/les=0/0 n=0 ec=38/38 lis/c=0/0 les/c/f=0/0/0 sis=38) [1] r=0 lpr=38 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:27:53 compute-0 cranky_visvesvaraya[100968]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:27:53 compute-0 cranky_visvesvaraya[100968]: --> relative data size: 1.0
Oct 11 04:27:53 compute-0 cranky_visvesvaraya[100968]: --> All data devices are unavailable
Oct 11 04:27:53 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Oct 11 04:27:53 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3706612396' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 11 04:27:53 compute-0 optimistic_newton[101027]: 
Oct 11 04:27:53 compute-0 systemd[1]: libpod-66270fa0565ea1e99ee439b9abb84ed4baa06564da268612ffff54475c8d412c.scope: Deactivated successfully.
Oct 11 04:27:53 compute-0 podman[100951]: 2025-10-11 04:27:53.763775951 +0000 UTC m=+1.373372093 container died 66270fa0565ea1e99ee439b9abb84ed4baa06564da268612ffff54475c8d412c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_visvesvaraya, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 11 04:27:53 compute-0 systemd[1]: libpod-66270fa0565ea1e99ee439b9abb84ed4baa06564da268612ffff54475c8d412c.scope: Consumed 1.096s CPU time.
Oct 11 04:27:53 compute-0 systemd[1]: libpod-981c8694f9c3b446bdc3d00fc84d2dedfbe79d72ea133d021c6db66a1b5e1bc5.scope: Deactivated successfully.
Oct 11 04:27:53 compute-0 conmon[101027]: conmon 981c8694f9c3b446bdc3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-981c8694f9c3b446bdc3d00fc84d2dedfbe79d72ea133d021c6db66a1b5e1bc5.scope/container/memory.events
Oct 11 04:27:53 compute-0 optimistic_newton[101027]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_api_version","value":"3","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_all
ow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"6","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr_standby_modules","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mds.cephfs","name":"mds_join_fs","value":"cephfs","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-0.xojlng","name":"rgw_frontends","value":"beast endpoint=192.168.122.100:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
Oct 11 04:27:53 compute-0 podman[101012]: 2025-10-11 04:27:53.777116291 +0000 UTC m=+0.696213553 container died 981c8694f9c3b446bdc3d00fc84d2dedfbe79d72ea133d021c6db66a1b5e1bc5 (image=quay.io/ceph/ceph:v18, name=optimistic_newton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 11 04:27:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-7cfcde513ad363065b5b50e86a58ccc09162e07c906075c01c64a7feae93784b-merged.mount: Deactivated successfully.
Oct 11 04:27:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-5169d4b43e35b87383cf6f4cd977098b9c2094deecb104b19053cd1014e0f50d-merged.mount: Deactivated successfully.
Oct 11 04:27:53 compute-0 podman[100951]: 2025-10-11 04:27:53.832176875 +0000 UTC m=+1.441773017 container remove 66270fa0565ea1e99ee439b9abb84ed4baa06564da268612ffff54475c8d412c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_visvesvaraya, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 11 04:27:53 compute-0 systemd[1]: libpod-conmon-66270fa0565ea1e99ee439b9abb84ed4baa06564da268612ffff54475c8d412c.scope: Deactivated successfully.
Oct 11 04:27:53 compute-0 podman[101012]: 2025-10-11 04:27:53.839275641 +0000 UTC m=+0.758372893 container remove 981c8694f9c3b446bdc3d00fc84d2dedfbe79d72ea133d021c6db66a1b5e1bc5 (image=quay.io/ceph/ceph:v18, name=optimistic_newton, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:27:53 compute-0 systemd[1]: libpod-conmon-981c8694f9c3b446bdc3d00fc84d2dedfbe79d72ea133d021c6db66a1b5e1bc5.scope: Deactivated successfully.
Oct 11 04:27:53 compute-0 sudo[101009]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:53 compute-0 sudo[100809]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:53 compute-0 sudo[101098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:27:53 compute-0 sudo[101098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:53 compute-0 sudo[101098]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:53 compute-0 sudo[101123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:27:53 compute-0 sudo[101123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:53 compute-0 sudo[101123]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:54 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v88: 11 pgs: 1 unknown, 10 active+clean; 452 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 4.5 KiB/s wr, 12 op/s
Oct 11 04:27:54 compute-0 sudo[101148]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:27:54 compute-0 sudo[101148]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:54 compute-0 sudo[101148]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:54 compute-0 sudo[101173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -- lvm list --format json
Oct 11 04:27:54 compute-0 sudo[101173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:54 compute-0 podman[101238]: 2025-10-11 04:27:54.556835471 +0000 UTC m=+0.064890408 container create b40fe76bf1a6d47033ad693d2c442f5d98e95419760fbab5185a469bd3e7091b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_kilby, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 11 04:27:54 compute-0 systemd[1]: Started libpod-conmon-b40fe76bf1a6d47033ad693d2c442f5d98e95419760fbab5185a469bd3e7091b.scope.
Oct 11 04:27:54 compute-0 podman[101238]: 2025-10-11 04:27:54.529856023 +0000 UTC m=+0.037911010 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:27:54 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:27:54 compute-0 podman[101238]: 2025-10-11 04:27:54.657197127 +0000 UTC m=+0.165252104 container init b40fe76bf1a6d47033ad693d2c442f5d98e95419760fbab5185a469bd3e7091b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_kilby, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 11 04:27:54 compute-0 podman[101238]: 2025-10-11 04:27:54.672717161 +0000 UTC m=+0.180772078 container start b40fe76bf1a6d47033ad693d2c442f5d98e95419760fbab5185a469bd3e7091b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_kilby, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:27:54 compute-0 podman[101238]: 2025-10-11 04:27:54.677139481 +0000 UTC m=+0.185194418 container attach b40fe76bf1a6d47033ad693d2c442f5d98e95419760fbab5185a469bd3e7091b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_kilby, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 11 04:27:54 compute-0 relaxed_kilby[101254]: 167 167
Oct 11 04:27:54 compute-0 systemd[1]: libpod-b40fe76bf1a6d47033ad693d2c442f5d98e95419760fbab5185a469bd3e7091b.scope: Deactivated successfully.
Oct 11 04:27:54 compute-0 podman[101238]: 2025-10-11 04:27:54.679031607 +0000 UTC m=+0.187086584 container died b40fe76bf1a6d47033ad693d2c442f5d98e95419760fbab5185a469bd3e7091b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_kilby, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 11 04:27:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-aa7f50b32d537d45cb95ab03940ac048c589fef29c6545b4e9a5a69345736ec2-merged.mount: Deactivated successfully.
Oct 11 04:27:54 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Oct 11 04:27:54 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4062505537' entity='client.rgw.rgw.compute-0.xojlng' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct 11 04:27:54 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Oct 11 04:27:54 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Oct 11 04:27:54 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Oct 11 04:27:54 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4062505537' entity='client.rgw.rgw.compute-0.xojlng' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct 11 04:27:54 compute-0 sudo[101289]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjqmxakxgksruwnlkzjhdhmvaudinnfd ; /usr/bin/python3'
Oct 11 04:27:54 compute-0 podman[101238]: 2025-10-11 04:27:54.733518447 +0000 UTC m=+0.241573364 container remove b40fe76bf1a6d47033ad693d2c442f5d98e95419760fbab5185a469bd3e7091b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_kilby, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 11 04:27:54 compute-0 ceph-mon[74243]: osdmap e38: 3 total, 3 up, 3 in
Oct 11 04:27:54 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/4062505537' entity='client.rgw.rgw.compute-0.xojlng' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct 11 04:27:54 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 11 04:27:54 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/3706612396' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 11 04:27:54 compute-0 sudo[101289]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:27:54 compute-0 ceph-mon[74243]: pgmap v88: 11 pgs: 1 unknown, 10 active+clean; 452 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 4.5 KiB/s wr, 12 op/s
Oct 11 04:27:54 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 39 pg[11.0( empty local-lis/les=38/39 n=0 ec=38/38 lis/c=0/0 les/c/f=0/0/0 sis=38) [1] r=0 lpr=38 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:27:54 compute-0 systemd[1]: libpod-conmon-b40fe76bf1a6d47033ad693d2c442f5d98e95419760fbab5185a469bd3e7091b.scope: Deactivated successfully.
Oct 11 04:27:54 compute-0 ceph-mds[100196]: mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Oct 11 04:27:54 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mds-cephfs-compute-0-jbpltj[100192]: 2025-10-11T04:27:54.773+0000 7f0fd38c3640 -1 mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Oct 11 04:27:54 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:27:54 compute-0 python3[101294]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:27:54 compute-0 podman[101303]: 2025-10-11 04:27:54.928699981 +0000 UTC m=+0.054914101 container create 8cf5b3529b2eddee65761488a4de4265e7de3c993891c3ae35f7500fc484c7f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 11 04:27:54 compute-0 podman[101314]: 2025-10-11 04:27:54.975424608 +0000 UTC m=+0.062250393 container create 616773c3be230ef5894829e7df9ed5a61275957a9331519300a231594ab5a098 (image=quay.io/ceph/ceph:v18, name=ecstatic_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:27:54 compute-0 systemd[1]: Started libpod-conmon-8cf5b3529b2eddee65761488a4de4265e7de3c993891c3ae35f7500fc484c7f6.scope.
Oct 11 04:27:54 compute-0 podman[101303]: 2025-10-11 04:27:54.901994109 +0000 UTC m=+0.028208299 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:27:55 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:27:55 compute-0 systemd[1]: Started libpod-conmon-616773c3be230ef5894829e7df9ed5a61275957a9331519300a231594ab5a098.scope.
Oct 11 04:27:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75518944fed8006616be90f642c927990006640537bf9e383e7ff8d91ef17fa8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75518944fed8006616be90f642c927990006640537bf9e383e7ff8d91ef17fa8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75518944fed8006616be90f642c927990006640537bf9e383e7ff8d91ef17fa8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75518944fed8006616be90f642c927990006640537bf9e383e7ff8d91ef17fa8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:55 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:27:55 compute-0 podman[101303]: 2025-10-11 04:27:55.034847719 +0000 UTC m=+0.161061879 container init 8cf5b3529b2eddee65761488a4de4265e7de3c993891c3ae35f7500fc484c7f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_newton, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 11 04:27:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd1357edd45ae1f2d4464bd5e7bfdd1e23de07ff8e3a1a6e077ef9ec382c3093/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd1357edd45ae1f2d4464bd5e7bfdd1e23de07ff8e3a1a6e077ef9ec382c3093/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:55 compute-0 podman[101314]: 2025-10-11 04:27:54.947438165 +0000 UTC m=+0.034263960 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:27:55 compute-0 podman[101314]: 2025-10-11 04:27:55.048748524 +0000 UTC m=+0.135574309 container init 616773c3be230ef5894829e7df9ed5a61275957a9331519300a231594ab5a098 (image=quay.io/ceph/ceph:v18, name=ecstatic_leavitt, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:27:55 compute-0 podman[101303]: 2025-10-11 04:27:55.053717977 +0000 UTC m=+0.179932097 container start 8cf5b3529b2eddee65761488a4de4265e7de3c993891c3ae35f7500fc484c7f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_newton, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 11 04:27:55 compute-0 podman[101303]: 2025-10-11 04:27:55.05747715 +0000 UTC m=+0.183691320 container attach 8cf5b3529b2eddee65761488a4de4265e7de3c993891c3ae35f7500fc484c7f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_newton, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:27:55 compute-0 podman[101314]: 2025-10-11 04:27:55.058675239 +0000 UTC m=+0.145501014 container start 616773c3be230ef5894829e7df9ed5a61275957a9331519300a231594ab5a098 (image=quay.io/ceph/ceph:v18, name=ecstatic_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 11 04:27:55 compute-0 podman[101314]: 2025-10-11 04:27:55.062302929 +0000 UTC m=+0.149128724 container attach 616773c3be230ef5894829e7df9ed5a61275957a9331519300a231594ab5a098 (image=quay.io/ceph/ceph:v18, name=ecstatic_leavitt, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:27:55 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0) v1
Oct 11 04:27:55 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1298745408' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Oct 11 04:27:55 compute-0 ecstatic_leavitt[101337]: mimic
Oct 11 04:27:55 compute-0 systemd[1]: libpod-616773c3be230ef5894829e7df9ed5a61275957a9331519300a231594ab5a098.scope: Deactivated successfully.
Oct 11 04:27:55 compute-0 podman[101314]: 2025-10-11 04:27:55.604148458 +0000 UTC m=+0.690974243 container died 616773c3be230ef5894829e7df9ed5a61275957a9331519300a231594ab5a098 (image=quay.io/ceph/ceph:v18, name=ecstatic_leavitt, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 11 04:27:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-dd1357edd45ae1f2d4464bd5e7bfdd1e23de07ff8e3a1a6e077ef9ec382c3093-merged.mount: Deactivated successfully.
Oct 11 04:27:55 compute-0 podman[101314]: 2025-10-11 04:27:55.682953 +0000 UTC m=+0.769778785 container remove 616773c3be230ef5894829e7df9ed5a61275957a9331519300a231594ab5a098 (image=quay.io/ceph/ceph:v18, name=ecstatic_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Oct 11 04:27:55 compute-0 systemd[1]: libpod-conmon-616773c3be230ef5894829e7df9ed5a61275957a9331519300a231594ab5a098.scope: Deactivated successfully.
Oct 11 04:27:55 compute-0 sudo[101289]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:55 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Oct 11 04:27:55 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4062505537' entity='client.rgw.rgw.compute-0.xojlng' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct 11 04:27:55 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Oct 11 04:27:55 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Oct 11 04:27:55 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/4062505537' entity='client.rgw.rgw.compute-0.xojlng' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct 11 04:27:55 compute-0 ceph-mon[74243]: osdmap e39: 3 total, 3 up, 3 in
Oct 11 04:27:55 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/4062505537' entity='client.rgw.rgw.compute-0.xojlng' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct 11 04:27:55 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/1298745408' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Oct 11 04:27:55 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/4062505537' entity='client.rgw.rgw.compute-0.xojlng' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct 11 04:27:55 compute-0 ceph-mon[74243]: osdmap e40: 3 total, 3 up, 3 in
Oct 11 04:27:55 compute-0 vigilant_newton[101332]: {
Oct 11 04:27:55 compute-0 vigilant_newton[101332]:     "0": [
Oct 11 04:27:55 compute-0 vigilant_newton[101332]:         {
Oct 11 04:27:55 compute-0 vigilant_newton[101332]:             "devices": [
Oct 11 04:27:55 compute-0 vigilant_newton[101332]:                 "/dev/loop3"
Oct 11 04:27:55 compute-0 vigilant_newton[101332]:             ],
Oct 11 04:27:55 compute-0 vigilant_newton[101332]:             "lv_name": "ceph_lv0",
Oct 11 04:27:55 compute-0 vigilant_newton[101332]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:27:55 compute-0 vigilant_newton[101332]:             "lv_size": "21470642176",
Oct 11 04:27:55 compute-0 vigilant_newton[101332]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29ed28f5-c2da-4c6f-bb64-dc7391248f4a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:27:55 compute-0 vigilant_newton[101332]:             "lv_uuid": "SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf",
Oct 11 04:27:55 compute-0 vigilant_newton[101332]:             "name": "ceph_lv0",
Oct 11 04:27:55 compute-0 vigilant_newton[101332]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:27:55 compute-0 vigilant_newton[101332]:             "tags": {
Oct 11 04:27:55 compute-0 vigilant_newton[101332]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:27:55 compute-0 vigilant_newton[101332]:                 "ceph.block_uuid": "SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf",
Oct 11 04:27:55 compute-0 vigilant_newton[101332]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:27:55 compute-0 vigilant_newton[101332]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:27:55 compute-0 vigilant_newton[101332]:                 "ceph.cluster_name": "ceph",
Oct 11 04:27:55 compute-0 vigilant_newton[101332]:                 "ceph.crush_device_class": "",
Oct 11 04:27:55 compute-0 vigilant_newton[101332]:                 "ceph.encrypted": "0",
Oct 11 04:27:55 compute-0 vigilant_newton[101332]:                 "ceph.osd_fsid": "29ed28f5-c2da-4c6f-bb64-dc7391248f4a",
Oct 11 04:27:55 compute-0 vigilant_newton[101332]:                 "ceph.osd_id": "0",
Oct 11 04:27:55 compute-0 vigilant_newton[101332]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:27:55 compute-0 vigilant_newton[101332]:                 "ceph.type": "block",
Oct 11 04:27:55 compute-0 vigilant_newton[101332]:                 "ceph.vdo": "0"
Oct 11 04:27:55 compute-0 vigilant_newton[101332]:             },
Oct 11 04:27:55 compute-0 vigilant_newton[101332]:             "type": "block",
Oct 11 04:27:55 compute-0 vigilant_newton[101332]:             "vg_name": "ceph_vg0"
Oct 11 04:27:55 compute-0 vigilant_newton[101332]:         }
Oct 11 04:27:55 compute-0 vigilant_newton[101332]:     ],
Oct 11 04:27:55 compute-0 vigilant_newton[101332]:     "1": [
Oct 11 04:27:55 compute-0 vigilant_newton[101332]:         {
Oct 11 04:27:55 compute-0 vigilant_newton[101332]:             "devices": [
Oct 11 04:27:55 compute-0 vigilant_newton[101332]:                 "/dev/loop4"
Oct 11 04:27:55 compute-0 vigilant_newton[101332]:             ],
Oct 11 04:27:55 compute-0 vigilant_newton[101332]:             "lv_name": "ceph_lv1",
Oct 11 04:27:55 compute-0 vigilant_newton[101332]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:27:55 compute-0 vigilant_newton[101332]:             "lv_size": "21470642176",
Oct 11 04:27:55 compute-0 vigilant_newton[101332]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=235849fc-4683-43e5-9b6a-a0d6f8d1cee8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:27:55 compute-0 vigilant_newton[101332]:             "lv_uuid": "N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol",
Oct 11 04:27:55 compute-0 vigilant_newton[101332]:             "name": "ceph_lv1",
Oct 11 04:27:55 compute-0 vigilant_newton[101332]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:27:55 compute-0 vigilant_newton[101332]:             "tags": {
Oct 11 04:27:55 compute-0 vigilant_newton[101332]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:27:55 compute-0 vigilant_newton[101332]:                 "ceph.block_uuid": "N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol",
Oct 11 04:27:55 compute-0 vigilant_newton[101332]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:27:55 compute-0 vigilant_newton[101332]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:27:55 compute-0 vigilant_newton[101332]:                 "ceph.cluster_name": "ceph",
Oct 11 04:27:55 compute-0 vigilant_newton[101332]:                 "ceph.crush_device_class": "",
Oct 11 04:27:55 compute-0 vigilant_newton[101332]:                 "ceph.encrypted": "0",
Oct 11 04:27:55 compute-0 vigilant_newton[101332]:                 "ceph.osd_fsid": "235849fc-4683-43e5-9b6a-a0d6f8d1cee8",
Oct 11 04:27:55 compute-0 vigilant_newton[101332]:                 "ceph.osd_id": "1",
Oct 11 04:27:55 compute-0 vigilant_newton[101332]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:27:55 compute-0 vigilant_newton[101332]:                 "ceph.type": "block",
Oct 11 04:27:55 compute-0 vigilant_newton[101332]:                 "ceph.vdo": "0"
Oct 11 04:27:55 compute-0 vigilant_newton[101332]:             },
Oct 11 04:27:55 compute-0 vigilant_newton[101332]:             "type": "block",
Oct 11 04:27:55 compute-0 vigilant_newton[101332]:             "vg_name": "ceph_vg1"
Oct 11 04:27:55 compute-0 vigilant_newton[101332]:         }
Oct 11 04:27:55 compute-0 vigilant_newton[101332]:     ],
Oct 11 04:27:55 compute-0 vigilant_newton[101332]:     "2": [
Oct 11 04:27:55 compute-0 vigilant_newton[101332]:         {
Oct 11 04:27:55 compute-0 vigilant_newton[101332]:             "devices": [
Oct 11 04:27:55 compute-0 vigilant_newton[101332]:                 "/dev/loop5"
Oct 11 04:27:55 compute-0 vigilant_newton[101332]:             ],
Oct 11 04:27:55 compute-0 vigilant_newton[101332]:             "lv_name": "ceph_lv2",
Oct 11 04:27:55 compute-0 vigilant_newton[101332]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:27:55 compute-0 vigilant_newton[101332]:             "lv_size": "21470642176",
Oct 11 04:27:55 compute-0 vigilant_newton[101332]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=30486846-2cc7-4787-8e36-0fb42ef328c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:27:55 compute-0 vigilant_newton[101332]:             "lv_uuid": "HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a",
Oct 11 04:27:55 compute-0 vigilant_newton[101332]:             "name": "ceph_lv2",
Oct 11 04:27:55 compute-0 vigilant_newton[101332]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:27:55 compute-0 vigilant_newton[101332]:             "tags": {
Oct 11 04:27:55 compute-0 vigilant_newton[101332]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:27:55 compute-0 vigilant_newton[101332]:                 "ceph.block_uuid": "HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a",
Oct 11 04:27:55 compute-0 vigilant_newton[101332]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:27:55 compute-0 vigilant_newton[101332]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:27:55 compute-0 vigilant_newton[101332]:                 "ceph.cluster_name": "ceph",
Oct 11 04:27:55 compute-0 vigilant_newton[101332]:                 "ceph.crush_device_class": "",
Oct 11 04:27:55 compute-0 vigilant_newton[101332]:                 "ceph.encrypted": "0",
Oct 11 04:27:55 compute-0 vigilant_newton[101332]:                 "ceph.osd_fsid": "30486846-2cc7-4787-8e36-0fb42ef328c5",
Oct 11 04:27:55 compute-0 vigilant_newton[101332]:                 "ceph.osd_id": "2",
Oct 11 04:27:55 compute-0 vigilant_newton[101332]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:27:55 compute-0 vigilant_newton[101332]:                 "ceph.type": "block",
Oct 11 04:27:55 compute-0 vigilant_newton[101332]:                 "ceph.vdo": "0"
Oct 11 04:27:55 compute-0 vigilant_newton[101332]:             },
Oct 11 04:27:55 compute-0 vigilant_newton[101332]:             "type": "block",
Oct 11 04:27:55 compute-0 vigilant_newton[101332]:             "vg_name": "ceph_vg2"
Oct 11 04:27:55 compute-0 vigilant_newton[101332]:         }
Oct 11 04:27:55 compute-0 vigilant_newton[101332]:     ]
Oct 11 04:27:55 compute-0 vigilant_newton[101332]: }
Oct 11 04:27:55 compute-0 systemd[1]: libpod-8cf5b3529b2eddee65761488a4de4265e7de3c993891c3ae35f7500fc484c7f6.scope: Deactivated successfully.
Oct 11 04:27:55 compute-0 podman[101303]: 2025-10-11 04:27:55.832906403 +0000 UTC m=+0.959120493 container died 8cf5b3529b2eddee65761488a4de4265e7de3c993891c3ae35f7500fc484c7f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_newton, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:27:55 compute-0 radosgw[99732]: LDAP not started since no server URIs were provided in the configuration.
Oct 11 04:27:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-75518944fed8006616be90f642c927990006640537bf9e383e7ff8d91ef17fa8-merged.mount: Deactivated successfully.
Oct 11 04:27:55 compute-0 radosgw[99732]: framework: beast
Oct 11 04:27:55 compute-0 radosgw[99732]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Oct 11 04:27:55 compute-0 radosgw[99732]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Oct 11 04:27:55 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-rgw-rgw-compute-0-xojlng[99728]: 2025-10-11T04:27:55.865+0000 7f3aa68b9940 -1 LDAP not started since no server URIs were provided in the configuration.
Oct 11 04:27:55 compute-0 radosgw[99732]: starting handler: beast
Oct 11 04:27:55 compute-0 radosgw[99732]: set uid:gid to 167:167 (ceph:ceph)
Oct 11 04:27:55 compute-0 podman[101303]: 2025-10-11 04:27:55.914870982 +0000 UTC m=+1.041085102 container remove 8cf5b3529b2eddee65761488a4de4265e7de3c993891c3ae35f7500fc484c7f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_newton, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 11 04:27:55 compute-0 systemd[1]: libpod-conmon-8cf5b3529b2eddee65761488a4de4265e7de3c993891c3ae35f7500fc484c7f6.scope: Deactivated successfully.
Oct 11 04:27:55 compute-0 sudo[101173]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:55 compute-0 radosgw[99732]: mgrc service_daemon_register rgw.14273 metadata {arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.xojlng,kernel_description=#1 SMP PREEMPT_DYNAMIC Tue Sep 30 07:37:35 UTC 2025,kernel_version=5.14.0-621.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864344,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=1dfe1461-9d72-4321-be73-bb9d1825d662,zone_name=default,zonegroup_id=9da74674-1200-4d17-b71c-d94cba0e9750,zonegroup_name=default}
Oct 11 04:27:56 compute-0 sudo[101932]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:27:56 compute-0 sudo[101932]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:56 compute-0 sudo[101932]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:56 compute-0 ceph-mgr[74542]: [balancer INFO root] Optimize plan auto_2025-10-11_04:27:56
Oct 11 04:27:56 compute-0 ceph-mgr[74542]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:27:56 compute-0 ceph-mgr[74542]: [balancer INFO root] Some PGs (0.090909) are unknown; try again later
Oct 11 04:27:56 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v91: 11 pgs: 1 unknown, 10 active+clean; 452 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:27:56 compute-0 sudo[101957]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:27:56 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:27:56 compute-0 sudo[101957]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:56 compute-0 sudo[101957]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:56 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:27:56 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:27:56 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:27:56 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct 11 04:27:56 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:27:56 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct 11 04:27:56 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:27:56 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct 11 04:27:56 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:27:56 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct 11 04:27:56 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:27:56 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 1)
Oct 11 04:27:56 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:27:56 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct 11 04:27:56 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:27:56 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 1)
Oct 11 04:27:56 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:27:56 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct 11 04:27:56 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:27:56 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct 11 04:27:56 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:27:56 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 1)
Oct 11 04:27:56 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0) v1
Oct 11 04:27:56 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Oct 11 04:27:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:27:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:27:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:27:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:27:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:27:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:27:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:27:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:27:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:27:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:27:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:27:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:27:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:27:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:27:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:27:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:27:56 compute-0 sudo[101982]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:27:56 compute-0 sudo[101982]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:56 compute-0 sudo[101982]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:56 compute-0 sudo[102007]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -- raw list --format json
Oct 11 04:27:56 compute-0 sudo[102007]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:56 compute-0 sudo[102082]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onekxwyjeawjxovdvuremjyitkasppiv ; /usr/bin/python3'
Oct 11 04:27:56 compute-0 sudo[102082]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:27:56 compute-0 podman[102099]: 2025-10-11 04:27:56.713973162 +0000 UTC m=+0.083787446 container create 9b42b38ba7b31204cc0a65198982df075eb46575b91b527c03fbb990d65b8f95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_khayyam, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:27:56 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Oct 11 04:27:56 compute-0 ceph-mon[74243]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct 11 04:27:56 compute-0 ceph-mon[74243]: log_channel(cluster) log [INF] : Cluster is now healthy
Oct 11 04:27:56 compute-0 ceph-mon[74243]: pgmap v91: 11 pgs: 1 unknown, 10 active+clean; 452 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:27:56 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Oct 11 04:27:56 compute-0 systemd[1]: Started libpod-conmon-9b42b38ba7b31204cc0a65198982df075eb46575b91b527c03fbb990d65b8f95.scope.
Oct 11 04:27:56 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Oct 11 04:27:56 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Oct 11 04:27:56 compute-0 python3[102090]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:27:56 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Oct 11 04:27:56 compute-0 ceph-mgr[74542]: [progress INFO root] update: starting ev 838b0b51-2289-4e9c-a222-663af58be503 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Oct 11 04:27:56 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0) v1
Oct 11 04:27:56 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Oct 11 04:27:56 compute-0 podman[102099]: 2025-10-11 04:27:56.676078874 +0000 UTC m=+0.045893168 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:27:56 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:27:56 compute-0 podman[102099]: 2025-10-11 04:27:56.808996046 +0000 UTC m=+0.178810310 container init 9b42b38ba7b31204cc0a65198982df075eb46575b91b527c03fbb990d65b8f95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 11 04:27:56 compute-0 podman[102099]: 2025-10-11 04:27:56.818293816 +0000 UTC m=+0.188108070 container start 9b42b38ba7b31204cc0a65198982df075eb46575b91b527c03fbb990d65b8f95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_khayyam, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:27:56 compute-0 epic_khayyam[102115]: 167 167
Oct 11 04:27:56 compute-0 systemd[1]: libpod-9b42b38ba7b31204cc0a65198982df075eb46575b91b527c03fbb990d65b8f95.scope: Deactivated successfully.
Oct 11 04:27:56 compute-0 podman[102099]: 2025-10-11 04:27:56.825516165 +0000 UTC m=+0.195330409 container attach 9b42b38ba7b31204cc0a65198982df075eb46575b91b527c03fbb990d65b8f95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_khayyam, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:27:56 compute-0 podman[102099]: 2025-10-11 04:27:56.825813872 +0000 UTC m=+0.195628126 container died 9b42b38ba7b31204cc0a65198982df075eb46575b91b527c03fbb990d65b8f95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_khayyam, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 11 04:27:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-ede1c96af1565d2424d60532a6701171a0fc0b60d38f352fbd203742d7435881-merged.mount: Deactivated successfully.
Oct 11 04:27:56 compute-0 podman[102099]: 2025-10-11 04:27:56.871403261 +0000 UTC m=+0.241217535 container remove 9b42b38ba7b31204cc0a65198982df075eb46575b91b527c03fbb990d65b8f95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 11 04:27:56 compute-0 systemd[1]: libpod-conmon-9b42b38ba7b31204cc0a65198982df075eb46575b91b527c03fbb990d65b8f95.scope: Deactivated successfully.
Oct 11 04:27:56 compute-0 podman[102117]: 2025-10-11 04:27:56.890838233 +0000 UTC m=+0.104413027 container create 874a01a7fce6b270a1a75a4aa5639b928c57e2964f0661af4eb2cfb12dadc2b0 (image=quay.io/ceph/ceph:v18, name=hopeful_bhabha, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:27:56 compute-0 podman[102117]: 2025-10-11 04:27:56.821729031 +0000 UTC m=+0.035303835 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:27:56 compute-0 systemd[1]: Started libpod-conmon-874a01a7fce6b270a1a75a4aa5639b928c57e2964f0661af4eb2cfb12dadc2b0.scope.
Oct 11 04:27:56 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:27:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd00f2a5735d07ca28831de46ce45fedfee8fb83b51e129f3d032b983fde3226/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd00f2a5735d07ca28831de46ce45fedfee8fb83b51e129f3d032b983fde3226/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:56 compute-0 podman[102117]: 2025-10-11 04:27:56.962378824 +0000 UTC m=+0.175953628 container init 874a01a7fce6b270a1a75a4aa5639b928c57e2964f0661af4eb2cfb12dadc2b0 (image=quay.io/ceph/ceph:v18, name=hopeful_bhabha, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 11 04:27:56 compute-0 podman[102117]: 2025-10-11 04:27:56.968860425 +0000 UTC m=+0.182435209 container start 874a01a7fce6b270a1a75a4aa5639b928c57e2964f0661af4eb2cfb12dadc2b0 (image=quay.io/ceph/ceph:v18, name=hopeful_bhabha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 11 04:27:56 compute-0 podman[102117]: 2025-10-11 04:27:56.973375397 +0000 UTC m=+0.186950181 container attach 874a01a7fce6b270a1a75a4aa5639b928c57e2964f0661af4eb2cfb12dadc2b0 (image=quay.io/ceph/ceph:v18, name=hopeful_bhabha, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 11 04:27:57 compute-0 podman[102159]: 2025-10-11 04:27:57.044851857 +0000 UTC m=+0.048424680 container create 4d109c5d6a3e4a211e8bfbceca5566849a79cbd352c2a8a9a4497093e2f93b9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_mendel, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:27:57 compute-0 systemd[1]: Started libpod-conmon-4d109c5d6a3e4a211e8bfbceca5566849a79cbd352c2a8a9a4497093e2f93b9b.scope.
Oct 11 04:27:57 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:27:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7735a29ca462fe44d52df6754320c851abd64af248710573a9732a8d954245cb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7735a29ca462fe44d52df6754320c851abd64af248710573a9732a8d954245cb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7735a29ca462fe44d52df6754320c851abd64af248710573a9732a8d954245cb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7735a29ca462fe44d52df6754320c851abd64af248710573a9732a8d954245cb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:57 compute-0 podman[102159]: 2025-10-11 04:27:57.025837336 +0000 UTC m=+0.029410189 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:27:57 compute-0 podman[102159]: 2025-10-11 04:27:57.121934596 +0000 UTC m=+0.125507499 container init 4d109c5d6a3e4a211e8bfbceca5566849a79cbd352c2a8a9a4497093e2f93b9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 11 04:27:57 compute-0 podman[102159]: 2025-10-11 04:27:57.127483193 +0000 UTC m=+0.131056006 container start 4d109c5d6a3e4a211e8bfbceca5566849a79cbd352c2a8a9a4497093e2f93b9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_mendel, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 11 04:27:57 compute-0 podman[102159]: 2025-10-11 04:27:57.130961199 +0000 UTC m=+0.134534062 container attach 4d109c5d6a3e4a211e8bfbceca5566849a79cbd352c2a8a9a4497093e2f93b9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_mendel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 11 04:27:57 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions", "format": "json"} v 0) v1
Oct 11 04:27:57 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2628570052' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Oct 11 04:27:57 compute-0 hopeful_bhabha[102150]: 
Oct 11 04:27:57 compute-0 systemd[1]: libpod-874a01a7fce6b270a1a75a4aa5639b928c57e2964f0661af4eb2cfb12dadc2b0.scope: Deactivated successfully.
Oct 11 04:27:57 compute-0 hopeful_bhabha[102150]: {"mon":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"mgr":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"osd":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":3},"mds":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"rgw":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"overall":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":7}}
Oct 11 04:27:57 compute-0 podman[102117]: 2025-10-11 04:27:57.551050563 +0000 UTC m=+0.764625407 container died 874a01a7fce6b270a1a75a4aa5639b928c57e2964f0661af4eb2cfb12dadc2b0 (image=quay.io/ceph/ceph:v18, name=hopeful_bhabha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:27:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-fd00f2a5735d07ca28831de46ce45fedfee8fb83b51e129f3d032b983fde3226-merged.mount: Deactivated successfully.
Oct 11 04:27:57 compute-0 podman[102117]: 2025-10-11 04:27:57.611164202 +0000 UTC m=+0.824739026 container remove 874a01a7fce6b270a1a75a4aa5639b928c57e2964f0661af4eb2cfb12dadc2b0 (image=quay.io/ceph/ceph:v18, name=hopeful_bhabha, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 11 04:27:57 compute-0 systemd[1]: libpod-conmon-874a01a7fce6b270a1a75a4aa5639b928c57e2964f0661af4eb2cfb12dadc2b0.scope: Deactivated successfully.
Oct 11 04:27:57 compute-0 sudo[102082]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:57 compute-0 ceph-mon[74243]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct 11 04:27:57 compute-0 ceph-mon[74243]: Cluster is now healthy
Oct 11 04:27:57 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Oct 11 04:27:57 compute-0 ceph-mon[74243]: osdmap e41: 3 total, 3 up, 3 in
Oct 11 04:27:57 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Oct 11 04:27:57 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/2628570052' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Oct 11 04:27:57 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Oct 11 04:27:57 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Oct 11 04:27:57 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Oct 11 04:27:57 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Oct 11 04:27:57 compute-0 ceph-mgr[74542]: [progress INFO root] update: starting ev 1f9bdf04-28df-4ef8-9252-02cca2858436 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Oct 11 04:27:57 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0) v1
Oct 11 04:27:57 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Oct 11 04:27:58 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v94: 11 pgs: 11 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 127 KiB/s rd, 8.5 KiB/s wr, 277 op/s
Oct 11 04:27:58 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct 11 04:27:58 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 11 04:27:58 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct 11 04:27:58 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 11 04:27:58 compute-0 wonderful_mendel[102176]: {
Oct 11 04:27:58 compute-0 wonderful_mendel[102176]:     "235849fc-4683-43e5-9b6a-a0d6f8d1cee8": {
Oct 11 04:27:58 compute-0 wonderful_mendel[102176]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:27:58 compute-0 wonderful_mendel[102176]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:27:58 compute-0 wonderful_mendel[102176]:         "osd_id": 1,
Oct 11 04:27:58 compute-0 wonderful_mendel[102176]:         "osd_uuid": "235849fc-4683-43e5-9b6a-a0d6f8d1cee8",
Oct 11 04:27:58 compute-0 wonderful_mendel[102176]:         "type": "bluestore"
Oct 11 04:27:58 compute-0 wonderful_mendel[102176]:     },
Oct 11 04:27:58 compute-0 wonderful_mendel[102176]:     "29ed28f5-c2da-4c6f-bb64-dc7391248f4a": {
Oct 11 04:27:58 compute-0 wonderful_mendel[102176]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:27:58 compute-0 wonderful_mendel[102176]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:27:58 compute-0 wonderful_mendel[102176]:         "osd_id": 0,
Oct 11 04:27:58 compute-0 wonderful_mendel[102176]:         "osd_uuid": "29ed28f5-c2da-4c6f-bb64-dc7391248f4a",
Oct 11 04:27:58 compute-0 wonderful_mendel[102176]:         "type": "bluestore"
Oct 11 04:27:58 compute-0 wonderful_mendel[102176]:     },
Oct 11 04:27:58 compute-0 wonderful_mendel[102176]:     "30486846-2cc7-4787-8e36-0fb42ef328c5": {
Oct 11 04:27:58 compute-0 wonderful_mendel[102176]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:27:58 compute-0 wonderful_mendel[102176]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:27:58 compute-0 wonderful_mendel[102176]:         "osd_id": 2,
Oct 11 04:27:58 compute-0 wonderful_mendel[102176]:         "osd_uuid": "30486846-2cc7-4787-8e36-0fb42ef328c5",
Oct 11 04:27:58 compute-0 wonderful_mendel[102176]:         "type": "bluestore"
Oct 11 04:27:58 compute-0 wonderful_mendel[102176]:     }
Oct 11 04:27:58 compute-0 wonderful_mendel[102176]: }
Oct 11 04:27:58 compute-0 systemd[1]: libpod-4d109c5d6a3e4a211e8bfbceca5566849a79cbd352c2a8a9a4497093e2f93b9b.scope: Deactivated successfully.
Oct 11 04:27:58 compute-0 systemd[1]: libpod-4d109c5d6a3e4a211e8bfbceca5566849a79cbd352c2a8a9a4497093e2f93b9b.scope: Consumed 1.073s CPU time.
Oct 11 04:27:58 compute-0 podman[102159]: 2025-10-11 04:27:58.198170689 +0000 UTC m=+1.201743532 container died 4d109c5d6a3e4a211e8bfbceca5566849a79cbd352c2a8a9a4497093e2f93b9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_mendel, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 11 04:27:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-7735a29ca462fe44d52df6754320c851abd64af248710573a9732a8d954245cb-merged.mount: Deactivated successfully.
Oct 11 04:27:58 compute-0 podman[102159]: 2025-10-11 04:27:58.272661824 +0000 UTC m=+1.276234667 container remove 4d109c5d6a3e4a211e8bfbceca5566849a79cbd352c2a8a9a4497093e2f93b9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_mendel, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 11 04:27:58 compute-0 systemd[1]: libpod-conmon-4d109c5d6a3e4a211e8bfbceca5566849a79cbd352c2a8a9a4497093e2f93b9b.scope: Deactivated successfully.
Oct 11 04:27:58 compute-0 sudo[102007]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:58 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:27:58 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:58 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:27:58 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:58 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev f09c1a00-2ddf-43ee-828a-81289dfce884 does not exist
Oct 11 04:27:58 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev 30100c03-19ad-4dba-9add-c3db6738a926 does not exist
Oct 11 04:27:58 compute-0 sudo[102255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:27:58 compute-0 sudo[102255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:58 compute-0 sudo[102255]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:58 compute-0 sudo[102280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 04:27:58 compute-0 sudo[102280]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:58 compute-0 sudo[102280]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:58 compute-0 sudo[102305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:27:58 compute-0 sudo[102305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:58 compute-0 sudo[102305]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:58 compute-0 sudo[102330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:27:58 compute-0 sudo[102330]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:58 compute-0 sudo[102330]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:58 compute-0 sudo[102355]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:27:58 compute-0 sudo[102355]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:58 compute-0 sudo[102355]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:58 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Oct 11 04:27:58 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Oct 11 04:27:58 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Oct 11 04:27:58 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Oct 11 04:27:58 compute-0 sudo[102380]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 11 04:27:58 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Oct 11 04:27:58 compute-0 sudo[102380]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:58 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Oct 11 04:27:58 compute-0 ceph-mgr[74542]: [progress INFO root] update: starting ev 377194cd-55b9-4216-b805-bf664c3641e5 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Oct 11 04:27:58 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 43 pg[2.0( empty local-lis/les=18/19 n=0 ec=13/13 lis/c=18/18 les/c/f=19/19/0 sis=43 pruub=13.427091599s) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active pruub 70.901100159s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:27:58 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0) v1
Oct 11 04:27:58 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Oct 11 04:27:58 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 43 pg[2.0( empty local-lis/les=18/19 n=0 ec=13/13 lis/c=18/18 les/c/f=19/19/0 sis=43 pruub=13.427091599s) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown pruub 70.901100159s@ mbc={}] state<Start>: transitioning to Primary
Oct 11 04:27:58 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Oct 11 04:27:58 compute-0 ceph-mon[74243]: osdmap e42: 3 total, 3 up, 3 in
Oct 11 04:27:58 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Oct 11 04:27:58 compute-0 ceph-mon[74243]: pgmap v94: 11 pgs: 11 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 127 KiB/s rd, 8.5 KiB/s wr, 277 op/s
Oct 11 04:27:58 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 11 04:27:58 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 11 04:27:58 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:58 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:27:59 compute-0 podman[102475]: 2025-10-11 04:27:59.422982551 +0000 UTC m=+0.072882916 container exec 9e6c9a4e99dcfe74f2acde1b3251aae6bd833ab8c3a16ed08813556f7d4dbff5 (image=quay.io/ceph/ceph:v18, name=ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mon-compute-0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:27:59 compute-0 podman[102475]: 2025-10-11 04:27:59.548096739 +0000 UTC m=+0.197997054 container exec_died 9e6c9a4e99dcfe74f2acde1b3251aae6bd833ab8c3a16ed08813556f7d4dbff5 (image=quay.io/ceph/ceph:v18, name=ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mon-compute-0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 11 04:27:59 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Oct 11 04:27:59 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Oct 11 04:27:59 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Oct 11 04:27:59 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Oct 11 04:27:59 compute-0 ceph-mgr[74542]: [progress INFO root] update: starting ev b41b7061-0237-401e-a7b8-362afd30d781 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Oct 11 04:27:59 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e44 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:27:59 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} v 0) v1
Oct 11 04:27:59 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Oct 11 04:27:59 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 44 pg[2.1e( empty local-lis/les=18/19 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:27:59 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 44 pg[2.1c( empty local-lis/les=18/19 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:27:59 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 44 pg[2.1f( empty local-lis/les=18/19 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:27:59 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 44 pg[2.1d( empty local-lis/les=18/19 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:27:59 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 44 pg[2.1b( empty local-lis/les=18/19 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:27:59 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 44 pg[2.a( empty local-lis/les=18/19 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:27:59 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 44 pg[2.9( empty local-lis/les=18/19 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:27:59 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 44 pg[2.6( empty local-lis/les=18/19 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:27:59 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 44 pg[2.5( empty local-lis/les=18/19 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:27:59 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 44 pg[2.4( empty local-lis/les=18/19 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:27:59 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 44 pg[2.3( empty local-lis/les=18/19 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:27:59 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 44 pg[2.2( empty local-lis/les=18/19 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:27:59 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 44 pg[2.1( empty local-lis/les=18/19 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:27:59 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 44 pg[2.8( empty local-lis/les=18/19 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:27:59 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 44 pg[2.7( empty local-lis/les=18/19 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:27:59 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 44 pg[2.b( empty local-lis/les=18/19 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:27:59 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 44 pg[2.c( empty local-lis/les=18/19 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:27:59 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 44 pg[2.d( empty local-lis/les=18/19 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:27:59 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 44 pg[2.f( empty local-lis/les=18/19 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:27:59 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 44 pg[2.10( empty local-lis/les=18/19 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:27:59 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 44 pg[2.e( empty local-lis/les=18/19 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:27:59 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 44 pg[2.11( empty local-lis/les=18/19 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:27:59 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 44 pg[2.12( empty local-lis/les=18/19 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:27:59 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 44 pg[2.14( empty local-lis/les=18/19 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:27:59 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 44 pg[2.15( empty local-lis/les=18/19 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:27:59 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 44 pg[2.17( empty local-lis/les=18/19 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:27:59 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 44 pg[2.13( empty local-lis/les=18/19 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:27:59 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 44 pg[2.19( empty local-lis/les=18/19 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:27:59 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 44 pg[2.18( empty local-lis/les=18/19 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:27:59 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 44 pg[2.16( empty local-lis/les=18/19 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:27:59 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 44 pg[2.1a( empty local-lis/les=18/19 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:27:59 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Oct 11 04:27:59 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Oct 11 04:27:59 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Oct 11 04:27:59 compute-0 ceph-mon[74243]: osdmap e43: 3 total, 3 up, 3 in
Oct 11 04:27:59 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Oct 11 04:27:59 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Oct 11 04:27:59 compute-0 ceph-mon[74243]: osdmap e44: 3 total, 3 up, 3 in
Oct 11 04:27:59 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 44 pg[2.1e( empty local-lis/les=43/44 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:27:59 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 44 pg[2.1f( empty local-lis/les=43/44 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:27:59 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 44 pg[2.a( empty local-lis/les=43/44 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:27:59 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 44 pg[2.1d( empty local-lis/les=43/44 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:27:59 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 44 pg[2.9( empty local-lis/les=43/44 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:27:59 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 44 pg[2.5( empty local-lis/les=43/44 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:27:59 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 44 pg[2.4( empty local-lis/les=43/44 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:27:59 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 44 pg[2.1c( empty local-lis/les=43/44 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:27:59 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 44 pg[2.2( empty local-lis/les=43/44 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:27:59 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 44 pg[2.3( empty local-lis/les=43/44 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:27:59 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 44 pg[2.1b( empty local-lis/les=43/44 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:27:59 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 44 pg[2.8( empty local-lis/les=43/44 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:27:59 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 44 pg[2.0( empty local-lis/les=43/44 n=0 ec=13/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:27:59 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 44 pg[2.7( empty local-lis/les=43/44 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:27:59 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 44 pg[2.6( empty local-lis/les=43/44 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:27:59 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 44 pg[2.b( empty local-lis/les=43/44 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:27:59 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 44 pg[2.1( empty local-lis/les=43/44 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:27:59 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 44 pg[2.d( empty local-lis/les=43/44 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:27:59 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 44 pg[2.f( empty local-lis/les=43/44 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:27:59 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 44 pg[2.e( empty local-lis/les=43/44 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:27:59 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 44 pg[2.11( empty local-lis/les=43/44 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:27:59 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 44 pg[2.14( empty local-lis/les=43/44 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:27:59 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 44 pg[2.15( empty local-lis/les=43/44 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:27:59 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 44 pg[2.c( empty local-lis/les=43/44 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:27:59 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 44 pg[2.17( empty local-lis/les=43/44 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:27:59 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 44 pg[2.18( empty local-lis/les=43/44 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:27:59 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 44 pg[2.19( empty local-lis/les=43/44 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:27:59 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 44 pg[2.1a( empty local-lis/les=43/44 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:27:59 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 44 pg[2.13( empty local-lis/les=43/44 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:27:59 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 44 pg[2.10( empty local-lis/les=43/44 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:27:59 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 44 pg[2.16( empty local-lis/les=43/44 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:27:59 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 44 pg[2.12( empty local-lis/les=43/44 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:00 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v97: 73 pgs: 62 unknown, 11 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 127 KiB/s rd, 8.5 KiB/s wr, 277 op/s
Oct 11 04:28:00 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct 11 04:28:00 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 11 04:28:00 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct 11 04:28:00 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 11 04:28:00 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 43 pg[3.0( empty local-lis/les=15/16 n=0 ec=15/15 lis/c=15/15 les/c/f=16/16/0 sis=43 pruub=8.794586182s) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 active pruub 73.217651367s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:00 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 44 pg[3.0( empty local-lis/les=15/16 n=0 ec=15/15 lis/c=15/15 les/c/f=16/16/0 sis=43 pruub=8.794586182s) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 unknown pruub 73.217651367s@ mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:00 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 44 pg[3.8( empty local-lis/les=15/16 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:00 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 44 pg[3.3( empty local-lis/les=15/16 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:00 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 44 pg[3.2( empty local-lis/les=15/16 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:00 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 44 pg[3.1( empty local-lis/les=15/16 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:00 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 44 pg[3.4( empty local-lis/les=15/16 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:00 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 44 pg[3.5( empty local-lis/les=15/16 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:00 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 44 pg[3.7( empty local-lis/les=15/16 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:00 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 44 pg[3.6( empty local-lis/les=15/16 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:00 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 44 pg[3.9( empty local-lis/les=15/16 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:00 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 44 pg[3.a( empty local-lis/les=15/16 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:00 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 44 pg[3.b( empty local-lis/les=15/16 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:00 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 44 pg[3.c( empty local-lis/les=15/16 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:00 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 44 pg[3.d( empty local-lis/les=15/16 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:00 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 44 pg[3.e( empty local-lis/les=15/16 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:00 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 44 pg[3.f( empty local-lis/les=15/16 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:00 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 44 pg[3.10( empty local-lis/les=15/16 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:00 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 44 pg[3.11( empty local-lis/les=15/16 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:00 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 44 pg[3.12( empty local-lis/les=15/16 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:00 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 44 pg[3.13( empty local-lis/les=15/16 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:00 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 44 pg[3.14( empty local-lis/les=15/16 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:00 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 44 pg[3.15( empty local-lis/les=15/16 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:00 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 44 pg[3.16( empty local-lis/les=15/16 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:00 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 44 pg[3.17( empty local-lis/les=15/16 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:00 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 44 pg[3.18( empty local-lis/les=15/16 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:00 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 44 pg[3.19( empty local-lis/les=15/16 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:00 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 44 pg[3.1b( empty local-lis/les=15/16 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:00 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 44 pg[3.1c( empty local-lis/les=15/16 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:00 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 44 pg[3.1d( empty local-lis/les=15/16 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:00 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 44 pg[3.1a( empty local-lis/les=15/16 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:00 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 44 pg[3.1e( empty local-lis/les=15/16 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:00 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 44 pg[3.1f( empty local-lis/les=15/16 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:00 compute-0 sudo[102380]: pam_unix(sudo:session): session closed for user root
Oct 11 04:28:00 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:28:00 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:28:00 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:28:00 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:28:00 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:28:00 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:28:00 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:28:00 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:28:00 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:28:00 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:28:00 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev 8a275921-5e8f-4e10-830c-756f211156be does not exist
Oct 11 04:28:00 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev 2866e2dd-803e-499a-9eb0-66f26a383813 does not exist
Oct 11 04:28:00 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev f5b7b644-ff7c-4309-b975-040114960465 does not exist
Oct 11 04:28:00 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:28:00 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:28:00 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:28:00 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:28:00 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:28:00 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:28:00 compute-0 sudo[102636]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:28:00 compute-0 sudo[102636]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:28:00 compute-0 sudo[102636]: pam_unix(sudo:session): session closed for user root
Oct 11 04:28:00 compute-0 sudo[102661]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:28:00 compute-0 sudo[102661]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:28:00 compute-0 sudo[102661]: pam_unix(sudo:session): session closed for user root
Oct 11 04:28:00 compute-0 sudo[102686]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:28:00 compute-0 sudo[102686]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:28:00 compute-0 sudo[102686]: pam_unix(sudo:session): session closed for user root
Oct 11 04:28:00 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Oct 11 04:28:00 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Oct 11 04:28:00 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Oct 11 04:28:00 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Oct 11 04:28:00 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Oct 11 04:28:00 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Oct 11 04:28:00 compute-0 ceph-mgr[74542]: [progress INFO root] update: starting ev 557ae095-44ab-4700-8118-ccc431805fa3 (PG autoscaler increasing pool 6 PGs from 1 to 16)
Oct 11 04:28:00 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0) v1
Oct 11 04:28:00 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Oct 11 04:28:00 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 45 pg[3.1f( empty local-lis/les=43/45 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:00 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Oct 11 04:28:00 compute-0 ceph-mon[74243]: pgmap v97: 73 pgs: 62 unknown, 11 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 127 KiB/s rd, 8.5 KiB/s wr, 277 op/s
Oct 11 04:28:00 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 11 04:28:00 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 11 04:28:00 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:28:00 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:28:00 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:28:00 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:28:00 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:28:00 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:28:00 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:28:00 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:28:00 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Oct 11 04:28:00 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Oct 11 04:28:00 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Oct 11 04:28:00 compute-0 ceph-mon[74243]: osdmap e45: 3 total, 3 up, 3 in
Oct 11 04:28:00 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Oct 11 04:28:00 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 45 pg[3.1e( empty local-lis/les=43/45 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:00 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 45 pg[3.1c( empty local-lis/les=43/45 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:00 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 45 pg[3.19( empty local-lis/les=43/45 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:00 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 45 pg[3.1d( empty local-lis/les=43/45 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:00 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 45 pg[3.1a( empty local-lis/les=43/45 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:00 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 45 pg[3.1b( empty local-lis/les=43/45 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:00 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 45 pg[3.18( empty local-lis/les=43/45 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:00 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 45 pg[3.6( empty local-lis/les=43/45 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:00 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 45 pg[3.1( empty local-lis/les=43/45 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:00 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 45 pg[3.5( empty local-lis/les=43/45 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:00 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 45 pg[3.a( empty local-lis/les=43/45 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:00 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 45 pg[3.8( empty local-lis/les=43/45 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:00 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 45 pg[3.7( empty local-lis/les=43/45 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:00 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 45 pg[3.3( empty local-lis/les=43/45 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:00 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 45 pg[3.4( empty local-lis/les=43/45 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:00 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 45 pg[3.9( empty local-lis/les=43/45 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:00 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 45 pg[3.b( empty local-lis/les=43/45 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:00 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 45 pg[3.0( empty local-lis/les=43/45 n=0 ec=15/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:00 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 45 pg[3.c( empty local-lis/les=43/45 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:00 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 45 pg[3.f( empty local-lis/les=43/45 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:00 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 45 pg[3.d( empty local-lis/les=43/45 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:00 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 45 pg[3.10( empty local-lis/les=43/45 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:00 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 45 pg[3.e( empty local-lis/les=43/45 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:00 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 45 pg[3.11( empty local-lis/les=43/45 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:00 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 45 pg[3.12( empty local-lis/les=43/45 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:00 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 45 pg[3.13( empty local-lis/les=43/45 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:00 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 45 pg[3.15( empty local-lis/les=43/45 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:00 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 45 pg[3.17( empty local-lis/les=43/45 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:00 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 45 pg[3.14( empty local-lis/les=43/45 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:00 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 45 pg[3.16( empty local-lis/les=43/45 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:00 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 45 pg[3.2( empty local-lis/les=43/45 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:00 compute-0 sudo[102711]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 04:28:00 compute-0 sudo[102711]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:28:01 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 45 pg[5.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=45 pruub=12.156976700s) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active pruub 71.887649536s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:01 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 45 pg[5.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=45 pruub=12.156976700s) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown pruub 71.887649536s@ mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:01 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 45 pg[4.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=45 pruub=10.077462196s) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 active pruub 80.305358887s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:01 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 45 pg[4.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=45 pruub=10.077462196s) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 unknown pruub 80.305358887s@ mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:01 compute-0 ceph-mgr[74542]: [progress WARNING root] Starting Global Recovery Event,124 pgs not in active + clean state
Oct 11 04:28:01 compute-0 podman[102776]: 2025-10-11 04:28:01.303947494 +0000 UTC m=+0.042772461 container create ae9b1ca1ee12d296c44a4bf5aaf12084e85cd97c09a7d5aca75c93c92a1d37b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_murdock, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:28:01 compute-0 systemd[1]: Started libpod-conmon-ae9b1ca1ee12d296c44a4bf5aaf12084e85cd97c09a7d5aca75c93c92a1d37b9.scope.
Oct 11 04:28:01 compute-0 podman[102776]: 2025-10-11 04:28:01.28562844 +0000 UTC m=+0.024453397 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:28:01 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:28:01 compute-0 podman[102776]: 2025-10-11 04:28:01.413442665 +0000 UTC m=+0.152267662 container init ae9b1ca1ee12d296c44a4bf5aaf12084e85cd97c09a7d5aca75c93c92a1d37b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_murdock, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:28:01 compute-0 podman[102776]: 2025-10-11 04:28:01.42372209 +0000 UTC m=+0.162547057 container start ae9b1ca1ee12d296c44a4bf5aaf12084e85cd97c09a7d5aca75c93c92a1d37b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_murdock, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:28:01 compute-0 podman[102776]: 2025-10-11 04:28:01.428233832 +0000 UTC m=+0.167058789 container attach ae9b1ca1ee12d296c44a4bf5aaf12084e85cd97c09a7d5aca75c93c92a1d37b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_murdock, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 11 04:28:01 compute-0 stoic_murdock[102793]: 167 167
Oct 11 04:28:01 compute-0 systemd[1]: libpod-ae9b1ca1ee12d296c44a4bf5aaf12084e85cd97c09a7d5aca75c93c92a1d37b9.scope: Deactivated successfully.
Oct 11 04:28:01 compute-0 podman[102776]: 2025-10-11 04:28:01.431171134 +0000 UTC m=+0.169996091 container died ae9b1ca1ee12d296c44a4bf5aaf12084e85cd97c09a7d5aca75c93c92a1d37b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_murdock, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 11 04:28:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-69b6c46cc7d4fde7fa02aeca2680f1a9a516ecdfb15bb4b92efe41d80d9aac9f-merged.mount: Deactivated successfully.
Oct 11 04:28:01 compute-0 podman[102776]: 2025-10-11 04:28:01.48470708 +0000 UTC m=+0.223532047 container remove ae9b1ca1ee12d296c44a4bf5aaf12084e85cd97c09a7d5aca75c93c92a1d37b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_murdock, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:28:01 compute-0 systemd[1]: libpod-conmon-ae9b1ca1ee12d296c44a4bf5aaf12084e85cd97c09a7d5aca75c93c92a1d37b9.scope: Deactivated successfully.
Oct 11 04:28:01 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Oct 11 04:28:01 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Oct 11 04:28:01 compute-0 podman[102817]: 2025-10-11 04:28:01.710060041 +0000 UTC m=+0.057301860 container create 70102bc2ba5e1dad3086dd8521e291c03a4978c9a72237023ac9352010259cb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:28:01 compute-0 systemd[1]: Started libpod-conmon-70102bc2ba5e1dad3086dd8521e291c03a4978c9a72237023ac9352010259cb1.scope.
Oct 11 04:28:01 compute-0 podman[102817]: 2025-10-11 04:28:01.684437257 +0000 UTC m=+0.031679116 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:28:01 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:28:01 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Oct 11 04:28:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe562329bb4fc13c9bb8334f95997d52c6ac7faa3a3c7f777e9c388a71b39ea6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:28:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe562329bb4fc13c9bb8334f95997d52c6ac7faa3a3c7f777e9c388a71b39ea6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:28:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe562329bb4fc13c9bb8334f95997d52c6ac7faa3a3c7f777e9c388a71b39ea6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:28:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe562329bb4fc13c9bb8334f95997d52c6ac7faa3a3c7f777e9c388a71b39ea6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:28:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe562329bb4fc13c9bb8334f95997d52c6ac7faa3a3c7f777e9c388a71b39ea6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:28:01 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Oct 11 04:28:01 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Oct 11 04:28:01 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Oct 11 04:28:01 compute-0 podman[102817]: 2025-10-11 04:28:01.812830586 +0000 UTC m=+0.160072405 container init 70102bc2ba5e1dad3086dd8521e291c03a4978c9a72237023ac9352010259cb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_tharp, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:28:01 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Oct 11 04:28:01 compute-0 ceph-mgr[74542]: [progress INFO root] update: starting ev 20c5ff25-907e-4245-b8f4-d18d53a47d97 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Oct 11 04:28:01 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0) v1
Oct 11 04:28:01 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Oct 11 04:28:01 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 46 pg[5.1e( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:01 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 46 pg[5.1f( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:01 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 46 pg[5.1d( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:01 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 46 pg[5.11( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:01 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 46 pg[4.1e( empty local-lis/les=17/18 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:01 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 46 pg[4.1f( empty local-lis/les=17/18 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:01 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 46 pg[4.1d( empty local-lis/les=17/18 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:01 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 46 pg[4.1c( empty local-lis/les=17/18 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:01 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 46 pg[4.8( empty local-lis/les=17/18 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:01 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 46 pg[5.12( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:01 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 46 pg[4.7( empty local-lis/les=17/18 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:01 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 46 pg[4.b( empty local-lis/les=17/18 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:01 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 46 pg[4.a( empty local-lis/les=17/18 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:01 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 46 pg[4.6( empty local-lis/les=17/18 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:01 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 46 pg[4.5( empty local-lis/les=17/18 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:01 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 46 pg[5.10( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:01 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 46 pg[5.14( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:01 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 46 pg[5.13( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:01 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 46 pg[5.15( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:01 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 46 pg[5.17( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:01 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 46 pg[5.8( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:01 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 46 pg[4.9( empty local-lis/les=17/18 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:01 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 46 pg[4.1a( empty local-lis/les=17/18 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:01 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 46 pg[4.1b( empty local-lis/les=17/18 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:01 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 46 pg[5.16( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:01 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 46 pg[5.9( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:01 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 46 pg[5.a( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:01 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 46 pg[5.b( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:01 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 46 pg[4.4( empty local-lis/les=17/18 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:01 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 46 pg[5.c( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:01 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 46 pg[4.19( empty local-lis/les=17/18 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:01 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 46 pg[5.7( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:01 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 46 pg[5.f( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:01 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 46 pg[5.6( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:01 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 46 pg[4.3( empty local-lis/les=17/18 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:01 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 46 pg[5.5( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:01 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 46 pg[5.4( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:01 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 46 pg[5.3( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:01 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 46 pg[5.2( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:01 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 46 pg[5.e( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:01 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 46 pg[4.1( empty local-lis/les=17/18 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:01 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 46 pg[4.2( empty local-lis/les=17/18 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:01 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 46 pg[4.c( empty local-lis/les=17/18 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:01 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 46 pg[4.d( empty local-lis/les=17/18 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:01 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 46 pg[4.e( empty local-lis/les=17/18 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:01 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 46 pg[5.1c( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:01 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 46 pg[5.d( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:01 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 46 pg[5.1b( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:01 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 46 pg[4.f( empty local-lis/les=17/18 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:01 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 46 pg[4.10( empty local-lis/les=17/18 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:01 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 46 pg[5.1a( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:01 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 46 pg[4.12( empty local-lis/les=17/18 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:01 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 46 pg[4.13( empty local-lis/les=17/18 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:01 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 46 pg[4.11( empty local-lis/les=17/18 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:01 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 46 pg[4.14( empty local-lis/les=17/18 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:01 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 46 pg[5.1( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:01 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 46 pg[5.19( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:01 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 46 pg[4.15( empty local-lis/les=17/18 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:01 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 46 pg[5.18( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:01 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 46 pg[4.17( empty local-lis/les=17/18 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:01 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 46 pg[4.18( empty local-lis/les=17/18 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:01 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 46 pg[4.16( empty local-lis/les=17/18 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:01 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Oct 11 04:28:01 compute-0 podman[102817]: 2025-10-11 04:28:01.834718998 +0000 UTC m=+0.181960787 container start 70102bc2ba5e1dad3086dd8521e291c03a4978c9a72237023ac9352010259cb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_tharp, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:28:01 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 46 pg[4.1f( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:01 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 46 pg[4.1d( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:01 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 46 pg[5.1e( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:01 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 46 pg[5.1f( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:01 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 46 pg[5.12( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:01 compute-0 podman[102817]: 2025-10-11 04:28:01.842560162 +0000 UTC m=+0.189802041 container attach 70102bc2ba5e1dad3086dd8521e291c03a4978c9a72237023ac9352010259cb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 11 04:28:01 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 46 pg[5.11( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:01 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 46 pg[5.14( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:01 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 46 pg[5.1d( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:01 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 46 pg[5.13( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:01 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 46 pg[5.15( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:01 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 46 pg[5.8( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:01 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 46 pg[5.9( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:01 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 46 pg[5.a( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:01 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 46 pg[5.b( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:01 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 46 pg[5.c( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:01 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 46 pg[5.7( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:01 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 46 pg[5.0( empty local-lis/les=45/46 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:01 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 46 pg[5.17( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:01 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 46 pg[5.f( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:01 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 46 pg[5.6( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:01 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 46 pg[5.5( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:01 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 46 pg[5.e( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:01 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 46 pg[5.3( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:01 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 46 pg[5.16( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:01 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 46 pg[5.4( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:01 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 46 pg[5.2( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:01 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 46 pg[5.1b( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:01 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 46 pg[5.1c( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:01 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 46 pg[5.1a( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:01 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 46 pg[5.d( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:01 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 46 pg[5.19( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:01 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 46 pg[5.1( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:01 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 46 pg[5.18( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:01 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 46 pg[5.10( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:01 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 46 pg[4.8( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:01 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 46 pg[4.7( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:01 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 46 pg[4.1e( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:01 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 46 pg[4.1c( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:01 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 46 pg[4.b( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:01 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 46 pg[4.a( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:01 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 46 pg[4.1a( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:01 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 46 pg[4.9( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:01 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 46 pg[4.4( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:01 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 46 pg[4.3( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:01 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 46 pg[4.5( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:01 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 46 pg[4.1b( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:01 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 46 pg[4.19( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:01 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 46 pg[4.1( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:01 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 46 pg[4.0( empty local-lis/les=45/46 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:01 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 46 pg[4.2( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:01 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 46 pg[4.d( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:01 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 46 pg[4.e( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:01 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 46 pg[4.f( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:01 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 46 pg[4.10( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:01 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 46 pg[4.c( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:01 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 46 pg[4.11( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:01 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 46 pg[4.6( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:01 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 46 pg[4.12( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:01 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 46 pg[4.16( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:01 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 46 pg[4.15( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:01 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 46 pg[4.14( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:01 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 46 pg[4.13( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:01 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 46 pg[4.17( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:01 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 46 pg[4.18( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:02 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v100: 135 pgs: 1 peering, 93 unknown, 41 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 49 KiB/s rd, 0 B/s wr, 118 op/s
Oct 11 04:28:02 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct 11 04:28:02 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 11 04:28:02 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} v 0) v1
Oct 11 04:28:02 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Oct 11 04:28:02 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Oct 11 04:28:02 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Oct 11 04:28:02 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Oct 11 04:28:02 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Oct 11 04:28:02 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Oct 11 04:28:02 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Oct 11 04:28:02 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Oct 11 04:28:02 compute-0 ceph-mon[74243]: 2.1 scrub starts
Oct 11 04:28:02 compute-0 ceph-mon[74243]: 2.1 scrub ok
Oct 11 04:28:02 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Oct 11 04:28:02 compute-0 ceph-mon[74243]: osdmap e46: 3 total, 3 up, 3 in
Oct 11 04:28:02 compute-0 ceph-mon[74243]: 3.1 scrub starts
Oct 11 04:28:02 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Oct 11 04:28:02 compute-0 ceph-mon[74243]: 3.1 scrub ok
Oct 11 04:28:02 compute-0 ceph-mon[74243]: pgmap v100: 135 pgs: 1 peering, 93 unknown, 41 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 49 KiB/s rd, 0 B/s wr, 118 op/s
Oct 11 04:28:02 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 11 04:28:02 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Oct 11 04:28:02 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 47 pg[7.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=47 pruub=14.449489594s) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active pruub 81.365669250s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:02 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 47 pg[7.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=47 pruub=14.449489594s) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown pruub 81.365669250s@ mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:02 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Oct 11 04:28:02 compute-0 ceph-mgr[74542]: [progress INFO root] update: starting ev 47f7ce2c-dc7b-4d4a-9bc4-b50a51fdcaf5 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Oct 11 04:28:02 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0) v1
Oct 11 04:28:02 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Oct 11 04:28:02 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 3.2 deep-scrub starts
Oct 11 04:28:02 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 3.2 deep-scrub ok
Oct 11 04:28:02 compute-0 romantic_tharp[102834]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:28:02 compute-0 romantic_tharp[102834]: --> relative data size: 1.0
Oct 11 04:28:02 compute-0 romantic_tharp[102834]: --> All data devices are unavailable
Oct 11 04:28:02 compute-0 systemd[1]: libpod-70102bc2ba5e1dad3086dd8521e291c03a4978c9a72237023ac9352010259cb1.scope: Deactivated successfully.
Oct 11 04:28:02 compute-0 podman[102817]: 2025-10-11 04:28:02.97555349 +0000 UTC m=+1.322795319 container died 70102bc2ba5e1dad3086dd8521e291c03a4978c9a72237023ac9352010259cb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 11 04:28:02 compute-0 systemd[1]: libpod-70102bc2ba5e1dad3086dd8521e291c03a4978c9a72237023ac9352010259cb1.scope: Consumed 1.084s CPU time.
Oct 11 04:28:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-fe562329bb4fc13c9bb8334f95997d52c6ac7faa3a3c7f777e9c388a71b39ea6-merged.mount: Deactivated successfully.
Oct 11 04:28:03 compute-0 systemd[75878]: Starting Mark boot as successful...
Oct 11 04:28:03 compute-0 systemd[75878]: Finished Mark boot as successful.
Oct 11 04:28:03 compute-0 podman[102817]: 2025-10-11 04:28:03.041225527 +0000 UTC m=+1.388467316 container remove 70102bc2ba5e1dad3086dd8521e291c03a4978c9a72237023ac9352010259cb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 11 04:28:03 compute-0 systemd[1]: libpod-conmon-70102bc2ba5e1dad3086dd8521e291c03a4978c9a72237023ac9352010259cb1.scope: Deactivated successfully.
Oct 11 04:28:03 compute-0 sudo[102711]: pam_unix(sudo:session): session closed for user root
Oct 11 04:28:03 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 47 pg[6.0( v 38'39 (0'0,38'39] local-lis/les=21/22 n=22 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=47 pruub=12.107559204s) [0] r=0 lpr=47 pi=[21,47)/1 crt=38'39 lcod 34'38 mlcod 34'38 active pruub 84.385650635s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:03 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 47 pg[6.0( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=21/22 n=1 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=47 pruub=12.107559204s) [0] r=0 lpr=47 pi=[21,47)/1 crt=38'39 lcod 34'38 mlcod 0'0 unknown pruub 84.385650635s@ mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:03 compute-0 sudo[102876]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:28:03 compute-0 sudo[102876]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:28:03 compute-0 sudo[102876]: pam_unix(sudo:session): session closed for user root
Oct 11 04:28:03 compute-0 sudo[102901]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:28:03 compute-0 sudo[102901]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:28:03 compute-0 sudo[102901]: pam_unix(sudo:session): session closed for user root
Oct 11 04:28:03 compute-0 sudo[102926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:28:03 compute-0 sudo[102926]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:28:03 compute-0 sudo[102926]: pam_unix(sudo:session): session closed for user root
Oct 11 04:28:03 compute-0 sudo[102951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -- lvm list --format json
Oct 11 04:28:03 compute-0 sudo[102951]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:28:03 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Oct 11 04:28:03 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Oct 11 04:28:03 compute-0 podman[103017]: 2025-10-11 04:28:03.627776452 +0000 UTC m=+0.052045690 container create 48e17f5762fe59953382aea1fcad303b0fe7804810d7b1203ae4a782348e5c88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_shockley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 04:28:03 compute-0 systemd[1]: Started libpod-conmon-48e17f5762fe59953382aea1fcad303b0fe7804810d7b1203ae4a782348e5c88.scope.
Oct 11 04:28:03 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:28:03 compute-0 podman[103017]: 2025-10-11 04:28:03.601989224 +0000 UTC m=+0.026258452 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:28:03 compute-0 podman[103017]: 2025-10-11 04:28:03.714284225 +0000 UTC m=+0.138553453 container init 48e17f5762fe59953382aea1fcad303b0fe7804810d7b1203ae4a782348e5c88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_shockley, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:28:03 compute-0 podman[103017]: 2025-10-11 04:28:03.722721274 +0000 UTC m=+0.146990482 container start 48e17f5762fe59953382aea1fcad303b0fe7804810d7b1203ae4a782348e5c88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_shockley, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 11 04:28:03 compute-0 podman[103017]: 2025-10-11 04:28:03.726014415 +0000 UTC m=+0.150283633 container attach 48e17f5762fe59953382aea1fcad303b0fe7804810d7b1203ae4a782348e5c88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_shockley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3)
Oct 11 04:28:03 compute-0 nifty_shockley[103033]: 167 167
Oct 11 04:28:03 compute-0 systemd[1]: libpod-48e17f5762fe59953382aea1fcad303b0fe7804810d7b1203ae4a782348e5c88.scope: Deactivated successfully.
Oct 11 04:28:03 compute-0 conmon[103033]: conmon 48e17f5762fe59953382 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-48e17f5762fe59953382aea1fcad303b0fe7804810d7b1203ae4a782348e5c88.scope/container/memory.events
Oct 11 04:28:03 compute-0 podman[103017]: 2025-10-11 04:28:03.730445845 +0000 UTC m=+0.154715103 container died 48e17f5762fe59953382aea1fcad303b0fe7804810d7b1203ae4a782348e5c88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_shockley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:28:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-181a27204c7b026ca0e5073eb0fc39387ae9679cffe1d4fd029d34e49c6a2247-merged.mount: Deactivated successfully.
Oct 11 04:28:03 compute-0 podman[103017]: 2025-10-11 04:28:03.770869806 +0000 UTC m=+0.195139054 container remove 48e17f5762fe59953382aea1fcad303b0fe7804810d7b1203ae4a782348e5c88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_shockley, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:28:03 compute-0 systemd[1]: libpod-conmon-48e17f5762fe59953382aea1fcad303b0fe7804810d7b1203ae4a782348e5c88.scope: Deactivated successfully.
Oct 11 04:28:03 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Oct 11 04:28:03 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Oct 11 04:28:03 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Oct 11 04:28:03 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Oct 11 04:28:03 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 48 pg[7.12( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:03 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 48 pg[7.11( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:03 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 48 pg[7.13( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:03 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 48 pg[7.10( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:03 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 48 pg[7.17( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:03 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 48 pg[7.16( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:03 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 48 pg[7.15( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:03 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 48 pg[7.b( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:03 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 48 pg[7.a( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:03 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 48 pg[7.8( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:03 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 48 pg[7.9( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:03 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 48 pg[7.6( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:03 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 48 pg[7.d( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:03 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 48 pg[7.4( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:03 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 48 pg[7.f( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:03 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 48 pg[7.e( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:03 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 48 pg[7.c( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:03 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 48 pg[7.5( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:03 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 48 pg[7.7( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:03 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 48 pg[7.1( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:03 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 48 pg[7.14( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:03 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 48 pg[7.2( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:03 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 48 pg[7.3( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:03 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 48 pg[7.1c( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:03 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 48 pg[7.1d( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:03 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 48 pg[7.1e( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:03 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 48 pg[7.1f( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:03 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 48 pg[7.19( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:03 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 48 pg[7.18( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:03 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 48 pg[7.1b( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:03 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 48 pg[7.1a( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:03 compute-0 ceph-mgr[74542]: [progress INFO root] update: starting ev 9a10f351-a9b0-4a4c-bde4-a80bd0dd6cc0 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Oct 11 04:28:03 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 48 pg[6.a( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=21/22 n=1 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [0] r=0 lpr=47 pi=[21,47)/1 crt=38'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:03 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 48 pg[6.5( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=21/22 n=2 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [0] r=0 lpr=47 pi=[21,47)/1 crt=38'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:03 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 48 pg[6.9( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=21/22 n=1 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [0] r=0 lpr=47 pi=[21,47)/1 crt=38'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:03 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 48 pg[6.4( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=21/22 n=2 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [0] r=0 lpr=47 pi=[21,47)/1 crt=38'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:03 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 48 pg[6.8( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=21/22 n=1 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [0] r=0 lpr=47 pi=[21,47)/1 crt=38'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:03 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 48 pg[6.7( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=21/22 n=1 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [0] r=0 lpr=47 pi=[21,47)/1 crt=38'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:03 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 48 pg[6.b( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=21/22 n=1 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [0] r=0 lpr=47 pi=[21,47)/1 crt=38'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:03 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 48 pg[6.6( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=21/22 n=2 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [0] r=0 lpr=47 pi=[21,47)/1 crt=38'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:03 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 48 pg[6.1( v 38'39 (0'0,38'39] local-lis/les=21/22 n=2 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [0] r=0 lpr=47 pi=[21,47)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:03 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 48 pg[6.3( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=21/22 n=2 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [0] r=0 lpr=47 pi=[21,47)/1 crt=38'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:03 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 48 pg[6.e( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=21/22 n=1 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [0] r=0 lpr=47 pi=[21,47)/1 crt=38'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:03 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 48 pg[6.2( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=21/22 n=2 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [0] r=0 lpr=47 pi=[21,47)/1 crt=38'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:03 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 48 pg[6.f( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=21/22 n=1 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [0] r=0 lpr=47 pi=[21,47)/1 crt=38'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:03 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 48 pg[6.c( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=21/22 n=1 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [0] r=0 lpr=47 pi=[21,47)/1 crt=38'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:03 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 48 pg[6.d( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=21/22 n=1 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [0] r=0 lpr=47 pi=[21,47)/1 crt=38'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:03 compute-0 ceph-mon[74243]: 2.2 scrub starts
Oct 11 04:28:03 compute-0 ceph-mon[74243]: 2.2 scrub ok
Oct 11 04:28:03 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Oct 11 04:28:03 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Oct 11 04:28:03 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Oct 11 04:28:03 compute-0 ceph-mon[74243]: osdmap e47: 3 total, 3 up, 3 in
Oct 11 04:28:03 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Oct 11 04:28:03 compute-0 ceph-mon[74243]: 3.2 deep-scrub starts
Oct 11 04:28:03 compute-0 ceph-mon[74243]: 3.2 deep-scrub ok
Oct 11 04:28:03 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 48 pg[7.11( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:03 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 48 pg[7.17( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:03 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 48 pg[7.10( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:03 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 48 pg[7.16( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:03 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 48 pg[7.12( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:03 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 48 pg[6.5( v 38'39 (0'0,38'39] local-lis/les=47/48 n=2 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [0] r=0 lpr=47 pi=[21,47)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:03 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 48 pg[6.a( v 38'39 (0'0,38'39] local-lis/les=47/48 n=1 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [0] r=0 lpr=47 pi=[21,47)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:03 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 48 pg[6.9( v 38'39 (0'0,38'39] local-lis/les=47/48 n=1 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [0] r=0 lpr=47 pi=[21,47)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:03 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 48 pg[6.7( v 38'39 (0'0,38'39] local-lis/les=47/48 n=1 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [0] r=0 lpr=47 pi=[21,47)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:03 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 48 pg[6.4( v 38'39 (0'0,38'39] local-lis/les=47/48 n=2 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [0] r=0 lpr=47 pi=[21,47)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:03 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 48 pg[6.b( v 38'39 (0'0,38'39] local-lis/les=47/48 n=1 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [0] r=0 lpr=47 pi=[21,47)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:03 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 48 pg[6.8( v 38'39 (0'0,38'39] local-lis/les=47/48 n=1 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [0] r=0 lpr=47 pi=[21,47)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:03 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 48 pg[6.6( v 38'39 (0'0,38'39] local-lis/les=47/48 n=2 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [0] r=0 lpr=47 pi=[21,47)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:03 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 48 pg[6.1( v 38'39 (0'0,38'39] local-lis/les=47/48 n=2 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [0] r=0 lpr=47 pi=[21,47)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:03 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 48 pg[6.3( v 38'39 (0'0,38'39] local-lis/les=47/48 n=2 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [0] r=0 lpr=47 pi=[21,47)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:03 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 48 pg[6.0( v 38'39 (0'0,38'39] local-lis/les=47/48 n=1 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [0] r=0 lpr=47 pi=[21,47)/1 crt=38'39 lcod 34'38 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:03 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 48 pg[6.2( v 38'39 (0'0,38'39] local-lis/les=47/48 n=2 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [0] r=0 lpr=47 pi=[21,47)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:03 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 48 pg[6.c( v 38'39 (0'0,38'39] local-lis/les=47/48 n=1 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [0] r=0 lpr=47 pi=[21,47)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:03 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 48 pg[6.f( v 38'39 (0'0,38'39] local-lis/les=47/48 n=1 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [0] r=0 lpr=47 pi=[21,47)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:03 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 48 pg[6.d( v 38'39 (0'0,38'39] local-lis/les=47/48 n=1 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [0] r=0 lpr=47 pi=[21,47)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:03 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 48 pg[6.e( v 38'39 (0'0,38'39] local-lis/les=47/48 n=1 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [0] r=0 lpr=47 pi=[21,47)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:03 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 48 pg[7.13( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:03 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 48 pg[7.b( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:03 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 48 pg[7.15( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:03 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 48 pg[7.a( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:03 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 48 pg[7.8( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:03 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 48 pg[7.6( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:03 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 48 pg[7.0( empty local-lis/les=47/48 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:03 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 48 pg[7.d( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:03 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 48 pg[7.f( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:03 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 48 pg[7.4( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:03 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 48 pg[7.e( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:03 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 48 pg[7.1( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:03 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 48 pg[7.5( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:03 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 48 pg[7.c( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:03 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 48 pg[7.14( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:03 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 48 pg[7.3( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:03 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 48 pg[7.2( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:03 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 48 pg[7.1d( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:03 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 48 pg[7.7( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:03 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 48 pg[7.1c( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:03 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 48 pg[7.1e( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:03 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 48 pg[7.1b( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:03 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 48 pg[7.19( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:03 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 48 pg[7.1a( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:03 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 48 pg[7.1f( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:03 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 48 pg[7.18( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:03 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 48 pg[7.9( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:03 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0) v1
Oct 11 04:28:03 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Oct 11 04:28:03 compute-0 podman[103057]: 2025-10-11 04:28:03.955419076 +0000 UTC m=+0.057703290 container create e06ff80d8e4dcad86e45c18a9821f2dbd9a28d5f2995fde1dae111be7784a24d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_kepler, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 11 04:28:04 compute-0 systemd[1]: Started libpod-conmon-e06ff80d8e4dcad86e45c18a9821f2dbd9a28d5f2995fde1dae111be7784a24d.scope.
Oct 11 04:28:04 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:28:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33750d704e61840c7764325374ecaaabdc5f12a1cd3f1c7dc6086e4b55ca1515/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:28:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33750d704e61840c7764325374ecaaabdc5f12a1cd3f1c7dc6086e4b55ca1515/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:28:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33750d704e61840c7764325374ecaaabdc5f12a1cd3f1c7dc6086e4b55ca1515/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:28:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33750d704e61840c7764325374ecaaabdc5f12a1cd3f1c7dc6086e4b55ca1515/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:28:04 compute-0 podman[103057]: 2025-10-11 04:28:03.940058266 +0000 UTC m=+0.042342500 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:28:04 compute-0 podman[103057]: 2025-10-11 04:28:04.045093197 +0000 UTC m=+0.147377451 container init e06ff80d8e4dcad86e45c18a9821f2dbd9a28d5f2995fde1dae111be7784a24d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_kepler, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 11 04:28:04 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v103: 181 pgs: 1 peering, 77 unknown, 103 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 49 KiB/s rd, 0 B/s wr, 118 op/s
Oct 11 04:28:04 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct 11 04:28:04 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 11 04:28:04 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct 11 04:28:04 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 11 04:28:04 compute-0 podman[103057]: 2025-10-11 04:28:04.053891145 +0000 UTC m=+0.156175349 container start e06ff80d8e4dcad86e45c18a9821f2dbd9a28d5f2995fde1dae111be7784a24d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_kepler, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:28:04 compute-0 podman[103057]: 2025-10-11 04:28:04.056836438 +0000 UTC m=+0.159120682 container attach e06ff80d8e4dcad86e45c18a9821f2dbd9a28d5f2995fde1dae111be7784a24d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_kepler, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:28:04 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e48 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:28:04 compute-0 keen_kepler[103074]: {
Oct 11 04:28:04 compute-0 keen_kepler[103074]:     "0": [
Oct 11 04:28:04 compute-0 keen_kepler[103074]:         {
Oct 11 04:28:04 compute-0 keen_kepler[103074]:             "devices": [
Oct 11 04:28:04 compute-0 keen_kepler[103074]:                 "/dev/loop3"
Oct 11 04:28:04 compute-0 keen_kepler[103074]:             ],
Oct 11 04:28:04 compute-0 keen_kepler[103074]:             "lv_name": "ceph_lv0",
Oct 11 04:28:04 compute-0 keen_kepler[103074]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:28:04 compute-0 keen_kepler[103074]:             "lv_size": "21470642176",
Oct 11 04:28:04 compute-0 keen_kepler[103074]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29ed28f5-c2da-4c6f-bb64-dc7391248f4a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:28:04 compute-0 keen_kepler[103074]:             "lv_uuid": "SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf",
Oct 11 04:28:04 compute-0 keen_kepler[103074]:             "name": "ceph_lv0",
Oct 11 04:28:04 compute-0 keen_kepler[103074]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:28:04 compute-0 keen_kepler[103074]:             "tags": {
Oct 11 04:28:04 compute-0 keen_kepler[103074]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:28:04 compute-0 keen_kepler[103074]:                 "ceph.block_uuid": "SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf",
Oct 11 04:28:04 compute-0 keen_kepler[103074]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:28:04 compute-0 keen_kepler[103074]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:28:04 compute-0 keen_kepler[103074]:                 "ceph.cluster_name": "ceph",
Oct 11 04:28:04 compute-0 keen_kepler[103074]:                 "ceph.crush_device_class": "",
Oct 11 04:28:04 compute-0 keen_kepler[103074]:                 "ceph.encrypted": "0",
Oct 11 04:28:04 compute-0 keen_kepler[103074]:                 "ceph.osd_fsid": "29ed28f5-c2da-4c6f-bb64-dc7391248f4a",
Oct 11 04:28:04 compute-0 keen_kepler[103074]:                 "ceph.osd_id": "0",
Oct 11 04:28:04 compute-0 keen_kepler[103074]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:28:04 compute-0 keen_kepler[103074]:                 "ceph.type": "block",
Oct 11 04:28:04 compute-0 keen_kepler[103074]:                 "ceph.vdo": "0"
Oct 11 04:28:04 compute-0 keen_kepler[103074]:             },
Oct 11 04:28:04 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 4.1 deep-scrub starts
Oct 11 04:28:04 compute-0 keen_kepler[103074]:             "type": "block",
Oct 11 04:28:04 compute-0 keen_kepler[103074]:             "vg_name": "ceph_vg0"
Oct 11 04:28:04 compute-0 keen_kepler[103074]:         }
Oct 11 04:28:04 compute-0 keen_kepler[103074]:     ],
Oct 11 04:28:04 compute-0 keen_kepler[103074]:     "1": [
Oct 11 04:28:04 compute-0 keen_kepler[103074]:         {
Oct 11 04:28:04 compute-0 keen_kepler[103074]:             "devices": [
Oct 11 04:28:04 compute-0 keen_kepler[103074]:                 "/dev/loop4"
Oct 11 04:28:04 compute-0 keen_kepler[103074]:             ],
Oct 11 04:28:04 compute-0 keen_kepler[103074]:             "lv_name": "ceph_lv1",
Oct 11 04:28:04 compute-0 keen_kepler[103074]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:28:04 compute-0 keen_kepler[103074]:             "lv_size": "21470642176",
Oct 11 04:28:04 compute-0 keen_kepler[103074]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=235849fc-4683-43e5-9b6a-a0d6f8d1cee8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:28:04 compute-0 keen_kepler[103074]:             "lv_uuid": "N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol",
Oct 11 04:28:04 compute-0 keen_kepler[103074]:             "name": "ceph_lv1",
Oct 11 04:28:04 compute-0 keen_kepler[103074]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:28:04 compute-0 keen_kepler[103074]:             "tags": {
Oct 11 04:28:04 compute-0 keen_kepler[103074]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:28:04 compute-0 keen_kepler[103074]:                 "ceph.block_uuid": "N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol",
Oct 11 04:28:04 compute-0 keen_kepler[103074]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:28:04 compute-0 keen_kepler[103074]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:28:04 compute-0 keen_kepler[103074]:                 "ceph.cluster_name": "ceph",
Oct 11 04:28:04 compute-0 keen_kepler[103074]:                 "ceph.crush_device_class": "",
Oct 11 04:28:04 compute-0 keen_kepler[103074]:                 "ceph.encrypted": "0",
Oct 11 04:28:04 compute-0 keen_kepler[103074]:                 "ceph.osd_fsid": "235849fc-4683-43e5-9b6a-a0d6f8d1cee8",
Oct 11 04:28:04 compute-0 keen_kepler[103074]:                 "ceph.osd_id": "1",
Oct 11 04:28:04 compute-0 keen_kepler[103074]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:28:04 compute-0 keen_kepler[103074]:                 "ceph.type": "block",
Oct 11 04:28:04 compute-0 keen_kepler[103074]:                 "ceph.vdo": "0"
Oct 11 04:28:04 compute-0 keen_kepler[103074]:             },
Oct 11 04:28:04 compute-0 keen_kepler[103074]:             "type": "block",
Oct 11 04:28:04 compute-0 keen_kepler[103074]:             "vg_name": "ceph_vg1"
Oct 11 04:28:04 compute-0 keen_kepler[103074]:         }
Oct 11 04:28:04 compute-0 keen_kepler[103074]:     ],
Oct 11 04:28:04 compute-0 keen_kepler[103074]:     "2": [
Oct 11 04:28:04 compute-0 keen_kepler[103074]:         {
Oct 11 04:28:04 compute-0 keen_kepler[103074]:             "devices": [
Oct 11 04:28:04 compute-0 keen_kepler[103074]:                 "/dev/loop5"
Oct 11 04:28:04 compute-0 keen_kepler[103074]:             ],
Oct 11 04:28:04 compute-0 keen_kepler[103074]:             "lv_name": "ceph_lv2",
Oct 11 04:28:04 compute-0 keen_kepler[103074]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:28:04 compute-0 keen_kepler[103074]:             "lv_size": "21470642176",
Oct 11 04:28:04 compute-0 keen_kepler[103074]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=30486846-2cc7-4787-8e36-0fb42ef328c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:28:04 compute-0 keen_kepler[103074]:             "lv_uuid": "HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a",
Oct 11 04:28:04 compute-0 keen_kepler[103074]:             "name": "ceph_lv2",
Oct 11 04:28:04 compute-0 keen_kepler[103074]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:28:04 compute-0 keen_kepler[103074]:             "tags": {
Oct 11 04:28:04 compute-0 keen_kepler[103074]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:28:04 compute-0 keen_kepler[103074]:                 "ceph.block_uuid": "HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a",
Oct 11 04:28:04 compute-0 keen_kepler[103074]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:28:04 compute-0 keen_kepler[103074]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:28:04 compute-0 keen_kepler[103074]:                 "ceph.cluster_name": "ceph",
Oct 11 04:28:04 compute-0 keen_kepler[103074]:                 "ceph.crush_device_class": "",
Oct 11 04:28:04 compute-0 keen_kepler[103074]:                 "ceph.encrypted": "0",
Oct 11 04:28:04 compute-0 keen_kepler[103074]:                 "ceph.osd_fsid": "30486846-2cc7-4787-8e36-0fb42ef328c5",
Oct 11 04:28:04 compute-0 keen_kepler[103074]:                 "ceph.osd_id": "2",
Oct 11 04:28:04 compute-0 keen_kepler[103074]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:28:04 compute-0 keen_kepler[103074]:                 "ceph.type": "block",
Oct 11 04:28:04 compute-0 keen_kepler[103074]:                 "ceph.vdo": "0"
Oct 11 04:28:04 compute-0 keen_kepler[103074]:             },
Oct 11 04:28:04 compute-0 keen_kepler[103074]:             "type": "block",
Oct 11 04:28:04 compute-0 keen_kepler[103074]:             "vg_name": "ceph_vg2"
Oct 11 04:28:04 compute-0 keen_kepler[103074]:         }
Oct 11 04:28:04 compute-0 keen_kepler[103074]:     ]
Oct 11 04:28:04 compute-0 keen_kepler[103074]: }
Oct 11 04:28:04 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 4.1 deep-scrub ok
Oct 11 04:28:04 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Oct 11 04:28:04 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Oct 11 04:28:04 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Oct 11 04:28:04 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Oct 11 04:28:04 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Oct 11 04:28:04 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Oct 11 04:28:04 compute-0 systemd[1]: libpod-e06ff80d8e4dcad86e45c18a9821f2dbd9a28d5f2995fde1dae111be7784a24d.scope: Deactivated successfully.
Oct 11 04:28:04 compute-0 podman[103057]: 2025-10-11 04:28:04.842680279 +0000 UTC m=+0.944964493 container died e06ff80d8e4dcad86e45c18a9821f2dbd9a28d5f2995fde1dae111be7784a24d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_kepler, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:28:04 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Oct 11 04:28:04 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Oct 11 04:28:04 compute-0 ceph-mgr[74542]: [progress INFO root] update: starting ev 0dfa6eaa-dc06-4852-abee-2d35df6ff188 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Oct 11 04:28:04 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0) v1
Oct 11 04:28:04 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Oct 11 04:28:04 compute-0 ceph-mon[74243]: 2.3 scrub starts
Oct 11 04:28:04 compute-0 ceph-mon[74243]: 2.3 scrub ok
Oct 11 04:28:04 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Oct 11 04:28:04 compute-0 ceph-mon[74243]: osdmap e48: 3 total, 3 up, 3 in
Oct 11 04:28:04 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Oct 11 04:28:04 compute-0 ceph-mon[74243]: pgmap v103: 181 pgs: 1 peering, 77 unknown, 103 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 49 KiB/s rd, 0 B/s wr, 118 op/s
Oct 11 04:28:04 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 11 04:28:04 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 11 04:28:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-33750d704e61840c7764325374ecaaabdc5f12a1cd3f1c7dc6086e4b55ca1515-merged.mount: Deactivated successfully.
Oct 11 04:28:04 compute-0 podman[103057]: 2025-10-11 04:28:04.89923575 +0000 UTC m=+1.001519964 container remove e06ff80d8e4dcad86e45c18a9821f2dbd9a28d5f2995fde1dae111be7784a24d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_kepler, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 11 04:28:04 compute-0 systemd[1]: libpod-conmon-e06ff80d8e4dcad86e45c18a9821f2dbd9a28d5f2995fde1dae111be7784a24d.scope: Deactivated successfully.
Oct 11 04:28:04 compute-0 sudo[102951]: pam_unix(sudo:session): session closed for user root
Oct 11 04:28:04 compute-0 sudo[103097]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:28:04 compute-0 sudo[103097]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:28:04 compute-0 sudo[103097]: pam_unix(sudo:session): session closed for user root
Oct 11 04:28:05 compute-0 sudo[103122]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:28:05 compute-0 sudo[103122]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:28:05 compute-0 sudo[103122]: pam_unix(sudo:session): session closed for user root
Oct 11 04:28:05 compute-0 sudo[103147]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:28:05 compute-0 sudo[103147]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:28:05 compute-0 sudo[103147]: pam_unix(sudo:session): session closed for user root
Oct 11 04:28:05 compute-0 sudo[103172]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -- raw list --format json
Oct 11 04:28:05 compute-0 sudo[103172]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 49 pg[8.0( v 33'4 (0'0,33'4] local-lis/les=32/33 n=4 ec=32/32 lis/c=32/32 les/c/f=33/33/0 sis=49 pruub=15.252254486s) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 lcod 33'3 mlcod 33'3 active pruub 84.776458740s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 49 pg[9.0( v 41'581 (0'0,41'581] local-lis/les=34/35 n=209 ec=34/34 lis/c=34/34 les/c/f=35/35/0 sis=49 pruub=9.249321938s) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'581 lcod 41'580 mlcod 41'580 active pruub 78.773735046s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 49 pg[8.0( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=32/32 lis/c=32/32 les/c/f=33/33/0 sis=49 pruub=15.252254486s) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 lcod 33'3 mlcod 0'0 unknown pruub 84.776458740s@ mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 49 pg[9.0( v 41'581 lc 0'0 (0'0,41'581] local-lis/les=34/35 n=6 ec=34/34 lis/c=34/34 les/c/f=35/35/0 sis=49 pruub=9.249321938s) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'581 lcod 41'580 mlcod 0'0 unknown pruub 78.773735046s@ mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:05 compute-0 podman[103237]: 2025-10-11 04:28:05.559433299 +0000 UTC m=+0.065519873 container create fbecd7275215a01915bc79dc2f892f4343ec81dec637539192716b01c0a1d089 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_feynman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 11 04:28:05 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Oct 11 04:28:05 compute-0 systemd[1]: Started libpod-conmon-fbecd7275215a01915bc79dc2f892f4343ec81dec637539192716b01c0a1d089.scope.
Oct 11 04:28:05 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Oct 11 04:28:05 compute-0 podman[103237]: 2025-10-11 04:28:05.529441656 +0000 UTC m=+0.035528290 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:28:05 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:28:05 compute-0 podman[103237]: 2025-10-11 04:28:05.644828514 +0000 UTC m=+0.150915088 container init fbecd7275215a01915bc79dc2f892f4343ec81dec637539192716b01c0a1d089 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_feynman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:28:05 compute-0 podman[103237]: 2025-10-11 04:28:05.653715014 +0000 UTC m=+0.159801558 container start fbecd7275215a01915bc79dc2f892f4343ec81dec637539192716b01c0a1d089 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 11 04:28:05 compute-0 podman[103237]: 2025-10-11 04:28:05.65758066 +0000 UTC m=+0.163667244 container attach fbecd7275215a01915bc79dc2f892f4343ec81dec637539192716b01c0a1d089 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_feynman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 11 04:28:05 compute-0 serene_feynman[103253]: 167 167
Oct 11 04:28:05 compute-0 systemd[1]: libpod-fbecd7275215a01915bc79dc2f892f4343ec81dec637539192716b01c0a1d089.scope: Deactivated successfully.
Oct 11 04:28:05 compute-0 podman[103237]: 2025-10-11 04:28:05.662459931 +0000 UTC m=+0.168546475 container died fbecd7275215a01915bc79dc2f892f4343ec81dec637539192716b01c0a1d089 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_feynman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:28:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-40b2a0d7b38aff6167f7e0b04471c52d0314fc0b2fff0916266bd4693f6a4e1c-merged.mount: Deactivated successfully.
Oct 11 04:28:05 compute-0 podman[103237]: 2025-10-11 04:28:05.702849531 +0000 UTC m=+0.208936075 container remove fbecd7275215a01915bc79dc2f892f4343ec81dec637539192716b01c0a1d089 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_feynman, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:28:05 compute-0 systemd[1]: libpod-conmon-fbecd7275215a01915bc79dc2f892f4343ec81dec637539192716b01c0a1d089.scope: Deactivated successfully.
Oct 11 04:28:05 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Oct 11 04:28:05 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Oct 11 04:28:05 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Oct 11 04:28:05 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Oct 11 04:28:05 compute-0 ceph-mgr[74542]: [progress INFO root] update: starting ev bd80b648-1cbe-4a4c-a0de-a02aa7895963 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Oct 11 04:28:05 compute-0 ceph-mgr[74542]: [progress INFO root] complete: finished ev 838b0b51-2289-4e9c-a222-663af58be503 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Oct 11 04:28:05 compute-0 ceph-mgr[74542]: [progress INFO root] Completed event 838b0b51-2289-4e9c-a222-663af58be503 (PG autoscaler increasing pool 2 PGs from 1 to 32) in 9 seconds
Oct 11 04:28:05 compute-0 ceph-mgr[74542]: [progress INFO root] complete: finished ev 1f9bdf04-28df-4ef8-9252-02cca2858436 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Oct 11 04:28:05 compute-0 ceph-mgr[74542]: [progress INFO root] Completed event 1f9bdf04-28df-4ef8-9252-02cca2858436 (PG autoscaler increasing pool 3 PGs from 1 to 32) in 8 seconds
Oct 11 04:28:05 compute-0 ceph-mgr[74542]: [progress INFO root] complete: finished ev 377194cd-55b9-4216-b805-bf664c3641e5 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Oct 11 04:28:05 compute-0 ceph-mgr[74542]: [progress INFO root] Completed event 377194cd-55b9-4216-b805-bf664c3641e5 (PG autoscaler increasing pool 4 PGs from 1 to 32) in 7 seconds
Oct 11 04:28:05 compute-0 ceph-mgr[74542]: [progress INFO root] complete: finished ev b41b7061-0237-401e-a7b8-362afd30d781 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Oct 11 04:28:05 compute-0 ceph-mgr[74542]: [progress INFO root] Completed event b41b7061-0237-401e-a7b8-362afd30d781 (PG autoscaler increasing pool 5 PGs from 1 to 32) in 6 seconds
Oct 11 04:28:05 compute-0 ceph-mgr[74542]: [progress INFO root] complete: finished ev 557ae095-44ab-4700-8118-ccc431805fa3 (PG autoscaler increasing pool 6 PGs from 1 to 16)
Oct 11 04:28:05 compute-0 ceph-mgr[74542]: [progress INFO root] Completed event 557ae095-44ab-4700-8118-ccc431805fa3 (PG autoscaler increasing pool 6 PGs from 1 to 16) in 5 seconds
Oct 11 04:28:05 compute-0 ceph-mgr[74542]: [progress INFO root] complete: finished ev 20c5ff25-907e-4245-b8f4-d18d53a47d97 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Oct 11 04:28:05 compute-0 ceph-mgr[74542]: [progress INFO root] Completed event 20c5ff25-907e-4245-b8f4-d18d53a47d97 (PG autoscaler increasing pool 7 PGs from 1 to 32) in 4 seconds
Oct 11 04:28:05 compute-0 ceph-mgr[74542]: [progress INFO root] complete: finished ev 47f7ce2c-dc7b-4d4a-9bc4-b50a51fdcaf5 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Oct 11 04:28:05 compute-0 ceph-mgr[74542]: [progress INFO root] Completed event 47f7ce2c-dc7b-4d4a-9bc4-b50a51fdcaf5 (PG autoscaler increasing pool 8 PGs from 1 to 32) in 3 seconds
Oct 11 04:28:05 compute-0 ceph-mgr[74542]: [progress INFO root] complete: finished ev 9a10f351-a9b0-4a4c-bde4-a80bd0dd6cc0 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Oct 11 04:28:05 compute-0 ceph-mgr[74542]: [progress INFO root] Completed event 9a10f351-a9b0-4a4c-bde4-a80bd0dd6cc0 (PG autoscaler increasing pool 9 PGs from 1 to 32) in 2 seconds
Oct 11 04:28:05 compute-0 ceph-mgr[74542]: [progress INFO root] complete: finished ev 0dfa6eaa-dc06-4852-abee-2d35df6ff188 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Oct 11 04:28:05 compute-0 ceph-mgr[74542]: [progress INFO root] Completed event 0dfa6eaa-dc06-4852-abee-2d35df6ff188 (PG autoscaler increasing pool 10 PGs from 1 to 32) in 1 seconds
Oct 11 04:28:05 compute-0 ceph-mgr[74542]: [progress INFO root] complete: finished ev bd80b648-1cbe-4a4c-a0de-a02aa7895963 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Oct 11 04:28:05 compute-0 ceph-mgr[74542]: [progress INFO root] Completed event bd80b648-1cbe-4a4c-a0de-a02aa7895963 (PG autoscaler increasing pool 11 PGs from 1 to 32) in 0 seconds
Oct 11 04:28:05 compute-0 ceph-mon[74243]: 4.1 deep-scrub starts
Oct 11 04:28:05 compute-0 ceph-mon[74243]: 4.1 deep-scrub ok
Oct 11 04:28:05 compute-0 ceph-mon[74243]: 3.3 scrub starts
Oct 11 04:28:05 compute-0 ceph-mon[74243]: 3.3 scrub ok
Oct 11 04:28:05 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Oct 11 04:28:05 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Oct 11 04:28:05 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Oct 11 04:28:05 compute-0 ceph-mon[74243]: osdmap e49: 3 total, 3 up, 3 in
Oct 11 04:28:05 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Oct 11 04:28:05 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Oct 11 04:28:05 compute-0 ceph-mon[74243]: osdmap e50: 3 total, 3 up, 3 in
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[8.14( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[9.15( v 41'581 lc 0'0 (0'0,41'581] local-lis/les=34/35 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'581 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[8.15( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[9.14( v 41'581 lc 0'0 (0'0,41'581] local-lis/les=34/35 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'581 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[8.16( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[9.17( v 41'581 lc 0'0 (0'0,41'581] local-lis/les=34/35 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'581 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[8.17( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[9.16( v 41'581 lc 0'0 (0'0,41'581] local-lis/les=34/35 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'581 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[8.10( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[8.11( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[9.10( v 41'581 lc 0'0 (0'0,41'581] local-lis/les=34/35 n=7 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'581 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[8.12( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[9.13( v 41'581 lc 0'0 (0'0,41'581] local-lis/les=34/35 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'581 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[8.13( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[9.12( v 41'581 lc 0'0 (0'0,41'581] local-lis/les=34/35 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'581 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[9.d( v 41'581 lc 0'0 (0'0,41'581] local-lis/les=34/35 n=7 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'581 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[8.c( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[9.c( v 41'581 lc 0'0 (0'0,41'581] local-lis/les=34/35 n=7 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'581 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[8.e( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[9.f( v 41'581 lc 0'0 (0'0,41'581] local-lis/les=34/35 n=7 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'581 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[8.d( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[8.8( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[8.a( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[9.11( v 41'581 lc 0'0 (0'0,41'581] local-lis/les=34/35 n=7 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'581 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[9.b( v 41'581 lc 0'0 (0'0,41'581] local-lis/les=34/35 n=7 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'581 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[8.3( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=1 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[9.2( v 41'581 lc 0'0 (0'0,41'581] local-lis/les=34/35 n=7 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'581 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[8.1( v 33'4 (0'0,33'4] local-lis/les=32/33 n=1 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[9.9( v 41'581 lc 0'0 (0'0,41'581] local-lis/les=34/35 n=7 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'581 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[9.1( v 41'581 lc 0'0 (0'0,41'581] local-lis/les=34/35 n=7 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'581 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[8.f( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[9.e( v 41'581 lc 0'0 (0'0,41'581] local-lis/les=34/35 n=7 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'581 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[8.b( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[9.a( v 41'581 lc 0'0 (0'0,41'581] local-lis/les=34/35 n=7 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'581 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[9.8( v 41'581 lc 0'0 (0'0,41'581] local-lis/les=34/35 n=7 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'581 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[8.2( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=1 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[9.3( v 41'581 lc 0'0 (0'0,41'581] local-lis/les=34/35 n=7 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'581 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[8.7( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[9.6( v 41'581 lc 0'0 (0'0,41'581] local-lis/les=34/35 n=7 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'581 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[8.9( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[9.7( v 41'581 lc 0'0 (0'0,41'581] local-lis/les=34/35 n=7 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'581 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[8.6( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[8.5( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[9.4( v 41'581 lc 0'0 (0'0,41'581] local-lis/les=34/35 n=7 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'581 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[8.4( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=1 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[9.5( v 41'581 lc 0'0 (0'0,41'581] local-lis/les=34/35 n=7 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'581 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[8.1b( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[9.1a( v 41'581 lc 0'0 (0'0,41'581] local-lis/les=34/35 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'581 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[8.1a( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[9.1b( v 41'581 lc 0'0 (0'0,41'581] local-lis/les=34/35 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'581 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[8.19( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[9.18( v 41'581 lc 0'0 (0'0,41'581] local-lis/les=34/35 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'581 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[8.18( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[9.19( v 41'581 lc 0'0 (0'0,41'581] local-lis/les=34/35 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'581 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[9.1e( v 41'581 lc 0'0 (0'0,41'581] local-lis/les=34/35 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'581 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[8.1f( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[9.1f( v 41'581 lc 0'0 (0'0,41'581] local-lis/les=34/35 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'581 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[8.1e( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[8.1d( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[9.1c( v 41'581 lc 0'0 (0'0,41'581] local-lis/les=34/35 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'581 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[9.1d( v 41'581 lc 0'0 (0'0,41'581] local-lis/les=34/35 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'581 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[8.1c( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:05 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[9.14( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'581 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[8.15( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'581 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'581 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'581 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[8.16( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[8.10( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[9.10( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'581 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[8.11( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'581 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[8.14( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[8.13( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[8.17( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[9.12( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'581 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[8.c( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'581 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[8.e( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'581 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[8.12( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:05 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[9.d( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'581 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[8.8( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[8.d( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[8.a( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[9.b( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'581 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[9.11( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'581 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[8.3( v 33'4 (0'0,33'4] local-lis/les=49/50 n=1 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[9.2( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'581 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[9.0( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=34/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'581 lcod 41'580 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[8.1( v 33'4 (0'0,33'4] local-lis/les=49/50 n=1 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[9.9( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'581 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'581 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[8.f( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[8.b( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[9.a( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'581 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'581 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[8.2( v 33'4 (0'0,33'4] local-lis/les=49/50 n=1 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[8.0( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=32/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 lcod 33'3 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[9.3( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'581 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[8.9( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[9.1( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'581 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[8.6( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'581 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'581 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[9.4( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'581 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[8.4( v 33'4 (0'0,33'4] local-lis/les=49/50 n=1 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[9.5( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'581 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[8.1b( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[8.1a( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[9.1a( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'581 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[8.5( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[8.7( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[8.19( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'581 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'581 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'581 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'581 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[8.1f( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[9.1b( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'581 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[8.1e( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'581 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[8.1c( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[8.1d( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[8.18( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:05 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 50 pg[9.1d( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'581 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:05 compute-0 podman[103276]: 2025-10-11 04:28:05.93417933 +0000 UTC m=+0.056447619 container create d059907db71b1cfa5648819103fb9804bcf65b689ee97473f83c90d5dfdc0af7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_proskuriakova, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 11 04:28:05 compute-0 systemd[1]: Started libpod-conmon-d059907db71b1cfa5648819103fb9804bcf65b689ee97473f83c90d5dfdc0af7.scope.
Oct 11 04:28:06 compute-0 podman[103276]: 2025-10-11 04:28:05.909589441 +0000 UTC m=+0.031857780 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:28:06 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:28:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b6f5125fb47907e6e8588bb779ea1218e85a1ba5e60c82c93fb35bf17ad5863/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:28:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b6f5125fb47907e6e8588bb779ea1218e85a1ba5e60c82c93fb35bf17ad5863/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:28:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b6f5125fb47907e6e8588bb779ea1218e85a1ba5e60c82c93fb35bf17ad5863/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:28:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b6f5125fb47907e6e8588bb779ea1218e85a1ba5e60c82c93fb35bf17ad5863/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:28:06 compute-0 podman[103276]: 2025-10-11 04:28:06.036915934 +0000 UTC m=+0.159184223 container init d059907db71b1cfa5648819103fb9804bcf65b689ee97473f83c90d5dfdc0af7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_proskuriakova, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:28:06 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v106: 243 pgs: 1 peering, 139 unknown, 103 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:28:06 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct 11 04:28:06 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 11 04:28:06 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct 11 04:28:06 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 11 04:28:06 compute-0 podman[103276]: 2025-10-11 04:28:06.048919681 +0000 UTC m=+0.171188010 container start d059907db71b1cfa5648819103fb9804bcf65b689ee97473f83c90d5dfdc0af7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_proskuriakova, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:28:06 compute-0 podman[103276]: 2025-10-11 04:28:06.056456458 +0000 UTC m=+0.178724807 container attach d059907db71b1cfa5648819103fb9804bcf65b689ee97473f83c90d5dfdc0af7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:28:06 compute-0 ceph-mgr[74542]: [progress INFO root] Writing back 15 completed events
Oct 11 04:28:06 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct 11 04:28:06 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:28:06 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 4.2 deep-scrub starts
Oct 11 04:28:06 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 4.2 deep-scrub ok
Oct 11 04:28:06 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Oct 11 04:28:06 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 3.5 deep-scrub starts
Oct 11 04:28:06 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Oct 11 04:28:06 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Oct 11 04:28:06 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Oct 11 04:28:06 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Oct 11 04:28:06 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 3.5 deep-scrub ok
Oct 11 04:28:06 compute-0 ceph-mon[74243]: 2.4 scrub starts
Oct 11 04:28:06 compute-0 ceph-mon[74243]: 2.4 scrub ok
Oct 11 04:28:06 compute-0 ceph-mon[74243]: 3.4 scrub starts
Oct 11 04:28:06 compute-0 ceph-mon[74243]: 3.4 scrub ok
Oct 11 04:28:06 compute-0 ceph-mon[74243]: pgmap v106: 243 pgs: 1 peering, 139 unknown, 103 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:28:06 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 11 04:28:06 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 11 04:28:06 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:28:06 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 51 pg[10.0( v 37'16 (0'0,37'16] local-lis/les=36/37 n=8 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=51 pruub=9.721211433s) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 lcod 37'15 mlcod 37'15 active pruub 75.400329590s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:06 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 51 pg[10.0( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=51 pruub=9.721211433s) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 lcod 37'15 mlcod 0'0 unknown pruub 75.400329590s@ mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:06 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 51 pg[11.0( empty local-lis/les=38/39 n=0 ec=38/38 lis/c=38/38 les/c/f=39/39/0 sis=51 pruub=11.755093575s) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active pruub 82.836608887s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:06 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 51 pg[11.0( empty local-lis/les=38/39 n=0 ec=38/38 lis/c=38/38 les/c/f=39/39/0 sis=51 pruub=11.755093575s) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown pruub 82.836608887s@ mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:07 compute-0 cool_proskuriakova[103293]: {
Oct 11 04:28:07 compute-0 cool_proskuriakova[103293]:     "235849fc-4683-43e5-9b6a-a0d6f8d1cee8": {
Oct 11 04:28:07 compute-0 cool_proskuriakova[103293]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:28:07 compute-0 cool_proskuriakova[103293]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:28:07 compute-0 cool_proskuriakova[103293]:         "osd_id": 1,
Oct 11 04:28:07 compute-0 cool_proskuriakova[103293]:         "osd_uuid": "235849fc-4683-43e5-9b6a-a0d6f8d1cee8",
Oct 11 04:28:07 compute-0 cool_proskuriakova[103293]:         "type": "bluestore"
Oct 11 04:28:07 compute-0 cool_proskuriakova[103293]:     },
Oct 11 04:28:07 compute-0 cool_proskuriakova[103293]:     "29ed28f5-c2da-4c6f-bb64-dc7391248f4a": {
Oct 11 04:28:07 compute-0 cool_proskuriakova[103293]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:28:07 compute-0 cool_proskuriakova[103293]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:28:07 compute-0 cool_proskuriakova[103293]:         "osd_id": 0,
Oct 11 04:28:07 compute-0 cool_proskuriakova[103293]:         "osd_uuid": "29ed28f5-c2da-4c6f-bb64-dc7391248f4a",
Oct 11 04:28:07 compute-0 cool_proskuriakova[103293]:         "type": "bluestore"
Oct 11 04:28:07 compute-0 cool_proskuriakova[103293]:     },
Oct 11 04:28:07 compute-0 cool_proskuriakova[103293]:     "30486846-2cc7-4787-8e36-0fb42ef328c5": {
Oct 11 04:28:07 compute-0 cool_proskuriakova[103293]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:28:07 compute-0 cool_proskuriakova[103293]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:28:07 compute-0 cool_proskuriakova[103293]:         "osd_id": 2,
Oct 11 04:28:07 compute-0 cool_proskuriakova[103293]:         "osd_uuid": "30486846-2cc7-4787-8e36-0fb42ef328c5",
Oct 11 04:28:07 compute-0 cool_proskuriakova[103293]:         "type": "bluestore"
Oct 11 04:28:07 compute-0 cool_proskuriakova[103293]:     }
Oct 11 04:28:07 compute-0 cool_proskuriakova[103293]: }
Oct 11 04:28:07 compute-0 systemd[1]: libpod-d059907db71b1cfa5648819103fb9804bcf65b689ee97473f83c90d5dfdc0af7.scope: Deactivated successfully.
Oct 11 04:28:07 compute-0 systemd[1]: libpod-d059907db71b1cfa5648819103fb9804bcf65b689ee97473f83c90d5dfdc0af7.scope: Consumed 1.097s CPU time.
Oct 11 04:28:07 compute-0 podman[103276]: 2025-10-11 04:28:07.143485177 +0000 UTC m=+1.265753466 container died d059907db71b1cfa5648819103fb9804bcf65b689ee97473f83c90d5dfdc0af7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_proskuriakova, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 11 04:28:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-1b6f5125fb47907e6e8588bb779ea1218e85a1ba5e60c82c93fb35bf17ad5863-merged.mount: Deactivated successfully.
Oct 11 04:28:07 compute-0 podman[103276]: 2025-10-11 04:28:07.209950403 +0000 UTC m=+1.332218692 container remove d059907db71b1cfa5648819103fb9804bcf65b689ee97473f83c90d5dfdc0af7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_proskuriakova, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:28:07 compute-0 systemd[1]: libpod-conmon-d059907db71b1cfa5648819103fb9804bcf65b689ee97473f83c90d5dfdc0af7.scope: Deactivated successfully.
Oct 11 04:28:07 compute-0 sudo[103172]: pam_unix(sudo:session): session closed for user root
Oct 11 04:28:07 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:28:07 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:28:07 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:28:07 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:28:07 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev 0434d7f4-f5f8-4e6d-9ec5-20726d6bc4e3 does not exist
Oct 11 04:28:07 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev 6826ed22-b453-4917-9f95-7822c3be5e22 does not exist
Oct 11 04:28:07 compute-0 sudo[103337]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:28:07 compute-0 sudo[103337]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:28:07 compute-0 sudo[103337]: pam_unix(sudo:session): session closed for user root
Oct 11 04:28:07 compute-0 sudo[103362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 04:28:07 compute-0 sudo[103362]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:28:07 compute-0 sudo[103362]: pam_unix(sudo:session): session closed for user root
Oct 11 04:28:07 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Oct 11 04:28:07 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Oct 11 04:28:07 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Oct 11 04:28:07 compute-0 ceph-mon[74243]: 4.2 deep-scrub starts
Oct 11 04:28:07 compute-0 ceph-mon[74243]: 4.2 deep-scrub ok
Oct 11 04:28:07 compute-0 ceph-mon[74243]: 3.5 deep-scrub starts
Oct 11 04:28:07 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Oct 11 04:28:07 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Oct 11 04:28:07 compute-0 ceph-mon[74243]: osdmap e51: 3 total, 3 up, 3 in
Oct 11 04:28:07 compute-0 ceph-mon[74243]: 3.5 deep-scrub ok
Oct 11 04:28:07 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:28:07 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:28:07 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Oct 11 04:28:07 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Oct 11 04:28:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 52 pg[10.12( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=0 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 52 pg[10.11( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=0 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 52 pg[10.1f( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=0 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 52 pg[10.10( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=0 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 52 pg[10.1d( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=0 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 52 pg[10.1e( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=0 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 52 pg[10.1c( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=0 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 52 pg[10.1a( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=0 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 52 pg[10.19( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=0 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 52 pg[10.1b( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=0 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 52 pg[10.18( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=0 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 52 pg[10.7( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=1 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 52 pg[10.6( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=1 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 52 pg[10.4( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=1 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 52 pg[10.3( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=1 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 52 pg[10.8( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=1 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 52 pg[10.f( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=0 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 52 pg[10.9( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=0 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 52 pg[10.a( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=0 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 52 pg[10.b( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=0 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 52 pg[10.c( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=0 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 52 pg[10.5( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=1 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 52 pg[10.d( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=0 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 52 pg[10.e( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=0 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 52 pg[10.1( v 37'16 (0'0,37'16] local-lis/les=36/37 n=1 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 52 pg[10.2( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=1 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 52 pg[10.13( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=0 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 52 pg[10.14( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=0 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 52 pg[10.15( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=0 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 52 pg[10.16( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=0 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 52 pg[10.17( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=0 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:07 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 52 pg[11.17( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:07 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 52 pg[11.16( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:07 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 52 pg[11.15( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:07 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 52 pg[11.14( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:07 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 52 pg[11.13( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:07 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 52 pg[11.12( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:07 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 52 pg[11.11( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:07 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 52 pg[11.10( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:07 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 52 pg[11.f( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:07 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 52 pg[11.e( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:07 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 52 pg[11.d( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:07 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 52 pg[11.b( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:07 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 52 pg[11.9( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:07 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 52 pg[11.2( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:07 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 52 pg[11.3( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:07 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 52 pg[11.c( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:07 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 52 pg[11.8( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:07 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 52 pg[11.a( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:07 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 52 pg[11.1( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:07 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 52 pg[11.4( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:07 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 52 pg[11.5( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:07 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 52 pg[11.6( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:07 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 52 pg[11.7( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:07 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 52 pg[11.18( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:07 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 52 pg[11.19( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:07 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 52 pg[11.1a( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:07 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 52 pg[11.1b( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:07 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 52 pg[11.1c( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:07 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 52 pg[11.1d( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:07 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 52 pg[11.1e( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:07 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 52 pg[11.1f( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 52 pg[10.12( v 37'16 (0'0,37'16] local-lis/les=51/52 n=0 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 52 pg[10.11( v 37'16 (0'0,37'16] local-lis/les=51/52 n=0 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:07 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 52 pg[11.17( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 52 pg[10.1f( v 37'16 (0'0,37'16] local-lis/les=51/52 n=0 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 52 pg[10.1c( v 37'16 (0'0,37'16] local-lis/les=51/52 n=0 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 52 pg[10.1e( v 37'16 (0'0,37'16] local-lis/les=51/52 n=0 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 52 pg[10.1a( v 37'16 (0'0,37'16] local-lis/les=51/52 n=0 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:07 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 52 pg[11.16( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:07 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 52 pg[11.15( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:07 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 52 pg[11.14( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:07 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 52 pg[11.13( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:07 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 52 pg[11.12( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:07 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 52 pg[11.11( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:07 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 52 pg[11.10( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:07 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 52 pg[11.e( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:07 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 52 pg[11.d( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:07 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 52 pg[11.b( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:07 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 52 pg[11.9( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:07 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 52 pg[11.0( empty local-lis/les=51/52 n=0 ec=38/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:07 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 52 pg[11.3( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:07 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 52 pg[11.c( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:07 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 52 pg[11.2( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:07 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 52 pg[11.8( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:07 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 52 pg[11.1( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:07 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 52 pg[11.a( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:07 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 52 pg[11.4( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:07 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 52 pg[11.5( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:07 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 52 pg[11.6( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:07 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 52 pg[11.18( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:07 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 52 pg[11.7( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:07 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 52 pg[11.1b( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:07 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 52 pg[11.19( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:07 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 52 pg[11.1a( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:07 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 52 pg[11.f( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:07 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 52 pg[11.1c( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:07 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 52 pg[11.1d( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:07 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 52 pg[11.1e( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:07 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 52 pg[11.1f( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 52 pg[10.19( v 37'16 (0'0,37'16] local-lis/les=51/52 n=0 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 52 pg[10.1d( v 37'16 (0'0,37'16] local-lis/les=51/52 n=0 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 52 pg[10.18( v 37'16 (0'0,37'16] local-lis/les=51/52 n=0 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 52 pg[10.1b( v 37'16 (0'0,37'16] local-lis/les=51/52 n=0 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 52 pg[10.10( v 37'16 (0'0,37'16] local-lis/les=51/52 n=0 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 52 pg[10.4( v 37'16 (0'0,37'16] local-lis/les=51/52 n=1 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 52 pg[10.7( v 37'16 (0'0,37'16] local-lis/les=51/52 n=1 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 52 pg[10.6( v 37'16 (0'0,37'16] local-lis/les=51/52 n=1 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 52 pg[10.3( v 37'16 (0'0,37'16] local-lis/les=51/52 n=1 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 52 pg[10.8( v 37'16 (0'0,37'16] local-lis/les=51/52 n=1 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 52 pg[10.0( v 37'16 (0'0,37'16] local-lis/les=51/52 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 lcod 37'15 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 52 pg[10.f( v 37'16 (0'0,37'16] local-lis/les=51/52 n=0 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 52 pg[10.c( v 37'16 (0'0,37'16] local-lis/les=51/52 n=0 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 52 pg[10.a( v 37'16 (0'0,37'16] local-lis/les=51/52 n=0 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 52 pg[10.b( v 37'16 (0'0,37'16] local-lis/les=51/52 n=0 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 52 pg[10.d( v 37'16 (0'0,37'16] local-lis/les=51/52 n=0 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 52 pg[10.e( v 37'16 (0'0,37'16] local-lis/les=51/52 n=0 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 52 pg[10.1( v 37'16 (0'0,37'16] local-lis/les=51/52 n=1 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 52 pg[10.2( v 37'16 (0'0,37'16] local-lis/les=51/52 n=1 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 52 pg[10.13( v 37'16 (0'0,37'16] local-lis/les=51/52 n=0 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 52 pg[10.14( v 37'16 (0'0,37'16] local-lis/les=51/52 n=0 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 52 pg[10.15( v 37'16 (0'0,37'16] local-lis/les=51/52 n=0 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 52 pg[10.16( v 37'16 (0'0,37'16] local-lis/les=51/52 n=0 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 52 pg[10.9( v 37'16 (0'0,37'16] local-lis/les=51/52 n=0 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 52 pg[10.17( v 37'16 (0'0,37'16] local-lis/les=51/52 n=0 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 52 pg[10.5( v 37'16 (0'0,37'16] local-lis/les=51/52 n=1 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:08 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v109: 305 pgs: 31 unknown, 274 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:28:08 compute-0 ceph-mon[74243]: 3.6 scrub starts
Oct 11 04:28:08 compute-0 ceph-mon[74243]: 3.6 scrub ok
Oct 11 04:28:08 compute-0 ceph-mon[74243]: osdmap e52: 3 total, 3 up, 3 in
Oct 11 04:28:08 compute-0 ceph-mon[74243]: pgmap v109: 305 pgs: 31 unknown, 274 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:28:09 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:28:09 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Oct 11 04:28:09 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Oct 11 04:28:09 compute-0 ceph-mon[74243]: 3.7 scrub starts
Oct 11 04:28:09 compute-0 ceph-mon[74243]: 3.7 scrub ok
Oct 11 04:28:10 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v110: 305 pgs: 31 unknown, 274 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:28:10 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Oct 11 04:28:10 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Oct 11 04:28:10 compute-0 ceph-mon[74243]: pgmap v110: 305 pgs: 31 unknown, 274 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:28:10 compute-0 ceph-mon[74243]: 4.3 scrub starts
Oct 11 04:28:10 compute-0 ceph-mon[74243]: 4.3 scrub ok
Oct 11 04:28:11 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 4.4 deep-scrub starts
Oct 11 04:28:11 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 4.4 deep-scrub ok
Oct 11 04:28:12 compute-0 ceph-mon[74243]: 4.4 deep-scrub starts
Oct 11 04:28:12 compute-0 ceph-mon[74243]: 4.4 deep-scrub ok
Oct 11 04:28:12 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v111: 305 pgs: 305 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:28:12 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct 11 04:28:12 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 11 04:28:12 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct 11 04:28:12 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 11 04:28:12 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct 11 04:28:12 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 11 04:28:12 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} v 0) v1
Oct 11 04:28:12 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]: dispatch
Oct 11 04:28:12 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct 11 04:28:12 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 11 04:28:12 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0) v1
Oct 11 04:28:12 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Oct 11 04:28:12 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct 11 04:28:12 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 11 04:28:12 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct 11 04:28:12 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 11 04:28:12 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct 11 04:28:12 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 11 04:28:12 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct 11 04:28:12 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 11 04:28:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Oct 11 04:28:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Oct 11 04:28:13 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Oct 11 04:28:13 compute-0 ceph-mon[74243]: pgmap v111: 305 pgs: 305 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:28:13 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 11 04:28:13 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 11 04:28:13 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 11 04:28:13 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]: dispatch
Oct 11 04:28:13 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 11 04:28:13 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Oct 11 04:28:13 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 11 04:28:13 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 11 04:28:13 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 11 04:28:13 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 11 04:28:13 compute-0 ceph-mon[74243]: 3.8 scrub starts
Oct 11 04:28:13 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 11 04:28:13 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 11 04:28:13 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 11 04:28:13 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Oct 11 04:28:13 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 11 04:28:13 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Oct 11 04:28:13 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 11 04:28:13 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 11 04:28:13 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 11 04:28:13 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 11 04:28:13 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Oct 11 04:28:13 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[10.12( v 37'16 (0'0,37'16] local-lis/les=51/52 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.883361816s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active pruub 82.613998413s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[5.1d( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=12.809285164s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 84.539932251s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[10.11( v 37'16 (0'0,37'16] local-lis/les=51/52 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.883306503s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active pruub 82.614006042s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[10.12( v 37'16 (0'0,37'16] local-lis/les=51/52 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.883310318s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 82.613998413s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[5.1d( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=12.809218407s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.539932251s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[5.1e( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=12.809211731s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 84.540054321s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[5.1e( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=12.809192657s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.540054321s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[10.10( v 37'16 (0'0,37'16] local-lis/les=51/52 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.891356468s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active pruub 82.622299194s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[10.10( v 37'16 (0'0,37'16] local-lis/les=51/52 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.891343117s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 82.622299194s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[2.18( empty local-lis/les=43/44 n=0 ec=43/13 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=10.771512985s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 82.502571106s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[2.18( empty local-lis/les=43/44 n=0 ec=43/13 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=10.771482468s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.502571106s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[2.16( empty local-lis/les=43/44 n=0 ec=43/13 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=10.771437645s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 82.502662659s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[2.19( empty local-lis/les=43/44 n=0 ec=43/13 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=10.771712303s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 82.502639771s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[2.16( empty local-lis/les=43/44 n=0 ec=43/13 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=10.771415710s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.502662659s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[2.19( empty local-lis/les=43/44 n=0 ec=43/13 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=10.771361351s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.502639771s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[5.11( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=12.808565140s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 84.539901733s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[5.11( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=12.808523178s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.539901733s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[10.1e( v 37'16 (0'0,37'16] local-lis/les=51/52 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.888670921s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active pruub 82.620254517s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[10.11( v 37'16 (0'0,37'16] local-lis/les=51/52 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.883254051s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 82.614006042s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[10.1e( v 37'16 (0'0,37'16] local-lis/les=51/52 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.888639450s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 82.620254517s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[2.17( empty local-lis/les=43/44 n=0 ec=43/13 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=10.770956039s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 82.502494812s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[2.15( empty local-lis/les=43/44 n=0 ec=43/13 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=10.771018982s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 82.502555847s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[2.17( empty local-lis/les=43/44 n=0 ec=43/13 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=10.770736694s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.502494812s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[2.15( empty local-lis/les=43/44 n=0 ec=43/13 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=10.770759583s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.502555847s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[5.13( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=12.807885170s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 84.539939880s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[5.12( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=12.807803154s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 84.539886475s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[5.13( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=12.807854652s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.539939880s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[5.12( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=12.807783127s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.539886475s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[10.1a( v 37'16 (0'0,37'16] local-lis/les=51/52 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.888038635s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active pruub 82.620269775s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[10.1a( v 37'16 (0'0,37'16] local-lis/les=51/52 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.888017654s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 82.620269775s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[5.15( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=12.807551384s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 84.539947510s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[5.15( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=12.807528496s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.539947510s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[2.11( empty local-lis/les=43/44 n=0 ec=43/13 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=10.769957542s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 82.502479553s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[2.11( empty local-lis/les=43/44 n=0 ec=43/13 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=10.769810677s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.502479553s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[5.14( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=12.807274818s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 84.539970398s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[5.14( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=12.807242393s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.539970398s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[5.16( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=12.807711601s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 84.540473938s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[5.16( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=12.807698250s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.540473938s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[10.7( v 37'16 (0'0,37'16] local-lis/les=51/52 n=1 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.889410973s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active pruub 82.622291565s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[10.7( v 37'16 (0'0,37'16] local-lis/les=51/52 n=1 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.889391899s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 82.622291565s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[2.f( empty local-lis/les=43/44 n=0 ec=43/13 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=10.769506454s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 82.502426147s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[2.f( empty local-lis/les=43/44 n=0 ec=43/13 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=10.769479752s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.502426147s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[10.6( v 37'16 (0'0,37'16] local-lis/les=51/52 n=1 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.889364243s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active pruub 82.622390747s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[10.6( v 37'16 (0'0,37'16] local-lis/les=51/52 n=1 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.889339447s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 82.622390747s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[5.9( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=12.806946754s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 84.540054321s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[5.9( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=12.806921005s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.540054321s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[2.d( empty local-lis/les=43/44 n=0 ec=43/13 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=10.769252777s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 82.502426147s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[2.d( empty local-lis/les=43/44 n=0 ec=43/13 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=10.769232750s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.502426147s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[10.4( v 37'16 (0'0,37'16] local-lis/les=51/52 n=1 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.889010429s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active pruub 82.622314453s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[2.b( empty local-lis/les=43/44 n=0 ec=43/13 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=10.768290520s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 82.501602173s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[2.b( empty local-lis/les=43/44 n=0 ec=43/13 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=10.768230438s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.501602173s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[10.4( v 37'16 (0'0,37'16] local-lis/les=51/52 n=1 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.888969421s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 82.622314453s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[10.8( v 37'16 (0'0,37'16] local-lis/les=51/52 n=1 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.889192581s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active pruub 82.622642517s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[10.8( v 37'16 (0'0,37'16] local-lis/les=51/52 n=1 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.889180183s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 82.622642517s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[5.7( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=12.806875229s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 84.540405273s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[10.f( v 37'16 (0'0,37'16] local-lis/les=51/52 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.889115334s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active pruub 82.622688293s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[5.7( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=12.806848526s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.540405273s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[10.f( v 37'16 (0'0,37'16] local-lis/les=51/52 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.889095306s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 82.622688293s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[10.19( v 37'16 (0'0,37'16] local-lis/les=51/52 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.888590813s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active pruub 82.622177124s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[2.7( empty local-lis/les=43/44 n=0 ec=43/13 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=10.767807961s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 82.501533508s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[2.8( empty local-lis/les=43/44 n=0 ec=43/13 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=10.767581940s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 82.501327515s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[2.8( empty local-lis/les=43/44 n=0 ec=43/13 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=10.767553329s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.501327515s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[5.f( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=12.806559563s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 84.540443420s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[5.f( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=12.806543350s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.540443420s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[2.7( empty local-lis/les=43/44 n=0 ec=43/13 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=10.767782211s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.501533508s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[2.13( empty local-lis/les=43/44 n=0 ec=43/13 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=10.768618584s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 82.502647400s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[10.9( v 52'17 (0'0,52'17] local-lis/les=51/52 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.888954163s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=37'16 lcod 37'16 mlcod 37'16 active pruub 82.623069763s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[10.9( v 52'17 (0'0,52'17] local-lis/les=51/52 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.888925552s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=37'16 lcod 37'16 mlcod 0'0 unknown NOTIFY pruub 82.623069763s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[2.2( empty local-lis/les=43/44 n=0 ec=43/13 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=10.766964912s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 82.501258850s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[2.2( empty local-lis/les=43/44 n=0 ec=43/13 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=10.766845703s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.501258850s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[2.13( empty local-lis/les=43/44 n=0 ec=43/13 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=10.768249512s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.502647400s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[5.5( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=12.805859566s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 84.540451050s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[5.5( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=12.805835724s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.540451050s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[10.b( v 37'16 (0'0,37'16] local-lis/les=51/52 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.888098717s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active pruub 82.622749329s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[10.b( v 37'16 (0'0,37'16] local-lis/les=51/52 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.888066292s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 82.622749329s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[2.3( empty local-lis/les=43/44 n=0 ec=43/13 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=10.766546249s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 82.501266479s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[5.c( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=12.805542946s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 84.540260315s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[2.3( empty local-lis/les=43/44 n=0 ec=43/13 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=10.766525269s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.501266479s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[5.4( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=12.805621147s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 84.540466309s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[2.4( empty local-lis/les=43/44 n=0 ec=43/13 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=10.766355515s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 82.501228333s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[10.19( v 37'16 (0'0,37'16] local-lis/les=51/52 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.887703896s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 82.622177124s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[5.c( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=12.805232048s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.540260315s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[5.3( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=12.805396080s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 84.540458679s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[5.3( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=12.805366516s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.540458679s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[5.4( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=12.805348396s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.540466309s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[10.d( v 52'17 (0'0,52'17] local-lis/les=51/52 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.887540817s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=37'16 lcod 37'16 mlcod 37'16 active pruub 82.622756958s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[2.4( empty local-lis/les=43/44 n=0 ec=43/13 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=10.766036987s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.501228333s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[10.d( v 52'17 (0'0,52'17] local-lis/les=51/52 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.887517929s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=37'16 lcod 37'16 mlcod 0'0 unknown NOTIFY pruub 82.622756958s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[2.5( empty local-lis/les=43/44 n=0 ec=43/13 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=10.765973091s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 82.501220703s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[2.5( empty local-lis/les=43/44 n=0 ec=43/13 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=10.765945435s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.501220703s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[5.2( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=12.805131912s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 84.540496826s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[5.2( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=12.805104256s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.540496826s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[10.e( v 52'17 (0'0,52'17] local-lis/les=51/52 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.887367249s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=37'16 lcod 37'16 mlcod 37'16 active pruub 82.622772217s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[10.e( v 52'17 (0'0,52'17] local-lis/les=51/52 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.887328148s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=37'16 lcod 37'16 mlcod 0'0 unknown NOTIFY pruub 82.622772217s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[2.6( empty local-lis/les=43/44 n=0 ec=43/13 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=10.766076088s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 82.501579285s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[2.6( empty local-lis/les=43/44 n=0 ec=43/13 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=10.766055107s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.501579285s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[5.1( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=12.805136681s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 84.540725708s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[5.1( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=12.805111885s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.540725708s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[10.1( v 37'16 (0'0,37'16] local-lis/les=51/52 n=1 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.887149811s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active pruub 82.622795105s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[10.1( v 37'16 (0'0,37'16] local-lis/les=51/52 n=1 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.887131691s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 82.622795105s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[2.9( empty local-lis/les=43/44 n=0 ec=43/13 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=10.765379906s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 82.501121521s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[2.9( empty local-lis/les=43/44 n=0 ec=43/13 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=10.765367508s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.501121521s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[10.2( v 37'16 (0'0,37'16] local-lis/les=51/52 n=1 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.887022018s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active pruub 82.622802734s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[10.2( v 37'16 (0'0,37'16] local-lis/les=51/52 n=1 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.886997223s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 82.622802734s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[2.a( empty local-lis/les=43/44 n=0 ec=43/13 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=10.765297890s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 82.501159668s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[2.a( empty local-lis/les=43/44 n=0 ec=43/13 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=10.765278816s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.501159668s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[2.1b( empty local-lis/les=43/44 n=0 ec=43/13 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=10.765359879s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 82.501319885s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[2.1b( empty local-lis/les=43/44 n=0 ec=43/13 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=10.765347481s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.501319885s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[10.13( v 37'16 (0'0,37'16] local-lis/les=51/52 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.886843681s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active pruub 82.622817993s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[10.13( v 37'16 (0'0,37'16] local-lis/les=51/52 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.886817932s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 82.622817993s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[10.14( v 52'17 (0'0,52'17] local-lis/les=51/52 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.886786461s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=37'16 lcod 37'16 mlcod 37'16 active pruub 82.622825623s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[10.14( v 52'17 (0'0,52'17] local-lis/les=51/52 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.886767387s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=37'16 lcod 37'16 mlcod 0'0 unknown NOTIFY pruub 82.622825623s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[2.1c( empty local-lis/les=43/44 n=0 ec=43/13 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=10.764901161s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 82.501243591s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[2.1c( empty local-lis/les=43/44 n=0 ec=43/13 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=10.764821053s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.501243591s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[10.15( v 52'17 (0'0,52'17] local-lis/les=51/52 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.886120796s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=37'16 lcod 37'16 mlcod 37'16 active pruub 82.622840881s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[2.1d( empty local-lis/les=43/44 n=0 ec=43/13 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=10.764449120s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 82.501174927s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[5.1a( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=12.803750038s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 84.540527344s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[2.1d( empty local-lis/les=43/44 n=0 ec=43/13 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=10.764389038s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.501174927s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[10.15( v 52'17 (0'0,52'17] local-lis/les=51/52 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.886052132s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=37'16 lcod 37'16 mlcod 0'0 unknown NOTIFY pruub 82.622840881s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[10.16( v 37'16 (0'0,37'16] local-lis/les=51/52 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.886044502s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active pruub 82.622871399s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[5.1a( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=12.803718567s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.540527344s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[10.16( v 37'16 (0'0,37'16] local-lis/les=51/52 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.886013031s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 82.622871399s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[5.19( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=12.803643227s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 84.540618896s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[10.17( v 37'16 (0'0,37'16] local-lis/les=51/52 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.886487961s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active pruub 82.623497009s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[2.1f( empty local-lis/les=43/44 n=0 ec=43/13 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=10.758274078s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 82.495307922s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[5.19( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=12.803611755s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.540618896s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[2.1f( empty local-lis/les=43/44 n=0 ec=43/13 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=10.758249283s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.495307922s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[10.17( v 37'16 (0'0,37'16] local-lis/les=51/52 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.886461258s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 82.623497009s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[5.18( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=12.803519249s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 84.540740967s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[5.18( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=12.803484917s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.540740967s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[2.19( empty local-lis/les=0/0 n=0 ec=43/13 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[5.1e( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[2.18( empty local-lis/les=0/0 n=0 ec=43/13 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[10.9( empty local-lis/les=0/0 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[10.8( empty local-lis/les=0/0 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[5.7( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[2.1d( empty local-lis/les=0/0 n=0 ec=43/13 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[10.15( empty local-lis/les=0/0 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[5.11( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[10.4( empty local-lis/les=0/0 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[5.4( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[2.1c( empty local-lis/les=0/0 n=0 ec=43/13 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[2.f( empty local-lis/les=0/0 n=0 ec=43/13 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[10.7( empty local-lis/les=0/0 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[2.2( empty local-lis/les=0/0 n=0 ec=43/13 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[5.5( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[2.1f( empty local-lis/les=0/0 n=0 ec=43/13 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[10.17( empty local-lis/les=0/0 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[10.d( empty local-lis/les=0/0 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[2.17( empty local-lis/les=0/0 n=0 ec=43/13 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[5.2( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[5.3( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[10.e( empty local-lis/les=0/0 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[5.13( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[2.15( empty local-lis/les=0/0 n=0 ec=43/13 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[2.b( empty local-lis/les=0/0 n=0 ec=43/13 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[2.8( empty local-lis/les=0/0 n=0 ec=43/13 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[5.12( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[10.1a( empty local-lis/les=0/0 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [1] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[10.1( empty local-lis/les=0/0 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[10.1e( empty local-lis/les=0/0 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[2.16( empty local-lis/les=0/0 n=0 ec=43/13 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[5.15( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[5.14( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[5.16( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[10.19( empty local-lis/les=0/0 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [1] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[2.13( empty local-lis/les=0/0 n=0 ec=43/13 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[10.6( empty local-lis/les=0/0 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [1] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[2.11( empty local-lis/les=0/0 n=0 ec=43/13 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[10.16( empty local-lis/les=0/0 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[4.18( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=12.797554970s) [2] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 94.996032715s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[4.14( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=12.797443390s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 94.995971680s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[4.18( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=12.797518730s) [2] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.996032715s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[4.14( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=12.797422409s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.995971680s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[4.13( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=12.797350883s) [2] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 94.995971680s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[4.13( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=12.797335625s) [2] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.995971680s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[4.12( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=12.797209740s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 94.995880127s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[4.12( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=12.797191620s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.995880127s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[4.11( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=12.797138214s) [2] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 94.995864868s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[4.11( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=12.797124863s) [2] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.995864868s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[4.10( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=12.797057152s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 94.995841980s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[4.f( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=12.797019958s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 94.995834351s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[4.10( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=12.797039986s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.995841980s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[4.f( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=12.797007561s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.995834351s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[4.e( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=12.796890259s) [2] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 94.995834351s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[6.d( v 38'39 (0'0,38'39] local-lis/les=47/48 n=1 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.792242050s) [1] r=-1 lpr=53 pi=[47,53)/1 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 96.991210938s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[4.e( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=12.796867371s) [2] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.995834351s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[5.9( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[4.d( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=12.796756744s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 94.995742798s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[6.d( v 38'39 (0'0,38'39] local-lis/les=47/48 n=1 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.792211533s) [1] r=-1 lpr=53 pi=[47,53)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 96.991210938s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[4.d( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=12.796734810s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.995742798s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[6.f( v 38'39 (0'0,38'39] local-lis/les=47/48 n=1 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.792160034s) [1] r=-1 lpr=53 pi=[47,53)/1 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 96.991264343s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[6.f( v 38'39 (0'0,38'39] local-lis/les=47/48 n=1 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.792138100s) [1] r=-1 lpr=53 pi=[47,53)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 96.991264343s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[4.2( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=12.796350479s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 94.995727539s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[4.1( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=12.796331406s) [2] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 94.995727539s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[4.2( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=12.796326637s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.995727539s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[6.3( v 38'39 (0'0,38'39] local-lis/les=47/48 n=2 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.791326523s) [1] r=-1 lpr=53 pi=[47,53)/1 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 96.990753174s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[4.1( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=12.796307564s) [2] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.995727539s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[6.3( v 38'39 (0'0,38'39] local-lis/les=47/48 n=2 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.791304588s) [1] r=-1 lpr=53 pi=[47,53)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 96.990753174s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[6.1( v 38'39 (0'0,38'39] local-lis/les=47/48 n=2 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.791210175s) [1] r=-1 lpr=53 pi=[47,53)/1 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 96.990745544s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[4.4( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=12.796012878s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 94.995582581s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[6.1( v 38'39 (0'0,38'39] local-lis/les=47/48 n=2 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.791193008s) [1] r=-1 lpr=53 pi=[47,53)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 96.990745544s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[4.4( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=12.795989990s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.995582581s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[4.9( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=12.795934677s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 94.995567322s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[4.9( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=12.795914650s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.995567322s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[6.b( v 38'39 (0'0,38'39] local-lis/les=47/48 n=1 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.790573120s) [1] r=-1 lpr=53 pi=[47,53)/1 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 96.990501404s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[4.1a( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=12.795536041s) [2] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 94.995475769s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[4.5( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=12.795628548s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 94.995597839s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[6.b( v 38'39 (0'0,38'39] local-lis/les=47/48 n=1 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.790547371s) [1] r=-1 lpr=53 pi=[47,53)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 96.990501404s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[4.1a( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=12.795512199s) [2] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.995475769s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[4.5( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=12.795604706s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.995597839s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[4.a( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=12.795324326s) [2] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 94.995429993s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[4.1b( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=12.795438766s) [2] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 94.995605469s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[4.1b( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=12.795418739s) [2] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.995605469s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[4.18( empty local-lis/les=0/0 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[4.a( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=12.795303345s) [2] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.995429993s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[6.9( v 38'39 (0'0,38'39] local-lis/les=47/48 n=1 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.789840698s) [1] r=-1 lpr=53 pi=[47,53)/1 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 96.990066528s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[4.13( empty local-lis/les=0/0 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[6.9( v 38'39 (0'0,38'39] local-lis/les=47/48 n=1 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.789818764s) [1] r=-1 lpr=53 pi=[47,53)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 96.990066528s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[4.7( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=12.794323921s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 94.994613647s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[2.d( empty local-lis/les=0/0 n=0 ec=43/13 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[6.5( v 38'39 (0'0,38'39] local-lis/les=47/48 n=2 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.789695740s) [1] r=-1 lpr=53 pi=[47,53)/1 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 96.990005493s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[4.7( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=12.794305801s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.994613647s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[5.f( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[10.b( empty local-lis/les=0/0 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [1] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[6.5( v 38'39 (0'0,38'39] local-lis/les=47/48 n=2 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.789672852s) [1] r=-1 lpr=53 pi=[47,53)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 96.990005493s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[2.3( empty local-lis/les=0/0 n=0 ec=43/13 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[4.8( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=12.794217110s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 94.994598389s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[2.5( empty local-lis/les=0/0 n=0 ec=43/13 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[4.1c( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=12.794836044s) [2] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 94.995239258s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[4.8( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=12.794196129s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.994598389s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[10.2( empty local-lis/les=0/0 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [1] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[4.1c( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=12.794818878s) [2] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.995239258s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[6.7( v 38'39 (0'0,38'39] local-lis/les=47/48 n=1 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.789367676s) [1] r=-1 lpr=53 pi=[47,53)/1 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 96.990104675s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[6.7( v 38'39 (0'0,38'39] local-lis/les=47/48 n=1 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.789347649s) [1] r=-1 lpr=53 pi=[47,53)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 96.990104675s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[2.a( empty local-lis/les=0/0 n=0 ec=43/13 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[4.11( empty local-lis/les=0/0 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[4.e( empty local-lis/les=0/0 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[4.1( empty local-lis/les=0/0 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[5.c( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[4.1a( empty local-lis/les=0/0 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[4.1b( empty local-lis/les=0/0 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[2.9( empty local-lis/les=0/0 n=0 ec=43/13 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[4.a( empty local-lis/les=0/0 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[2.4( empty local-lis/les=0/0 n=0 ec=43/13 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[4.1c( empty local-lis/les=0/0 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[10.f( empty local-lis/les=0/0 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [1] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[2.7( empty local-lis/les=0/0 n=0 ec=43/13 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[2.6( empty local-lis/les=0/0 n=0 ec=43/13 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[5.1( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[10.11( empty local-lis/les=0/0 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [1] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[10.10( empty local-lis/les=0/0 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [1] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[2.1b( empty local-lis/les=0/0 n=0 ec=43/13 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[10.13( empty local-lis/les=0/0 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [1] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[10.12( empty local-lis/les=0/0 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [1] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[5.1d( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[5.1a( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[10.14( empty local-lis/les=0/0 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [1] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[5.18( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[5.19( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[3.1f( empty local-lis/les=43/45 n=0 ec=43/15 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=11.743795395s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 88.896934509s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[3.1f( empty local-lis/les=43/45 n=0 ec=43/15 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=11.743758202s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.896934509s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.815040588s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=41'581 lcod 0'0 mlcod 0'0 active pruub 85.968254089s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.815021515s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 85.968254089s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[7.1b( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.784065247s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 91.937309265s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[7.1b( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.784015656s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.937309265s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[11.17( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.859316826s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 88.012374878s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[7.1a( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.783928871s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 91.937324524s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[3.1e( empty local-lis/les=43/45 n=0 ec=43/15 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=11.750248909s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 88.903648376s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[7.1a( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.783912659s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.937324524s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[11.17( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.858989716s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.012374878s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[3.1e( empty local-lis/les=43/45 n=0 ec=43/15 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=11.750216484s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.903648376s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[8.15( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.814669609s) [2] r=-1 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 85.968238831s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[3.1d( empty local-lis/les=43/45 n=0 ec=43/15 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=11.750134468s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 88.903709412s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.814802170s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=41'581 lcod 0'0 mlcod 0'0 active pruub 85.968399048s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.814787865s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 85.968399048s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[3.1d( empty local-lis/les=43/45 n=0 ec=43/15 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=11.750104904s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.903709412s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[8.15( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.814622879s) [2] r=-1 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 85.968238831s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[11.14( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.864757538s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 88.018501282s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[7.18( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.783546448s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 91.937332153s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[11.14( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.864731789s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.018501282s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[7.18( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.783528328s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.937332153s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[3.1b( empty local-lis/les=43/45 n=0 ec=43/15 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=11.749846458s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 88.903747559s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[7.1f( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.783427238s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 91.937339783s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[7.1f( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.783398628s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.937339783s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[3.1b( empty local-lis/les=43/45 n=0 ec=43/15 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=11.749794006s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.903747559s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[11.15( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.864333153s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 88.018409729s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[11.15( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.864300728s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.018409729s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[9.11( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.823742867s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=41'581 lcod 0'0 mlcod 0'0 active pruub 85.977867126s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[8.10( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.814386368s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 85.968528748s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[8.10( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.814345360s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 85.968528748s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[9.11( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.823658943s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 85.977867126s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[3.1f( empty local-lis/les=0/0 n=0 ec=43/15 lis/c=43/43 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[11.12( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.864295006s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 88.018577576s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[8.11( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.814293861s) [2] r=-1 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 85.968597412s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[11.12( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.864249229s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.018577576s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[8.11( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.814262390s) [2] r=-1 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 85.968597412s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[7.1b( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[8.14( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.814225197s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 85.968643188s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[8.12( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.814156532s) [2] r=-1 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 85.968574524s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[11.17( empty local-lis/les=0/0 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[11.11( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.864139557s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 88.018592834s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[8.14( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.814185143s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 85.968643188s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[8.12( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.814105988s) [2] r=-1 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 85.968574524s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[9.17( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[11.11( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.864101410s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.018592834s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.814058304s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=41'581 lcod 0'0 mlcod 0'0 active pruub 85.968612671s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[11.10( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.864008904s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 88.018608093s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[11.14( empty local-lis/les=0/0 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.814026833s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 85.968612671s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[11.10( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.863992691s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.018608093s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[3.18( empty local-lis/les=43/45 n=0 ec=43/15 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=11.749192238s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 88.903862000s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[7.1c( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.782570839s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 91.937286377s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[7.1c( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.782546997s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.937286377s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[3.18( empty local-lis/les=43/45 n=0 ec=43/15 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=11.749140739s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.903862000s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[11.f( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.864174843s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 88.018966675s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[7.3( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.782293320s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 91.937118530s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[11.f( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.864145279s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.018966675s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[7.3( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.782273293s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.937118530s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[7.18( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[3.7( empty local-lis/les=43/45 n=0 ec=43/15 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=11.749148369s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 88.904037476s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[9.d( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.822532654s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=41'581 lcod 0'0 mlcod 0'0 active pruub 85.977470398s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[8.c( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.813730240s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 85.968688965s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[3.7( empty local-lis/les=43/45 n=0 ec=43/15 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=11.749115944s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.904037476s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[9.d( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.822514534s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 85.977470398s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[8.c( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.813682556s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 85.968688965s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[7.2( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.782018661s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 91.937141418s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[7.2( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.782001495s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.937141418s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[3.6( empty local-lis/les=43/45 n=0 ec=43/15 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=11.748780251s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 88.903945923s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[3.6( empty local-lis/les=43/45 n=0 ec=43/15 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=11.748749733s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.903945923s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[8.d( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.822402954s) [2] r=-1 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 85.977630615s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[11.d( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.863402367s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 88.018653870s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[11.d( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.863388062s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.018653870s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[8.d( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.822379112s) [2] r=-1 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 85.977630615s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[7.1( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.781746864s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 91.937095642s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[3.5( empty local-lis/les=43/45 n=0 ec=43/15 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=11.748563766s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 88.903945923s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[7.1( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.781719208s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.937095642s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[3.5( empty local-lis/les=43/45 n=0 ec=43/15 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=11.748549461s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.903945923s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[8.e( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.813286781s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 85.968734741s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[8.e( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.813262939s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 85.968734741s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.813282013s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=41'581 lcod 0'0 mlcod 0'0 active pruub 85.968772888s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.813266754s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 85.968772888s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[11.b( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.863141060s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 88.018669128s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[11.b( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.863111496s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.018669128s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[3.3( empty local-lis/les=43/45 n=0 ec=43/15 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=11.748528481s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 88.904174805s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[9.9( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.822375298s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=41'581 lcod 0'0 mlcod 0'0 active pruub 85.978034973s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[3.3( empty local-lis/les=43/45 n=0 ec=43/15 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=11.748508453s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.904174805s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[11.9( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.862984657s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 88.018676758s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[11.9( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.862958908s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.018676758s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[9.9( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.822350502s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 85.978034973s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[7.5( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.781125069s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 91.936882019s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[7.5( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.781099319s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.936882019s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[3.1( empty local-lis/les=43/45 n=0 ec=43/15 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=11.748088837s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 88.903953552s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[3.1( empty local-lis/les=43/45 n=0 ec=43/15 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=11.748073578s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.903953552s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[7.c( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.780862808s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 91.936843872s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[7.c( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.780843735s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.936843872s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[9.b( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.821834564s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=41'581 lcod 0'0 mlcod 0'0 active pruub 85.977844238s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[9.b( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.821802139s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 85.977844238s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[3.8( empty local-lis/les=43/45 n=0 ec=43/15 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=11.747843742s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 88.903961182s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[3.8( empty local-lis/les=43/45 n=0 ec=43/15 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=11.747827530s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.903961182s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[7.e( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.780634880s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 91.936836243s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[11.2( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.862565994s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 88.018768311s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[3.a( empty local-lis/les=43/45 n=0 ec=43/15 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=11.747744560s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 88.903961182s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[7.e( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.780613899s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.936836243s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[11.2( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.862526894s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.018768311s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[7.1f( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[11.3( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.862369537s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 88.018722534s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[11.3( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.862349510s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.018722534s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[7.f( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.780382156s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 91.936775208s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[7.f( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.780356407s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.936775208s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[9.1( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.821710587s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=41'581 lcod 0'0 mlcod 0'0 active pruub 85.978240967s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[9.1( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.821687698s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 85.978240967s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[3.1b( empty local-lis/les=0/0 n=0 ec=43/15 lis/c=43/43 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[8.f( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.821475983s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 85.978080750s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[8.f( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.821448326s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 85.978080750s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[3.a( empty local-lis/les=43/45 n=0 ec=43/15 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=11.747143745s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.903961182s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[7.1a( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[3.1e( empty local-lis/les=0/0 n=0 ec=43/15 lis/c=43/43 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[3.1d( empty local-lis/les=0/0 n=0 ec=43/15 lis/c=43/43 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[8.15( empty local-lis/les=0/0 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53) [2] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[11.15( empty local-lis/les=0/0 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[8.11( empty local-lis/les=0/0 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53) [2] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[7.4( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.779613495s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 91.936782837s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[11.8( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.861615181s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 88.018798828s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[11.e( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.861460686s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 88.018638611s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[7.4( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.779592514s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.936782837s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[11.8( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.861589432s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.018798828s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[8.b( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.820858002s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 85.978088379s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[11.e( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.861425400s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.018638611s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[8.b( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.820826530s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 85.978088379s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[7.6( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.779437065s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 91.936752319s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[7.6( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.779420853s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.936752319s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[11.11( empty local-lis/les=0/0 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[8.12( empty local-lis/les=0/0 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53) [2] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[3.18( empty local-lis/les=0/0 n=0 ec=43/15 lis/c=43/43 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[11.1( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.861403465s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 88.018814087s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[7.1c( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[3.7( empty local-lis/les=0/0 n=0 ec=43/15 lis/c=43/43 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[11.1( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.861389160s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.018814087s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[8.9( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.820788383s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 85.978225708s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[3.9( empty local-lis/les=43/45 n=0 ec=43/15 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=11.746777534s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 88.904228210s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[8.2( v 33'4 (0'0,33'4] local-lis/les=49/50 n=1 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.820685387s) [2] r=-1 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 85.978164673s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[8.9( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.820753098s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 85.978225708s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[8.10( empty local-lis/les=0/0 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[9.11( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[8.14( empty local-lis/les=0/0 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[11.10( empty local-lis/les=0/0 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[9.3( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.820683479s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=41'581 lcod 0'0 mlcod 0'0 active pruub 85.978210449s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[3.9( empty local-lis/les=43/45 n=0 ec=43/15 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=11.746716499s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.904228210s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[9.3( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.820667267s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 85.978210449s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[8.2( v 33'4 (0'0,33'4] local-lis/les=49/50 n=1 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.820623398s) [2] r=-1 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 85.978164673s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[7.8( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.779047966s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 91.936676025s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[7.8( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.779033661s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.936676025s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[11.4( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.861189842s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 88.018844604s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[7.3( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[11.4( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.861158371s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.018844604s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[3.c( empty local-lis/les=43/45 n=0 ec=43/15 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=11.746548653s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 88.904258728s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[11.f( empty local-lis/les=0/0 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[9.d( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[3.c( empty local-lis/les=43/45 n=0 ec=43/15 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=11.746517181s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.904258728s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[8.c( empty local-lis/les=0/0 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[7.9( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.778903961s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 91.936729431s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[7.9( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.778884888s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.936729431s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[8.6( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.820374489s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 85.978279114s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.820348740s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=41'581 lcod 0'0 mlcod 0'0 active pruub 85.978279114s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[11.6( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.860912323s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 88.018867493s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[8.6( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.820343971s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 85.978279114s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[11.6( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.860877991s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.018867493s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[3.6( empty local-lis/les=0/0 n=0 ec=43/15 lis/c=43/43 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[8.e( empty local-lis/les=0/0 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[9.f( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[3.3( empty local-lis/les=0/0 n=0 ec=43/15 lis/c=43/43 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[9.9( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[3.1( empty local-lis/les=0/0 n=0 ec=43/15 lis/c=43/43 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[9.b( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.820314407s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 85.978279114s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[7.2( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[7.a( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.778664589s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 91.936691284s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[7.f( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[9.1( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[3.e( empty local-lis/les=43/45 n=0 ec=43/15 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=11.746083260s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 88.904327393s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[3.e( empty local-lis/les=43/45 n=0 ec=43/15 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=11.746062279s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.904327393s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[3.f( empty local-lis/les=43/45 n=0 ec=43/15 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=11.745961189s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 88.904319763s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[8.4( v 33'4 (0'0,33'4] local-lis/les=49/50 n=1 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.819978714s) [2] r=-1 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 85.978347778s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[7.a( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.778635025s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.936691284s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[9.5( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.819945335s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=41'581 lcod 0'0 mlcod 0'0 active pruub 85.978355408s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[3.f( empty local-lis/les=43/45 n=0 ec=43/15 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=11.745930672s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.904319763s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[9.5( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.819923401s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 85.978355408s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[8.4( v 33'4 (0'0,33'4] local-lis/les=49/50 n=1 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.819942474s) [2] r=-1 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 85.978347778s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[8.f( empty local-lis/les=0/0 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[11.18( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.860315323s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 88.018890381s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[8.1b( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.819785118s) [2] r=-1 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 85.978393555s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[11.d( empty local-lis/les=0/0 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[8.1b( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.819752693s) [2] r=-1 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 85.978393555s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[7.15( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.778032303s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 91.936721802s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[11.19( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.860191345s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 88.018890381s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[7.15( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.777997971s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.936721802s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[11.19( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.860168457s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.018890381s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[3.11( empty local-lis/les=43/45 n=0 ec=43/15 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=11.745532036s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 88.904342651s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[3.11( empty local-lis/les=43/45 n=0 ec=43/15 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=11.745502472s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.904342651s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[8.1a( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.819508553s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 85.978408813s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[9.1b( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.819623947s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=41'581 lcod 0'0 mlcod 0'0 active pruub 85.978576660s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[8.1a( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.819476128s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 85.978408813s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[11.1a( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.859908104s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 88.018959045s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[9.1b( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.819581985s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 85.978576660s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[11.1a( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.859876633s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.018959045s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[3.12( empty local-lis/les=43/45 n=0 ec=43/15 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=11.745333672s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 88.904472351s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[11.18( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.860288620s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.018890381s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[3.12( empty local-lis/les=43/45 n=0 ec=43/15 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=11.745300293s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.904472351s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[11.1b( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.859698296s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 88.018913269s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[11.1b( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.859676361s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.018913269s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[11.1c( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.859699249s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 88.019042969s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[8.18( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.819319725s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 85.978652954s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[3.a( empty local-lis/les=0/0 n=0 ec=43/15 lis/c=43/43 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.819174767s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=41'581 lcod 0'0 mlcod 0'0 active pruub 85.978523254s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.819144249s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 85.978523254s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[7.4( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[8.18( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.819280624s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 85.978652954s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[11.1c( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.859617233s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.019042969s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[8.1f( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.818991661s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 85.978569031s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[8.1f( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.818959236s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 85.978569031s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[7.11( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.769608498s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 91.929237366s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.818907738s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=41'581 lcod 0'0 mlcod 0'0 active pruub 85.978561401s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[11.e( empty local-lis/les=0/0 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[7.11( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.769577980s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.929237366s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[3.15( empty local-lis/les=43/45 n=0 ec=43/15 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=11.744752884s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 88.904441833s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.818882942s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 85.978561401s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[3.15( empty local-lis/les=43/45 n=0 ec=43/15 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=11.744688034s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.904441833s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[11.1e( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.859327316s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 88.019126892s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[11.1e( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.859295845s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.019126892s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[3.16( empty local-lis/les=43/45 n=0 ec=43/15 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=11.744569778s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 88.904457092s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[3.16( empty local-lis/les=43/45 n=0 ec=43/15 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=11.744537354s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.904457092s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[8.1d( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.818664551s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 85.978645325s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[9.1d( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.818659782s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=41'581 lcod 0'0 mlcod 0'0 active pruub 85.978683472s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[9.1d( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.818584442s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 85.978683472s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[11.1f( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.858978271s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 88.019165039s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[8.b( empty local-lis/les=0/0 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[11.1f( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.858947754s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.019165039s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[8.1d( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.818638802s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 85.978645325s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[7.13( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.776211739s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 91.936653137s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[7.13( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.776185036s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.936653137s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[3.17( empty local-lis/les=43/45 n=0 ec=43/15 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=11.741021156s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 88.904434204s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[3.17( empty local-lis/les=43/45 n=0 ec=43/15 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=11.740994453s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.904434204s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[7.6( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[8.1c( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.815037727s) [2] r=-1 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 85.978622437s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[8.1c( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.815003395s) [2] r=-1 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 85.978622437s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[11.1( empty local-lis/les=0/0 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[4.14( empty local-lis/les=0/0 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[4.12( empty local-lis/les=0/0 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[8.d( empty local-lis/les=0/0 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53) [2] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[4.f( empty local-lis/les=0/0 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[4.10( empty local-lis/les=0/0 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[8.9( empty local-lis/les=0/0 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[3.5( empty local-lis/les=0/0 n=0 ec=43/15 lis/c=43/43 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[7.1( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[9.3( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[4.d( empty local-lis/les=0/0 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[6.d( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=53) [1] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[11.b( empty local-lis/les=0/0 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[3.9( empty local-lis/les=0/0 n=0 ec=43/15 lis/c=43/43 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[7.5( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[11.4( empty local-lis/les=0/0 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[3.c( empty local-lis/les=0/0 n=0 ec=43/15 lis/c=43/43 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[6.f( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=53) [1] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[4.2( empty local-lis/les=0/0 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[6.3( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=53) [1] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[6.1( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=53) [1] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[4.4( empty local-lis/les=0/0 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[7.9( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[7.c( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[11.9( empty local-lis/les=0/0 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[11.6( empty local-lis/les=0/0 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[4.9( empty local-lis/les=0/0 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[6.b( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=53) [1] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[4.5( empty local-lis/les=0/0 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[3.8( empty local-lis/les=0/0 n=0 ec=43/15 lis/c=43/43 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[8.6( empty local-lis/les=0/0 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[7.e( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[6.9( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=53) [1] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[4.7( empty local-lis/les=0/0 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[6.5( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=53) [1] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[4.8( empty local-lis/les=0/0 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[9.7( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[11.2( empty local-lis/les=0/0 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[11.3( empty local-lis/les=0/0 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[3.f( empty local-lis/les=0/0 n=0 ec=43/15 lis/c=43/43 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[11.8( empty local-lis/les=0/0 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[9.5( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 53 pg[6.7( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=53) [1] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[8.2( empty local-lis/les=0/0 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53) [2] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[11.19( empty local-lis/les=0/0 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[8.1a( empty local-lis/les=0/0 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[7.8( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[3.e( empty local-lis/les=0/0 n=0 ec=43/15 lis/c=43/43 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[9.1b( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[3.12( empty local-lis/les=0/0 n=0 ec=43/15 lis/c=43/43 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[7.a( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[8.4( empty local-lis/les=0/0 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53) [2] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[8.18( empty local-lis/les=0/0 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[8.1f( empty local-lis/les=0/0 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[8.1b( empty local-lis/les=0/0 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53) [2] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[3.15( empty local-lis/les=0/0 n=0 ec=43/15 lis/c=43/43 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[7.15( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[9.1d( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[3.11( empty local-lis/les=0/0 n=0 ec=43/15 lis/c=43/43 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[8.1d( empty local-lis/les=0/0 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[11.1a( empty local-lis/les=0/0 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[7.13( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 53 pg[3.17( empty local-lis/les=0/0 n=0 ec=43/15 lis/c=43/43 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[11.1b( empty local-lis/les=0/0 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[11.1c( empty local-lis/les=0/0 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[11.18( empty local-lis/les=0/0 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[7.11( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[11.1e( empty local-lis/les=0/0 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[3.16( empty local-lis/les=0/0 n=0 ec=43/15 lis/c=43/43 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[11.1f( empty local-lis/les=0/0 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[8.1c( empty local-lis/les=0/0 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53) [2] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 53 pg[11.12( empty local-lis/les=0/0 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:13 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 4.6 scrub starts
Oct 11 04:28:13 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 4.6 scrub ok
Oct 11 04:28:14 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Oct 11 04:28:14 compute-0 ceph-mon[74243]: 3.8 scrub ok
Oct 11 04:28:14 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 11 04:28:14 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 11 04:28:14 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 11 04:28:14 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Oct 11 04:28:14 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 11 04:28:14 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Oct 11 04:28:14 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 11 04:28:14 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 11 04:28:14 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 11 04:28:14 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 11 04:28:14 compute-0 ceph-mon[74243]: osdmap e53: 3 total, 3 up, 3 in
Oct 11 04:28:14 compute-0 ceph-mon[74243]: 4.6 scrub starts
Oct 11 04:28:14 compute-0 ceph-mon[74243]: 4.6 scrub ok
Oct 11 04:28:14 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Oct 11 04:28:14 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[9.11( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[9.11( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[9.5( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[9.5( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:14 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v114: 305 pgs: 305 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[9.b( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[9.b( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[9.7( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[9.7( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[9.17( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[9.17( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[9.9( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[9.9( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[9.f( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[9.f( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[9.1( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:14 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 54 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[9.1( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:14 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 54 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[9.d( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[9.d( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:14 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 54 pg[9.11( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:14 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 54 pg[9.11( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:14 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 54 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[9.3( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[9.3( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:14 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 54 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[9.1d( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:14 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 54 pg[9.d( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[9.1d( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:14 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 54 pg[9.d( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:14 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 54 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:14 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 54 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:14 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 54 pg[9.9( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:14 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 54 pg[9.9( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:14 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 54 pg[9.b( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[9.1b( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[9.1b( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:14 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 54 pg[9.b( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:14 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 54 pg[9.5( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:14 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 54 pg[9.5( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:14 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 54 pg[9.1( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:14 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 54 pg[9.1( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:14 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 54 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:14 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 54 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:14 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 54 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:14 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 54 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:14 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 54 pg[9.3( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[11.10( empty local-lis/les=53/54 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 54 pg[9.3( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:14 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} v 0) v1
Oct 11 04:28:14 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]: dispatch
Oct 11 04:28:14 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0) v1
Oct 11 04:28:14 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Oct 11 04:28:14 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 54 pg[9.1b( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:14 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 54 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:14 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 54 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:14 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 54 pg[9.1b( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:14 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 54 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:14 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 54 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:14 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 54 pg[9.1d( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:14 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 54 pg[9.1d( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:14 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 54 pg[3.18( empty local-lis/les=53/54 n=0 ec=43/15 lis/c=43/43 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 54 pg[4.1c( empty local-lis/les=53/54 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 54 pg[7.1c( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 54 pg[5.19( empty local-lis/les=53/54 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[2.18( empty local-lis/les=53/54 n=0 ec=43/13 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[3.1f( empty local-lis/les=53/54 n=0 ec=43/15 lis/c=43/43 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[5.1e( empty local-lis/les=53/54 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[2.19( empty local-lis/les=53/54 n=0 ec=43/13 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[11.17( empty local-lis/les=53/54 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[7.1f( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[8.10( v 33'4 (0'0,33'4] local-lis/les=53/54 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[3.1b( empty local-lis/les=53/54 n=0 ec=43/15 lis/c=43/43 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[3.f( empty local-lis/les=53/54 n=0 ec=43/15 lis/c=43/43 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[10.9( v 52'17 lc 37'15 (0'0,52'17] local-lis/les=53/54 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=52'17 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[8.b( v 33'4 (0'0,33'4] local-lis/les=53/54 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[11.4( empty local-lis/les=53/54 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[3.c( empty local-lis/les=53/54 n=0 ec=43/15 lis/c=43/43 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[10.8( v 37'16 (0'0,37'16] local-lis/les=53/54 n=1 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[3.1( empty local-lis/les=53/54 n=0 ec=43/15 lis/c=43/43 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[5.7( empty local-lis/les=53/54 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[2.1d( empty local-lis/les=53/54 n=0 ec=43/13 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[10.15( v 52'17 lc 37'5 (0'0,52'17] local-lis/les=53/54 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=52'17 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[7.18( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[11.14( empty local-lis/les=53/54 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[10.4( v 37'16 (0'0,37'16] local-lis/les=53/54 n=1 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[7.9( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[8.14( v 33'4 (0'0,33'4] local-lis/les=53/54 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[8.6( v 33'4 (0'0,33'4] local-lis/les=53/54 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[5.4( empty local-lis/les=53/54 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[7.4( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[8.9( v 33'4 (0'0,33'4] local-lis/les=53/54 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[2.f( empty local-lis/les=53/54 n=0 ec=43/13 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[2.1c( empty local-lis/les=53/54 n=0 ec=43/13 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[10.7( v 37'16 (0'0,37'16] local-lis/les=53/54 n=1 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[11.6( empty local-lis/les=53/54 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[2.2( empty local-lis/les=53/54 n=0 ec=43/13 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[5.5( empty local-lis/les=53/54 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[3.3( empty local-lis/les=53/54 n=0 ec=43/15 lis/c=43/43 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[2.1f( empty local-lis/les=53/54 n=0 ec=43/13 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[10.17( v 37'16 (0'0,37'16] local-lis/les=53/54 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[5.2( empty local-lis/les=53/54 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[8.f( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=53/54 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=33'4 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[11.e( empty local-lis/les=53/54 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[3.6( empty local-lis/les=53/54 n=0 ec=43/15 lis/c=43/43 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[5.3( empty local-lis/les=53/54 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[8.e( v 33'4 (0'0,33'4] local-lis/les=53/54 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[10.d( v 52'17 lc 37'9 (0'0,52'17] local-lis/les=53/54 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=52'17 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[10.e( v 52'17 lc 37'7 (0'0,52'17] local-lis/les=53/54 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=52'17 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[7.3( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[11.f( empty local-lis/les=53/54 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[8.c( v 33'4 (0'0,33'4] local-lis/les=53/54 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[7.f( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[2.b( empty local-lis/les=53/54 n=0 ec=43/13 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[3.a( empty local-lis/les=53/54 n=0 ec=43/15 lis/c=43/43 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[2.8( empty local-lis/les=53/54 n=0 ec=43/13 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[11.1( empty local-lis/les=53/54 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[3.9( empty local-lis/les=53/54 n=0 ec=43/15 lis/c=43/43 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[10.1( v 37'16 (0'0,37'16] local-lis/les=53/54 n=1 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[3.17( empty local-lis/les=53/54 n=0 ec=43/15 lis/c=43/43 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[7.13( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[7.6( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[2.16( empty local-lis/les=53/54 n=0 ec=43/13 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[10.1e( v 37'16 (0'0,37'16] local-lis/les=53/54 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[8.1d( v 33'4 (0'0,33'4] local-lis/les=53/54 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[8.1f( v 33'4 (0'0,33'4] local-lis/les=53/54 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[8.18( v 33'4 (0'0,33'4] local-lis/les=53/54 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[3.15( empty local-lis/les=53/54 n=0 ec=43/15 lis/c=43/43 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[5.15( empty local-lis/les=53/54 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 54 pg[11.11( empty local-lis/les=53/54 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 54 pg[8.12( v 33'4 (0'0,33'4] local-lis/les=53/54 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53) [2] r=0 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 54 pg[4.11( empty local-lis/les=53/54 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 54 pg[4.13( empty local-lis/les=53/54 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 54 pg[4.a( empty local-lis/les=53/54 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 54 pg[7.5( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 54 pg[7.c( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 54 pg[7.2( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 54 pg[8.d( v 33'4 (0'0,33'4] local-lis/les=53/54 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53) [2] r=0 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 54 pg[11.9( empty local-lis/les=53/54 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 54 pg[4.1( empty local-lis/les=53/54 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 54 pg[4.e( empty local-lis/les=53/54 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 54 pg[7.1( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 54 pg[3.5( empty local-lis/les=53/54 n=0 ec=43/15 lis/c=43/43 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 54 pg[11.b( empty local-lis/les=53/54 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 54 pg[11.d( empty local-lis/les=53/54 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 54 pg[3.7( empty local-lis/les=53/54 n=0 ec=43/15 lis/c=43/43 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 54 pg[8.11( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=53/54 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53) [2] r=0 lpr=53 pi=[49,53)/1 crt=33'4 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 54 pg[11.12( empty local-lis/les=53/54 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 54 pg[11.15( empty local-lis/les=53/54 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 54 pg[3.1d( empty local-lis/les=53/54 n=0 ec=43/15 lis/c=43/43 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 54 pg[4.1b( empty local-lis/les=53/54 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 54 pg[4.1a( empty local-lis/les=53/54 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 54 pg[3.1e( empty local-lis/les=53/54 n=0 ec=43/15 lis/c=43/43 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 54 pg[7.1a( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 54 pg[8.15( v 33'4 (0'0,33'4] local-lis/les=53/54 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53) [2] r=0 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 54 pg[3.8( empty local-lis/les=53/54 n=0 ec=43/15 lis/c=43/43 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 54 pg[4.18( empty local-lis/les=53/54 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 54 pg[11.8( empty local-lis/les=53/54 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 54 pg[8.2( v 33'4 (0'0,33'4] local-lis/les=53/54 n=1 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53) [2] r=0 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 54 pg[11.2( empty local-lis/les=53/54 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 54 pg[7.8( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 54 pg[7.e( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 54 pg[11.3( empty local-lis/les=53/54 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 54 pg[8.4( v 33'4 (0'0,33'4] local-lis/les=53/54 n=1 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53) [2] r=0 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 54 pg[3.11( empty local-lis/les=53/54 n=0 ec=43/15 lis/c=43/43 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 54 pg[3.e( empty local-lis/les=53/54 n=0 ec=43/15 lis/c=43/43 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 54 pg[7.a( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 54 pg[7.15( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 54 pg[11.18( empty local-lis/les=53/54 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 54 pg[11.1b( empty local-lis/les=53/54 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 54 pg[8.1b( v 33'4 (0'0,33'4] local-lis/les=53/54 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53) [2] r=0 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 54 pg[11.1a( empty local-lis/les=53/54 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 54 pg[11.1c( empty local-lis/les=53/54 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 54 pg[7.11( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 54 pg[11.1f( empty local-lis/les=53/54 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 54 pg[8.1c( v 33'4 (0'0,33'4] local-lis/les=53/54 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53) [2] r=0 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 54 pg[11.1e( empty local-lis/les=53/54 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 54 pg[3.16( empty local-lis/les=53/54 n=0 ec=43/15 lis/c=43/43 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 54 pg[5.18( empty local-lis/les=53/54 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 54 pg[5.1a( empty local-lis/les=53/54 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 54 pg[5.1d( empty local-lis/les=53/54 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 54 pg[10.12( v 37'16 (0'0,37'16] local-lis/les=53/54 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [1] r=0 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 54 pg[10.13( v 37'16 (0'0,37'16] local-lis/les=53/54 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [1] r=0 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 54 pg[2.1b( empty local-lis/les=53/54 n=0 ec=43/13 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 54 pg[10.10( v 37'16 (0'0,37'16] local-lis/les=53/54 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [1] r=0 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 54 pg[10.11( v 37'16 (0'0,37'16] local-lis/les=53/54 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [1] r=0 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 54 pg[5.1( empty local-lis/les=53/54 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 54 pg[2.6( empty local-lis/les=53/54 n=0 ec=43/13 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 54 pg[6.3( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=53/54 n=2 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=53) [1] r=0 lpr=53 pi=[47,53)/1 crt=38'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 54 pg[2.7( empty local-lis/les=53/54 n=0 ec=43/13 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 54 pg[10.f( v 37'16 (0'0,37'16] local-lis/les=53/54 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [1] r=0 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 54 pg[4.2( empty local-lis/les=53/54 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 54 pg[2.4( empty local-lis/les=53/54 n=0 ec=43/13 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 54 pg[4.4( empty local-lis/les=53/54 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 54 pg[6.d( v 38'39 lc 34'13 (0'0,38'39] local-lis/les=53/54 n=1 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=53) [1] r=0 lpr=53 pi=[47,53)/1 crt=38'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 54 pg[4.f( empty local-lis/les=53/54 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 54 pg[2.9( empty local-lis/les=53/54 n=0 ec=43/13 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 54 pg[6.f( v 38'39 lc 34'1 (0'0,38'39] local-lis/les=53/54 n=1 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=53) [1] r=0 lpr=53 pi=[47,53)/1 crt=38'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 54 pg[4.d( empty local-lis/les=53/54 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 54 pg[5.c( empty local-lis/les=53/54 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 54 pg[10.14( v 52'17 lc 37'13 (0'0,52'17] local-lis/les=53/54 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [1] r=0 lpr=53 pi=[51,53)/1 crt=52'17 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 54 pg[4.8( empty local-lis/les=53/54 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 54 pg[6.5( v 38'39 lc 34'11 (0'0,38'39] local-lis/les=53/54 n=2 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=53) [1] r=0 lpr=53 pi=[47,53)/1 crt=38'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 54 pg[2.a( empty local-lis/les=53/54 n=0 ec=43/13 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 54 pg[10.2( v 37'16 (0'0,37'16] local-lis/les=53/54 n=1 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [1] r=0 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 54 pg[6.1( v 38'39 (0'0,38'39] local-lis/les=53/54 n=2 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=53) [1] r=0 lpr=53 pi=[47,53)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 54 pg[2.5( empty local-lis/les=53/54 n=0 ec=43/13 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 54 pg[6.9( v 38'39 (0'0,38'39] local-lis/les=53/54 n=1 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=53) [1] r=0 lpr=53 pi=[47,53)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 54 pg[4.5( empty local-lis/les=53/54 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 54 pg[6.7( v 38'39 lc 34'21 (0'0,38'39] local-lis/les=53/54 n=1 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=53) [1] r=0 lpr=53 pi=[47,53)/1 crt=38'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 54 pg[6.b( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=53/54 n=1 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=53) [1] r=0 lpr=53 pi=[47,53)/1 crt=38'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 54 pg[4.9( empty local-lis/les=53/54 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 54 pg[2.3( empty local-lis/les=53/54 n=0 ec=43/13 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 54 pg[10.b( v 37'16 (0'0,37'16] local-lis/les=53/54 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [1] r=0 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 54 pg[4.7( empty local-lis/les=53/54 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 54 pg[5.f( empty local-lis/les=53/54 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 54 pg[5.9( empty local-lis/les=53/54 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 54 pg[10.19( v 37'16 (0'0,37'16] local-lis/les=53/54 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [1] r=0 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 54 pg[10.6( v 37'16 (0'0,37'16] local-lis/les=53/54 n=1 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [1] r=0 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 54 pg[5.16( empty local-lis/les=53/54 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 54 pg[2.d( empty local-lis/les=53/54 n=0 ec=43/13 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 54 pg[4.10( empty local-lis/les=53/54 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 54 pg[4.14( empty local-lis/les=53/54 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 54 pg[4.12( empty local-lis/les=53/54 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 54 pg[10.1a( v 37'16 (0'0,37'16] local-lis/les=53/54 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [1] r=0 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 54 pg[5.12( empty local-lis/les=53/54 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 54 pg[5.13( empty local-lis/les=53/54 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 54 pg[2.17( empty local-lis/les=53/54 n=0 ec=43/13 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 54 pg[5.11( empty local-lis/les=53/54 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[5.14( empty local-lis/les=53/54 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[7.1b( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[11.19( empty local-lis/les=53/54 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[2.11( empty local-lis/les=53/54 n=0 ec=43/13 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[8.1a( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=53/54 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=33'4 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[10.16( v 37'16 (0'0,37'16] local-lis/les=53/54 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[2.13( empty local-lis/les=53/54 n=0 ec=43/13 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 54 pg[3.12( empty local-lis/les=53/54 n=0 ec=43/15 lis/c=43/43 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 54 pg[2.15( empty local-lis/les=53/54 n=0 ec=43/13 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:14 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 2.c deep-scrub starts
Oct 11 04:28:14 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e54 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:28:14 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 2.c deep-scrub ok
Oct 11 04:28:15 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Oct 11 04:28:15 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Oct 11 04:28:15 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Oct 11 04:28:15 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Oct 11 04:28:15 compute-0 ceph-mon[74243]: osdmap e54: 3 total, 3 up, 3 in
Oct 11 04:28:15 compute-0 ceph-mon[74243]: pgmap v114: 305 pgs: 305 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:28:15 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]: dispatch
Oct 11 04:28:15 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Oct 11 04:28:15 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Oct 11 04:28:15 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 55 pg[6.a( v 38'39 (0'0,38'39] local-lis/les=47/48 n=1 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=55 pruub=12.338760376s) [1] r=-1 lpr=55 pi=[47,55)/1 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 96.990058899s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:15 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 55 pg[6.a( v 38'39 (0'0,38'39] local-lis/les=47/48 n=1 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=55 pruub=12.338729858s) [1] r=-1 lpr=55 pi=[47,55)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 96.990058899s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:15 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 55 pg[6.a( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[47,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:15 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 55 pg[6.6( v 38'39 (0'0,38'39] local-lis/les=47/48 n=2 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=55 pruub=12.338991165s) [1] r=-1 lpr=55 pi=[47,55)/1 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 96.990722656s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:15 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 55 pg[6.6( v 38'39 (0'0,38'39] local-lis/les=47/48 n=2 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=55 pruub=12.338959694s) [1] r=-1 lpr=55 pi=[47,55)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 96.990722656s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:15 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 55 pg[6.2( v 38'39 (0'0,38'39] local-lis/les=47/48 n=2 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=55 pruub=12.338981628s) [1] r=-1 lpr=55 pi=[47,55)/1 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 96.991058350s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:15 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 55 pg[6.e( v 38'39 (0'0,38'39] local-lis/les=47/48 n=1 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=55 pruub=12.339074135s) [1] r=-1 lpr=55 pi=[47,55)/1 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 96.991172791s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:15 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 55 pg[6.2( v 38'39 (0'0,38'39] local-lis/les=47/48 n=2 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=55 pruub=12.338914871s) [1] r=-1 lpr=55 pi=[47,55)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 96.991058350s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:15 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 55 pg[6.e( v 38'39 (0'0,38'39] local-lis/les=47/48 n=1 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=55 pruub=12.338883400s) [1] r=-1 lpr=55 pi=[47,55)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 96.991172791s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:15 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 55 pg[6.6( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[47,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:15 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 55 pg[6.2( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[47,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:15 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 55 pg[6.e( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[47,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:15 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 55 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=54/55 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:15 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 55 pg[9.1b( v 41'581 (0'0,41'581] local-lis/les=54/55 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:15 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 55 pg[9.3( v 41'581 (0'0,41'581] local-lis/les=54/55 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:15 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 55 pg[9.d( v 41'581 (0'0,41'581] local-lis/les=54/55 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:15 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 55 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=54/55 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=11}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:15 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 55 pg[9.1( v 41'581 (0'0,41'581] local-lis/les=54/55 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:15 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 55 pg[9.9( v 41'581 (0'0,41'581] local-lis/les=54/55 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:15 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 55 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=54/55 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:15 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 55 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=54/55 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:15 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 55 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=54/55 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:15 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 55 pg[9.b( v 41'581 (0'0,41'581] local-lis/les=54/55 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:15 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 55 pg[9.5( v 41'581 (0'0,41'581] local-lis/les=54/55 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:15 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 55 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=54/55 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:15 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 55 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=54/55 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:15 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 55 pg[9.11( v 41'581 (0'0,41'581] local-lis/les=54/55 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:15 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 55 pg[9.1d( v 41'581 (0'0,41'581] local-lis/les=54/55 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:16 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v116: 305 pgs: 305 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:28:16 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Oct 11 04:28:16 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} v 0) v1
Oct 11 04:28:16 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Oct 11 04:28:16 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0) v1
Oct 11 04:28:16 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Oct 11 04:28:16 compute-0 ceph-mon[74243]: 2.c deep-scrub starts
Oct 11 04:28:16 compute-0 ceph-mon[74243]: 2.c deep-scrub ok
Oct 11 04:28:16 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Oct 11 04:28:16 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Oct 11 04:28:16 compute-0 ceph-mon[74243]: osdmap e55: 3 total, 3 up, 3 in
Oct 11 04:28:16 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Oct 11 04:28:16 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Oct 11 04:28:16 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 56 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=54/55 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.431981087s) [0] async=[0] r=-1 lpr=56 pi=[49,56)/1 crt=41'581 lcod 0'0 mlcod 0'0 active pruub 95.607887268s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:16 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 56 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=54/55 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.431826591s) [0] r=-1 lpr=56 pi=[49,56)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 95.607887268s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:16 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 56 pg[9.d( v 41'581 (0'0,41'581] local-lis/les=54/55 n=7 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.431783676s) [0] async=[0] r=-1 lpr=56 pi=[49,56)/1 crt=41'581 lcod 0'0 mlcod 0'0 active pruub 95.608161926s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:16 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 56 pg[9.d( v 41'581 (0'0,41'581] local-lis/les=54/55 n=7 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.431659698s) [0] r=-1 lpr=56 pi=[49,56)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 95.608161926s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:16 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 56 pg[9.3( v 41'581 (0'0,41'581] local-lis/les=54/55 n=7 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.429828644s) [0] async=[0] r=-1 lpr=56 pi=[49,56)/1 crt=41'581 lcod 0'0 mlcod 0'0 active pruub 95.608100891s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:16 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 56 pg[9.3( v 41'581 (0'0,41'581] local-lis/les=54/55 n=7 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.429665565s) [0] r=-1 lpr=56 pi=[49,56)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 95.608100891s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:16 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 56 pg[9.1b( v 41'581 (0'0,41'581] local-lis/les=54/55 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.429355621s) [0] async=[0] r=-1 lpr=56 pi=[49,56)/1 crt=41'581 lcod 0'0 mlcod 0'0 active pruub 95.608062744s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:16 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 56 pg[9.1b( v 41'581 (0'0,41'581] local-lis/les=54/55 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.429283142s) [0] r=-1 lpr=56 pi=[49,56)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 95.608062744s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 56 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 luod=0'0 crt=41'581 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 56 pg[9.1b( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 luod=0'0 crt=41'581 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 56 pg[9.1b( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=41'581 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:16 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 56 pg[6.2( v 38'39 (0'0,38'39] local-lis/les=55/56 n=2 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[47,55)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 56 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=41'581 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 56 pg[9.3( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 luod=0'0 crt=41'581 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 56 pg[9.3( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=41'581 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:16 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 56 pg[6.e( v 38'39 lc 34'19 (0'0,38'39] local-lis/les=55/56 n=1 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[47,55)/1 crt=38'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:16 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 56 pg[6.a( v 38'39 (0'0,38'39] local-lis/les=55/56 n=1 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[47,55)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:16 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 56 pg[6.6( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=55/56 n=2 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[47,55)/1 crt=38'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 56 pg[9.d( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 luod=0'0 crt=41'581 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 56 pg[9.d( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=41'581 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:16 compute-0 ceph-mgr[74542]: [progress INFO root] Completed event f5e2ea0f-88e5-41a4-adc7-c8ec0448099b (Global Recovery Event) in 15 seconds
Oct 11 04:28:17 compute-0 ceph-mon[74243]: pgmap v116: 305 pgs: 305 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:28:17 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Oct 11 04:28:17 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Oct 11 04:28:17 compute-0 ceph-mon[74243]: osdmap e56: 3 total, 3 up, 3 in
Oct 11 04:28:17 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Oct 11 04:28:17 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Oct 11 04:28:17 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Oct 11 04:28:17 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Oct 11 04:28:17 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Oct 11 04:28:17 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 57 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 luod=0'0 crt=41'581 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:17 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 57 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=41'581 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:17 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 57 pg[9.11( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 luod=0'0 crt=41'581 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:17 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 57 pg[9.11( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=41'581 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:17 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 57 pg[9.5( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 luod=0'0 crt=41'581 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:17 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 57 pg[9.5( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=41'581 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:17 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 57 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 luod=0'0 crt=41'581 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:17 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 57 pg[9.b( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 luod=0'0 crt=41'581 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:17 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 57 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=41'581 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:17 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 57 pg[9.b( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=41'581 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:17 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 57 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 luod=0'0 crt=41'581 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:17 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 57 pg[9.9( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 luod=0'0 crt=41'581 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:17 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 57 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=41'581 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:17 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 57 pg[9.9( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=41'581 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:17 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 57 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 luod=0'0 crt=41'581 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:17 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 57 pg[9.1( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 luod=0'0 crt=41'581 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:17 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 57 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=41'581 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:17 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 57 pg[9.1( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=41'581 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:17 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 57 pg[9.1d( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 luod=0'0 crt=41'581 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:17 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 57 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 luod=0'0 crt=41'581 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:17 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 57 pg[9.1d( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=41'581 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:17 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 57 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=41'581 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:17 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 57 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 luod=0'0 crt=41'581 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:17 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 57 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=41'581 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:17 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 57 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=54/55 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.428891182s) [0] async=[0] r=-1 lpr=57 pi=[49,57)/1 crt=41'581 lcod 0'0 mlcod 0'0 active pruub 95.608627319s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:17 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 57 pg[9.11( v 41'581 (0'0,41'581] local-lis/les=54/55 n=7 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.428862572s) [0] async=[0] r=-1 lpr=57 pi=[49,57)/1 crt=41'581 lcod 0'0 mlcod 0'0 active pruub 95.608734131s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:17 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 57 pg[9.5( v 41'581 (0'0,41'581] local-lis/les=54/55 n=7 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.428665161s) [0] async=[0] r=-1 lpr=57 pi=[49,57)/1 crt=41'581 lcod 0'0 mlcod 0'0 active pruub 95.608551025s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:17 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 57 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=54/55 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.428381920s) [0] async=[0] r=-1 lpr=57 pi=[49,57)/1 crt=41'581 lcod 0'0 mlcod 0'0 active pruub 95.608375549s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:17 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 57 pg[9.5( v 41'581 (0'0,41'581] local-lis/les=54/55 n=7 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.428538322s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 95.608551025s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:17 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 57 pg[9.11( v 41'581 (0'0,41'581] local-lis/les=54/55 n=7 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.428720474s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 95.608734131s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:17 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 57 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=54/55 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.428313255s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 95.608375549s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:17 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 57 pg[9.b( v 41'581 (0'0,41'581] local-lis/les=54/55 n=7 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.428242683s) [0] async=[0] r=-1 lpr=57 pi=[49,57)/1 crt=41'581 lcod 0'0 mlcod 0'0 active pruub 95.608436584s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:17 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 57 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=54/55 n=7 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.428416252s) [0] async=[0] r=-1 lpr=57 pi=[49,57)/1 crt=41'581 lcod 0'0 mlcod 0'0 active pruub 95.608406067s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:17 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 57 pg[9.b( v 41'581 (0'0,41'581] local-lis/les=54/55 n=7 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.428207397s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 95.608436584s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:17 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 57 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=54/55 n=7 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.428082466s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 95.608406067s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:17 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 57 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=54/55 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.428188324s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 95.608627319s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:17 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 57 pg[9.9( v 41'581 (0'0,41'581] local-lis/les=54/55 n=7 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.427858353s) [0] async=[0] r=-1 lpr=57 pi=[49,57)/1 crt=41'581 lcod 0'0 mlcod 0'0 active pruub 95.608314514s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:17 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 57 pg[9.9( v 41'581 (0'0,41'581] local-lis/les=54/55 n=7 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.427736282s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 95.608314514s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:17 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 57 pg[6.3( v 38'39 (0'0,38'39] local-lis/les=53/54 n=2 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=12.987690926s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=38'39 mlcod 38'39 active pruub 94.168334961s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:17 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 57 pg[6.3( v 38'39 (0'0,38'39] local-lis/les=53/54 n=2 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=12.987617493s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=38'39 mlcod 0'0 unknown NOTIFY pruub 94.168334961s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:17 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 57 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=54/55 n=7 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.427368164s) [0] async=[0] r=-1 lpr=57 pi=[49,57)/1 crt=41'581 lcod 0'0 mlcod 0'0 active pruub 95.608268738s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:17 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 57 pg[6.f( v 38'39 (0'0,38'39] local-lis/les=53/54 n=1 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=12.987555504s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=38'39 mlcod 38'39 active pruub 94.168632507s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:17 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 57 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=54/55 n=7 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.427257538s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 95.608268738s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:17 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 57 pg[6.f( v 38'39 (0'0,38'39] local-lis/les=53/54 n=1 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=12.987492561s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=38'39 mlcod 0'0 unknown NOTIFY pruub 94.168632507s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:17 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 57 pg[6.7( v 38'39 (0'0,38'39] local-lis/les=53/54 n=1 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=12.987785339s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=38'39 mlcod 38'39 active pruub 94.169036865s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:17 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 57 pg[6.7( v 38'39 (0'0,38'39] local-lis/les=53/54 n=1 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=12.987749100s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=38'39 mlcod 0'0 unknown NOTIFY pruub 94.169036865s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:17 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 57 pg[6.b( v 38'39 (0'0,38'39] local-lis/les=53/54 n=1 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=12.987565041s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=38'39 mlcod 38'39 active pruub 94.169090271s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:17 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 57 pg[6.b( v 38'39 (0'0,38'39] local-lis/les=53/54 n=1 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=12.987518311s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=38'39 mlcod 0'0 unknown NOTIFY pruub 94.169090271s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:17 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 57 pg[9.1( v 41'581 (0'0,41'581] local-lis/les=54/55 n=7 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.426578522s) [0] async=[0] r=-1 lpr=57 pi=[49,57)/1 crt=41'581 lcod 0'0 mlcod 0'0 active pruub 95.608245850s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:17 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 57 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=54/55 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.426361084s) [0] async=[0] r=-1 lpr=57 pi=[49,57)/1 crt=41'581 lcod 0'0 mlcod 0'0 active pruub 95.608215332s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:17 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 57 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=54/55 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.426769257s) [0] async=[0] r=-1 lpr=57 pi=[49,57)/1 crt=41'581 lcod 0'0 mlcod 0'0 active pruub 95.608703613s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:17 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 57 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=54/55 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.426269531s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 95.608215332s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:17 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 57 pg[9.1d( v 41'581 (0'0,41'581] local-lis/les=54/55 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.426637650s) [0] async=[0] r=-1 lpr=57 pi=[49,57)/1 crt=41'581 lcod 0'0 mlcod 0'0 active pruub 95.608657837s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:17 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 57 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=54/55 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.426643372s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 95.608703613s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:17 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 57 pg[6.3( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:17 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 57 pg[9.1d( v 41'581 (0'0,41'581] local-lis/les=54/55 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.426589966s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 95.608657837s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:17 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 57 pg[9.1( v 41'581 (0'0,41'581] local-lis/les=54/55 n=7 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.426471710s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 95.608245850s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:17 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 57 pg[6.f( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:17 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 57 pg[6.7( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:17 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 57 pg[6.b( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:17 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 57 pg[9.3( v 41'581 (0'0,41'581] local-lis/les=56/57 n=7 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=41'581 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:17 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 57 pg[9.d( v 41'581 (0'0,41'581] local-lis/les=56/57 n=7 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=41'581 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:17 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 57 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=56/57 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=41'581 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:17 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 57 pg[9.1b( v 41'581 (0'0,41'581] local-lis/les=56/57 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=41'581 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:17 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 2.e deep-scrub starts
Oct 11 04:28:17 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 2.e deep-scrub ok
Oct 11 04:28:17 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 4.b scrub starts
Oct 11 04:28:17 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 4.b scrub ok
Oct 11 04:28:18 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v119: 305 pgs: 4 active+recovery_wait+remapped, 1 active+recovering+remapped, 4 peering, 7 active+remapped, 289 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 26/245 objects misplaced (10.612%); 955 B/s, 2 keys/s, 24 objects/s recovering
Oct 11 04:28:18 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Oct 11 04:28:18 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Oct 11 04:28:18 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Oct 11 04:28:18 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Oct 11 04:28:18 compute-0 ceph-mon[74243]: osdmap e57: 3 total, 3 up, 3 in
Oct 11 04:28:18 compute-0 ceph-mon[74243]: 4.b scrub starts
Oct 11 04:28:18 compute-0 ceph-mon[74243]: 4.b scrub ok
Oct 11 04:28:18 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Oct 11 04:28:18 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 58 pg[9.11( v 41'581 (0'0,41'581] local-lis/les=57/58 n=7 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=41'581 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:18 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 58 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=41'581 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:18 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 58 pg[9.b( v 41'581 (0'0,41'581] local-lis/les=57/58 n=7 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=41'581 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:18 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 58 pg[6.7( v 38'39 lc 34'21 (0'0,38'39] local-lis/les=57/58 n=1 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=38'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:18 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 58 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=57/58 n=7 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=41'581 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:18 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 58 pg[6.b( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=57/58 n=1 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=38'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:18 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 58 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=41'581 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:18 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 58 pg[9.9( v 41'581 (0'0,41'581] local-lis/les=57/58 n=7 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=41'581 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:18 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 58 pg[6.3( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=57/58 n=2 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=38'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:18 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 58 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=57/58 n=7 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=41'581 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:18 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 58 pg[9.5( v 41'581 (0'0,41'581] local-lis/les=57/58 n=7 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=41'581 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:18 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 58 pg[9.1( v 41'581 (0'0,41'581] local-lis/les=57/58 n=7 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=41'581 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:18 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 58 pg[6.f( v 38'39 lc 34'1 (0'0,38'39] local-lis/les=57/58 n=1 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=38'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:18 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 58 pg[9.1d( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=41'581 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:18 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 58 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=41'581 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:18 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 58 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=41'581 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:18 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 2.10 scrub starts
Oct 11 04:28:18 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 2.10 scrub ok
Oct 11 04:28:18 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 3.b scrub starts
Oct 11 04:28:19 compute-0 ceph-mon[74243]: 2.e deep-scrub starts
Oct 11 04:28:19 compute-0 ceph-mon[74243]: 2.e deep-scrub ok
Oct 11 04:28:19 compute-0 ceph-mon[74243]: pgmap v119: 305 pgs: 4 active+recovery_wait+remapped, 1 active+recovering+remapped, 4 peering, 7 active+remapped, 289 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 26/245 objects misplaced (10.612%); 955 B/s, 2 keys/s, 24 objects/s recovering
Oct 11 04:28:19 compute-0 ceph-mon[74243]: osdmap e58: 3 total, 3 up, 3 in
Oct 11 04:28:19 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 3.b scrub ok
Oct 11 04:28:19 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 2.12 deep-scrub starts
Oct 11 04:28:19 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 2.12 deep-scrub ok
Oct 11 04:28:19 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 4.c scrub starts
Oct 11 04:28:19 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 4.c scrub ok
Oct 11 04:28:19 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e58 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:28:20 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v121: 305 pgs: 4 active+recovery_wait+remapped, 1 active+recovering+remapped, 4 peering, 7 active+remapped, 289 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 26/245 objects misplaced (10.612%); 766 B/s, 2 keys/s, 19 objects/s recovering
Oct 11 04:28:20 compute-0 ceph-mon[74243]: 2.10 scrub starts
Oct 11 04:28:20 compute-0 ceph-mon[74243]: 2.10 scrub ok
Oct 11 04:28:20 compute-0 ceph-mon[74243]: 3.b scrub starts
Oct 11 04:28:20 compute-0 ceph-mon[74243]: 3.b scrub ok
Oct 11 04:28:20 compute-0 ceph-mon[74243]: 4.c scrub starts
Oct 11 04:28:20 compute-0 ceph-mon[74243]: 4.c scrub ok
Oct 11 04:28:20 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Oct 11 04:28:20 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Oct 11 04:28:20 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 4.15 scrub starts
Oct 11 04:28:20 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 4.15 scrub ok
Oct 11 04:28:20 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 3.d scrub starts
Oct 11 04:28:20 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 3.d scrub ok
Oct 11 04:28:21 compute-0 ceph-mgr[74542]: [progress INFO root] Writing back 16 completed events
Oct 11 04:28:21 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct 11 04:28:21 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:28:21 compute-0 ceph-mon[74243]: 2.12 deep-scrub starts
Oct 11 04:28:21 compute-0 ceph-mon[74243]: 2.12 deep-scrub ok
Oct 11 04:28:21 compute-0 ceph-mon[74243]: pgmap v121: 305 pgs: 4 active+recovery_wait+remapped, 1 active+recovering+remapped, 4 peering, 7 active+remapped, 289 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 26/245 objects misplaced (10.612%); 766 B/s, 2 keys/s, 19 objects/s recovering
Oct 11 04:28:21 compute-0 ceph-mon[74243]: 4.15 scrub starts
Oct 11 04:28:21 compute-0 ceph-mon[74243]: 4.15 scrub ok
Oct 11 04:28:21 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:28:21 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Oct 11 04:28:21 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Oct 11 04:28:21 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Oct 11 04:28:21 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Oct 11 04:28:22 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v122: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 844 B/s, 3 keys/s, 21 objects/s recovering
Oct 11 04:28:22 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} v 0) v1
Oct 11 04:28:22 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]: dispatch
Oct 11 04:28:22 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0) v1
Oct 11 04:28:22 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Oct 11 04:28:22 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Oct 11 04:28:22 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Oct 11 04:28:22 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Oct 11 04:28:22 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Oct 11 04:28:22 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Oct 11 04:28:22 compute-0 ceph-mon[74243]: 2.14 scrub starts
Oct 11 04:28:22 compute-0 ceph-mon[74243]: 2.14 scrub ok
Oct 11 04:28:22 compute-0 ceph-mon[74243]: 3.d scrub starts
Oct 11 04:28:22 compute-0 ceph-mon[74243]: 3.d scrub ok
Oct 11 04:28:22 compute-0 ceph-mon[74243]: 4.16 scrub starts
Oct 11 04:28:22 compute-0 ceph-mon[74243]: 4.16 scrub ok
Oct 11 04:28:22 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]: dispatch
Oct 11 04:28:22 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Oct 11 04:28:22 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Oct 11 04:28:22 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Oct 11 04:28:23 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 59 pg[6.4( v 38'39 (0'0,38'39] local-lis/les=47/48 n=2 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=59 pruub=12.655724525s) [1] r=-1 lpr=59 pi=[47,59)/1 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 104.990325928s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:23 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 59 pg[6.4( v 38'39 (0'0,38'39] local-lis/les=47/48 n=2 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=59 pruub=12.655656815s) [1] r=-1 lpr=59 pi=[47,59)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 104.990325928s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:23 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 59 pg[6.c( v 38'39 (0'0,38'39] local-lis/les=47/48 n=1 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=59 pruub=12.655759811s) [1] r=-1 lpr=59 pi=[47,59)/1 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 104.991310120s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:23 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 59 pg[6.c( v 38'39 (0'0,38'39] local-lis/les=47/48 n=1 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=59 pruub=12.655717850s) [1] r=-1 lpr=59 pi=[47,59)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 104.991310120s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:23 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 59 pg[6.c( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=59) [1] r=0 lpr=59 pi=[47,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:23 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 59 pg[6.4( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=59) [1] r=0 lpr=59 pi=[47,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:23 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Oct 11 04:28:23 compute-0 ceph-mon[74243]: 2.1a scrub starts
Oct 11 04:28:23 compute-0 ceph-mon[74243]: 2.1a scrub ok
Oct 11 04:28:23 compute-0 ceph-mon[74243]: pgmap v122: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 844 B/s, 3 keys/s, 21 objects/s recovering
Oct 11 04:28:23 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Oct 11 04:28:23 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Oct 11 04:28:23 compute-0 ceph-mon[74243]: osdmap e59: 3 total, 3 up, 3 in
Oct 11 04:28:23 compute-0 ceph-mon[74243]: 4.17 scrub starts
Oct 11 04:28:23 compute-0 ceph-mon[74243]: 4.17 scrub ok
Oct 11 04:28:23 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Oct 11 04:28:23 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Oct 11 04:28:23 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 60 pg[6.4( v 38'39 lc 34'15 (0'0,38'39] local-lis/les=59/60 n=2 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=59) [1] r=0 lpr=59 pi=[47,59)/1 crt=38'39 lcod 0'0 mlcod 0'0 active+degraded m=4 mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:23 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 60 pg[6.c( v 38'39 lc 34'17 (0'0,38'39] local-lis/les=59/60 n=1 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=59) [1] r=0 lpr=59 pi=[47,59)/1 crt=38'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:23 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Oct 11 04:28:23 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Oct 11 04:28:23 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 4.19 deep-scrub starts
Oct 11 04:28:23 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 4.19 deep-scrub ok
Oct 11 04:28:24 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v125: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 205 B/s, 1 keys/s, 5 objects/s recovering
Oct 11 04:28:24 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} v 0) v1
Oct 11 04:28:24 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]: dispatch
Oct 11 04:28:24 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0) v1
Oct 11 04:28:24 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Oct 11 04:28:24 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Oct 11 04:28:24 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Oct 11 04:28:24 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Oct 11 04:28:24 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Oct 11 04:28:24 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Oct 11 04:28:24 compute-0 ceph-mon[74243]: osdmap e60: 3 total, 3 up, 3 in
Oct 11 04:28:24 compute-0 ceph-mon[74243]: 4.19 deep-scrub starts
Oct 11 04:28:24 compute-0 ceph-mon[74243]: 4.19 deep-scrub ok
Oct 11 04:28:24 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]: dispatch
Oct 11 04:28:24 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Oct 11 04:28:24 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 61 pg[6.d( v 38'39 (0'0,38'39] local-lis/les=53/54 n=1 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=13.822314262s) [0] r=-1 lpr=61 pi=[53,61)/1 crt=38'39 mlcod 38'39 active pruub 102.169158936s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:24 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 61 pg[6.d( v 38'39 (0'0,38'39] local-lis/les=53/54 n=1 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=13.822219849s) [0] r=-1 lpr=61 pi=[53,61)/1 crt=38'39 mlcod 0'0 unknown NOTIFY pruub 102.169158936s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:24 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 61 pg[6.5( v 38'39 (0'0,38'39] local-lis/les=53/54 n=2 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=13.821963310s) [0] r=-1 lpr=61 pi=[53,61)/1 crt=38'39 mlcod 38'39 active pruub 102.169288635s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:24 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 61 pg[6.5( v 38'39 (0'0,38'39] local-lis/les=53/54 n=2 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=13.821911812s) [0] r=-1 lpr=61 pi=[53,61)/1 crt=38'39 mlcod 0'0 unknown NOTIFY pruub 102.169288635s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:24 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 61 pg[6.d( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:24 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 61 pg[6.5( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:24 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Oct 11 04:28:24 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Oct 11 04:28:24 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 4.1d deep-scrub starts
Oct 11 04:28:24 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 4.1d deep-scrub ok
Oct 11 04:28:24 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e61 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:28:25 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Oct 11 04:28:25 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Oct 11 04:28:25 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Oct 11 04:28:25 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Oct 11 04:28:25 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Oct 11 04:28:25 compute-0 ceph-mon[74243]: 2.1e scrub starts
Oct 11 04:28:25 compute-0 ceph-mon[74243]: 2.1e scrub ok
Oct 11 04:28:25 compute-0 ceph-mon[74243]: pgmap v125: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 205 B/s, 1 keys/s, 5 objects/s recovering
Oct 11 04:28:25 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Oct 11 04:28:25 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Oct 11 04:28:25 compute-0 ceph-mon[74243]: osdmap e61: 3 total, 3 up, 3 in
Oct 11 04:28:25 compute-0 ceph-mon[74243]: 4.1d deep-scrub starts
Oct 11 04:28:25 compute-0 ceph-mon[74243]: 4.1d deep-scrub ok
Oct 11 04:28:25 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 62 pg[6.5( v 38'39 lc 34'11 (0'0,38'39] local-lis/les=61/62 n=2 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=38'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:25 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 62 pg[6.d( v 38'39 lc 34'13 (0'0,38'39] local-lis/les=61/62 n=1 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=38'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:26 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v128: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:28:26 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} v 0) v1
Oct 11 04:28:26 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]: dispatch
Oct 11 04:28:26 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0) v1
Oct 11 04:28:26 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Oct 11 04:28:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:28:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:28:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:28:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:28:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:28:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:28:26 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Oct 11 04:28:26 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Oct 11 04:28:26 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Oct 11 04:28:26 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Oct 11 04:28:26 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Oct 11 04:28:26 compute-0 ceph-mon[74243]: 5.6 scrub starts
Oct 11 04:28:26 compute-0 ceph-mon[74243]: 5.6 scrub ok
Oct 11 04:28:26 compute-0 ceph-mon[74243]: 3.10 scrub starts
Oct 11 04:28:26 compute-0 ceph-mon[74243]: 3.10 scrub ok
Oct 11 04:28:26 compute-0 ceph-mon[74243]: osdmap e62: 3 total, 3 up, 3 in
Oct 11 04:28:26 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]: dispatch
Oct 11 04:28:26 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Oct 11 04:28:27 compute-0 ceph-mon[74243]: pgmap v128: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:28:27 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Oct 11 04:28:27 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Oct 11 04:28:27 compute-0 ceph-mon[74243]: osdmap e63: 3 total, 3 up, 3 in
Oct 11 04:28:27 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 5.8 scrub starts
Oct 11 04:28:27 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 5.8 scrub ok
Oct 11 04:28:28 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v130: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 357 B/s, 1 objects/s recovering
Oct 11 04:28:28 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} v 0) v1
Oct 11 04:28:28 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]: dispatch
Oct 11 04:28:28 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0) v1
Oct 11 04:28:28 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Oct 11 04:28:28 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Oct 11 04:28:28 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]: dispatch
Oct 11 04:28:28 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Oct 11 04:28:28 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Oct 11 04:28:28 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Oct 11 04:28:28 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Oct 11 04:28:28 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Oct 11 04:28:28 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 64 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=57/58 n=7 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=64 pruub=13.785065651s) [2] r=-1 lpr=64 pi=[57,64)/1 crt=41'581 mlcod 0'0 active pruub 111.256408691s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:28 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 64 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=57/58 n=7 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=64 pruub=13.785070419s) [2] r=-1 lpr=64 pi=[57,64)/1 crt=41'581 mlcod 0'0 active pruub 111.256507874s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:28 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 64 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=57/58 n=7 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=64 pruub=13.784944534s) [2] r=-1 lpr=64 pi=[57,64)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 111.256408691s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:28 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 64 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=57/58 n=7 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=64 pruub=13.785003662s) [2] r=-1 lpr=64 pi=[57,64)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 111.256507874s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:28 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 64 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=64 pruub=13.785189629s) [2] r=-1 lpr=64 pi=[57,64)/1 crt=41'581 mlcod 0'0 active pruub 111.256446838s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:28 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 64 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=64 pruub=13.784698486s) [2] r=-1 lpr=64 pi=[57,64)/1 crt=41'581 mlcod 0'0 active pruub 111.256660461s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:28 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 64 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=64 pruub=13.784675598s) [2] r=-1 lpr=64 pi=[57,64)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 111.256660461s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:28 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 64 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=64 pruub=13.784265518s) [2] r=-1 lpr=64 pi=[57,64)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 111.256446838s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:28 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 64 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=64) [2] r=0 lpr=64 pi=[57,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:28 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 64 pg[9.7( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=64) [2] r=0 lpr=64 pi=[57,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:28 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 64 pg[9.f( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=64) [2] r=0 lpr=64 pi=[57,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:28 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 64 pg[9.17( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=64) [2] r=0 lpr=64 pi=[57,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:28 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 5.a scrub starts
Oct 11 04:28:28 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 5.a scrub ok
Oct 11 04:28:28 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 63 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=63 pruub=9.441020012s) [2] r=-1 lpr=63 pi=[49,63)/1 crt=41'581 lcod 0'0 mlcod 0'0 active pruub 101.968696594s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:28 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 63 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=63 pruub=9.450304985s) [2] r=-1 lpr=63 pi=[49,63)/1 crt=41'581 lcod 0'0 mlcod 0'0 active pruub 101.978370667s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:28 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 64 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=63 pruub=9.450239182s) [2] r=-1 lpr=63 pi=[49,63)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.978370667s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:28 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 64 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=63 pruub=9.440475464s) [2] r=-1 lpr=63 pi=[49,63)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.968696594s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:28 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 63 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=63 pruub=9.450131416s) [2] r=-1 lpr=63 pi=[49,63)/1 crt=41'581 lcod 0'0 mlcod 0'0 active pruub 101.978530884s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:28 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 63 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=63 pruub=9.450095177s) [2] r=-1 lpr=63 pi=[49,63)/1 crt=41'581 lcod 0'0 mlcod 0'0 active pruub 101.978752136s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:28 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 64 pg[9.e( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=63) [2] r=0 lpr=64 pi=[49,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:28 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 64 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=63 pruub=9.450055122s) [2] r=-1 lpr=63 pi=[49,63)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.978752136s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:28 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 64 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=63 pruub=9.449581146s) [2] r=-1 lpr=63 pi=[49,63)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.978530884s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:28 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 64 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=63) [2] r=0 lpr=64 pi=[49,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:28 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 64 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=63) [2] r=0 lpr=64 pi=[49,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:28 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 64 pg[9.6( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=63) [2] r=0 lpr=64 pi=[49,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:29 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Oct 11 04:28:29 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Oct 11 04:28:29 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Oct 11 04:28:29 compute-0 ceph-mon[74243]: 5.8 scrub starts
Oct 11 04:28:29 compute-0 ceph-mon[74243]: 5.8 scrub ok
Oct 11 04:28:29 compute-0 ceph-mon[74243]: pgmap v130: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 357 B/s, 1 objects/s recovering
Oct 11 04:28:29 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Oct 11 04:28:29 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Oct 11 04:28:29 compute-0 ceph-mon[74243]: osdmap e64: 3 total, 3 up, 3 in
Oct 11 04:28:29 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Oct 11 04:28:29 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Oct 11 04:28:29 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 65 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:29 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 65 pg[9.e( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:29 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 65 pg[9.17( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[57,65)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:29 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 65 pg[9.e( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:29 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 65 pg[9.17( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[57,65)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:29 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 65 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:29 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 65 pg[9.f( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[57,65)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:29 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 65 pg[9.f( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[57,65)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:29 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 65 pg[9.7( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[57,65)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:29 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 65 pg[9.7( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[57,65)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:29 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 65 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[57,65)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:29 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 65 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:29 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 65 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[57,65)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:29 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 65 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:29 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 65 pg[9.6( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:29 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 65 pg[9.6( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:29 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 65 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:29 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 65 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:29 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 65 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:29 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 65 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:29 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 65 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:29 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 65 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:29 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 65 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:29 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 65 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:29 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 65 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=57/58 n=7 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:29 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 65 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:29 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 65 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=57/58 n=7 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:29 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 65 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=57/58 n=7 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:29 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 65 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:29 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 65 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:29 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 65 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=57/58 n=7 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:29 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 65 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:29 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:28:30 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v133: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 359 B/s, 1 objects/s recovering
Oct 11 04:28:30 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} v 0) v1
Oct 11 04:28:30 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]: dispatch
Oct 11 04:28:30 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0) v1
Oct 11 04:28:30 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Oct 11 04:28:30 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Oct 11 04:28:30 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Oct 11 04:28:30 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Oct 11 04:28:30 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Oct 11 04:28:30 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Oct 11 04:28:30 compute-0 ceph-mon[74243]: 5.a scrub starts
Oct 11 04:28:30 compute-0 ceph-mon[74243]: 5.a scrub ok
Oct 11 04:28:30 compute-0 ceph-mon[74243]: 3.13 scrub starts
Oct 11 04:28:30 compute-0 ceph-mon[74243]: 3.13 scrub ok
Oct 11 04:28:30 compute-0 ceph-mon[74243]: osdmap e65: 3 total, 3 up, 3 in
Oct 11 04:28:30 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]: dispatch
Oct 11 04:28:30 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Oct 11 04:28:30 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 66 pg[6.8( v 38'39 (0'0,38'39] local-lis/les=47/48 n=1 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=66 pruub=13.493541718s) [2] r=-1 lpr=66 pi=[47,66)/1 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 112.991004944s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:30 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 66 pg[6.8( v 38'39 (0'0,38'39] local-lis/les=47/48 n=1 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=66 pruub=13.493486404s) [2] r=-1 lpr=66 pi=[47,66)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 112.991004944s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:30 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 66 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=66 pruub=15.534359932s) [2] r=-1 lpr=66 pi=[49,66)/1 crt=41'581 lcod 0'0 mlcod 0'0 active pruub 109.978454590s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:30 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 66 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=66 pruub=15.534199715s) [2] r=-1 lpr=66 pi=[49,66)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 109.978454590s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:30 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 66 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=66 pruub=15.534417152s) [2] r=-1 lpr=66 pi=[49,66)/1 crt=41'581 lcod 0'0 mlcod 0'0 active pruub 109.978889465s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:30 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 66 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=66 pruub=15.534354210s) [2] r=-1 lpr=66 pi=[49,66)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 109.978889465s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:30 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[6.8( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:30 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=66) [2] r=0 lpr=66 pi=[49,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:30 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=66) [2] r=0 lpr=66 pi=[49,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:30 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 66 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:30 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 66 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:30 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 66 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:30 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 66 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:30 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 66 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:30 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 66 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:30 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 66 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:30 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 66 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:31 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Oct 11 04:28:31 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Oct 11 04:28:31 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Oct 11 04:28:31 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 67 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.012018204s) [2] async=[2] r=-1 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 active pruub 110.468261719s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:31 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 67 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.011908531s) [2] r=-1 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.468261719s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:31 compute-0 ceph-mon[74243]: pgmap v133: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 359 B/s, 1 objects/s recovering
Oct 11 04:28:31 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Oct 11 04:28:31 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Oct 11 04:28:31 compute-0 ceph-mon[74243]: osdmap e66: 3 total, 3 up, 3 in
Oct 11 04:28:31 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 67 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.010913849s) [2] async=[2] r=-1 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 active pruub 110.468299866s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:31 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 67 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.010800362s) [2] r=-1 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.468299866s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:31 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 67 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=14.998461723s) [2] async=[2] r=-1 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 active pruub 110.456581116s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:31 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 67 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=67) [2]/[1] r=0 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:31 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 67 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=67) [2]/[1] r=0 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:31 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 67 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=14.998378754s) [2] r=-1 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.456581116s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:31 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 67 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=67) [2]/[1] r=0 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:31 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 67 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.009805679s) [2] async=[2] r=-1 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 active pruub 110.468399048s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:31 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 67 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.009521484s) [2] r=-1 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.468399048s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:31 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 67 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=67) [2]/[1] r=0 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:31 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[49,67)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:31 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 67 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67 pruub=15.003335953s) [2] async=[2] r=-1 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 41'581 active pruub 115.523399353s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:31 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 67 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67 pruub=15.003097534s) [2] async=[2] r=-1 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 41'581 active pruub 115.523208618s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:31 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 67 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67 pruub=15.003224373s) [2] r=-1 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 115.523399353s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:31 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 67 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67 pruub=15.003015518s) [2] r=-1 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 115.523208618s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:31 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 luod=0'0 crt=41'581 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:31 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 luod=0'0 crt=41'581 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:31 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[49,67)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:31 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:31 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[49,67)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:31 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[49,67)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:31 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 luod=0'0 crt=41'581 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:31 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:31 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:31 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 67 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67 pruub=15.001225471s) [2] async=[2] r=-1 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 41'581 active pruub 115.523338318s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:31 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 67 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67 pruub=15.001125336s) [2] r=-1 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 115.523338318s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:31 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 67 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67 pruub=15.001211166s) [2] async=[2] r=-1 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 41'581 active pruub 115.523513794s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:31 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 luod=0'0 crt=41'581 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:31 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 67 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67 pruub=15.001120567s) [2] r=-1 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 115.523513794s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:31 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 luod=0'0 crt=41'581 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:31 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:31 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 luod=0'0 crt=41'581 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:31 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:31 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 luod=0'0 crt=41'581 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:31 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:31 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:31 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 luod=0'0 crt=41'581 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:31 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:31 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[6.8( v 38'39 (0'0,38'39] local-lis/les=66/67 n=1 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:31 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 4.1e deep-scrub starts
Oct 11 04:28:31 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 4.1e deep-scrub ok
Oct 11 04:28:32 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v136: 305 pgs: 2 remapped+peering, 303 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:28:32 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Oct 11 04:28:32 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Oct 11 04:28:32 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Oct 11 04:28:32 compute-0 ceph-mon[74243]: osdmap e67: 3 total, 3 up, 3 in
Oct 11 04:28:32 compute-0 ceph-mon[74243]: 4.1e deep-scrub starts
Oct 11 04:28:32 compute-0 ceph-mon[74243]: 4.1e deep-scrub ok
Oct 11 04:28:32 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 68 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:32 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 68 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:32 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 68 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=67/68 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:32 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 68 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=67/68 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:32 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 68 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=67/68 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:32 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 68 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:32 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 68 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:32 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 68 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=67/68 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:32 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 68 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:32 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 68 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=67/68 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:32 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Oct 11 04:28:32 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Oct 11 04:28:33 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Oct 11 04:28:33 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Oct 11 04:28:33 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Oct 11 04:28:33 compute-0 ceph-mon[74243]: pgmap v136: 305 pgs: 2 remapped+peering, 303 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:28:33 compute-0 ceph-mon[74243]: osdmap e68: 3 total, 3 up, 3 in
Oct 11 04:28:33 compute-0 ceph-mon[74243]: 4.1f scrub starts
Oct 11 04:28:33 compute-0 ceph-mon[74243]: 4.1f scrub ok
Oct 11 04:28:33 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 69 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=67/68 n=7 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=69 pruub=14.990410805s) [2] async=[2] r=-1 lpr=69 pi=[49,69)/1 crt=41'581 lcod 0'0 mlcod 0'0 active pruub 112.487251282s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:33 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 69 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=67/68 n=7 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=69 pruub=14.990229607s) [2] r=-1 lpr=69 pi=[49,69)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 112.487251282s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:33 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 69 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=69 pruub=14.990164757s) [2] async=[2] r=-1 lpr=69 pi=[49,69)/1 crt=41'581 lcod 0'0 mlcod 0'0 active pruub 112.487251282s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:33 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 69 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=69 pruub=14.990109444s) [2] r=-1 lpr=69 pi=[49,69)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 112.487251282s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:33 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 69 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=69) [2] r=0 lpr=69 pi=[49,69)/1 luod=0'0 crt=41'581 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:33 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 69 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=69) [2] r=0 lpr=69 pi=[49,69)/1 crt=41'581 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:33 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 69 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=69) [2] r=0 lpr=69 pi=[49,69)/1 luod=0'0 crt=41'581 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:33 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 69 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=69) [2] r=0 lpr=69 pi=[49,69)/1 crt=41'581 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:33 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Oct 11 04:28:33 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Oct 11 04:28:34 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v139: 305 pgs: 2 remapped+peering, 303 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 192 B/s, 11 objects/s recovering
Oct 11 04:28:34 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Oct 11 04:28:34 compute-0 ceph-mon[74243]: osdmap e69: 3 total, 3 up, 3 in
Oct 11 04:28:34 compute-0 ceph-mon[74243]: 3.14 scrub starts
Oct 11 04:28:34 compute-0 ceph-mon[74243]: pgmap v139: 305 pgs: 2 remapped+peering, 303 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 192 B/s, 11 objects/s recovering
Oct 11 04:28:34 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Oct 11 04:28:34 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Oct 11 04:28:34 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 70 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=69/70 n=7 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=69) [2] r=0 lpr=69 pi=[49,69)/1 crt=41'581 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:34 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 70 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=69/70 n=6 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=69) [2] r=0 lpr=69 pi=[49,69)/1 crt=41'581 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:34 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e70 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:28:34 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Oct 11 04:28:34 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Oct 11 04:28:34 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Oct 11 04:28:34 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Oct 11 04:28:35 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 5.b deep-scrub starts
Oct 11 04:28:35 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 5.b deep-scrub ok
Oct 11 04:28:35 compute-0 ceph-mon[74243]: 3.14 scrub ok
Oct 11 04:28:35 compute-0 ceph-mon[74243]: osdmap e70: 3 total, 3 up, 3 in
Oct 11 04:28:35 compute-0 ceph-mon[74243]: 5.1e scrub starts
Oct 11 04:28:35 compute-0 ceph-mon[74243]: 5.1e scrub ok
Oct 11 04:28:35 compute-0 ceph-mon[74243]: 3.19 scrub starts
Oct 11 04:28:35 compute-0 ceph-mon[74243]: 3.19 scrub ok
Oct 11 04:28:35 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Oct 11 04:28:35 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Oct 11 04:28:36 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v141: 305 pgs: 2 remapped+peering, 303 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 163 B/s, 9 objects/s recovering
Oct 11 04:28:36 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 5.d scrub starts
Oct 11 04:28:36 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 5.d scrub ok
Oct 11 04:28:36 compute-0 ceph-mon[74243]: 5.b deep-scrub starts
Oct 11 04:28:36 compute-0 ceph-mon[74243]: 5.b deep-scrub ok
Oct 11 04:28:36 compute-0 ceph-mon[74243]: 2.19 scrub starts
Oct 11 04:28:36 compute-0 ceph-mon[74243]: pgmap v141: 305 pgs: 2 remapped+peering, 303 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 163 B/s, 9 objects/s recovering
Oct 11 04:28:36 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Oct 11 04:28:36 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 2.18 scrub ok
Oct 11 04:28:37 compute-0 ceph-mon[74243]: 2.19 scrub ok
Oct 11 04:28:37 compute-0 ceph-mon[74243]: 5.d scrub starts
Oct 11 04:28:37 compute-0 ceph-mon[74243]: 5.d scrub ok
Oct 11 04:28:38 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v142: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 164 B/s, 10 objects/s recovering
Oct 11 04:28:38 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} v 0) v1
Oct 11 04:28:38 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]: dispatch
Oct 11 04:28:38 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0) v1
Oct 11 04:28:38 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Oct 11 04:28:38 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Oct 11 04:28:38 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Oct 11 04:28:38 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Oct 11 04:28:38 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Oct 11 04:28:38 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Oct 11 04:28:38 compute-0 ceph-mon[74243]: 2.18 scrub starts
Oct 11 04:28:38 compute-0 ceph-mon[74243]: 2.18 scrub ok
Oct 11 04:28:38 compute-0 ceph-mon[74243]: pgmap v142: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 164 B/s, 10 objects/s recovering
Oct 11 04:28:38 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]: dispatch
Oct 11 04:28:38 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Oct 11 04:28:38 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Oct 11 04:28:38 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 3.1a deep-scrub starts
Oct 11 04:28:38 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Oct 11 04:28:38 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 3.1a deep-scrub ok
Oct 11 04:28:38 compute-0 sudo[103410]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jyrjipoqqificeqfzfpbuhgijxnzzhgp ; /usr/bin/python3'
Oct 11 04:28:38 compute-0 sudo[103410]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:28:39 compute-0 python3[103412]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:28:39 compute-0 podman[103413]: 2025-10-11 04:28:39.242704225 +0000 UTC m=+0.061074444 container create c8e80a1c7eb4d822142a6c46dbc356e662812a9a99bb15111ece08526641c863 (image=quay.io/ceph/ceph:v18, name=eloquent_shamir, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:28:39 compute-0 systemd[1]: Started libpod-conmon-c8e80a1c7eb4d822142a6c46dbc356e662812a9a99bb15111ece08526641c863.scope.
Oct 11 04:28:39 compute-0 podman[103413]: 2025-10-11 04:28:39.224805751 +0000 UTC m=+0.043175950 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:28:39 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:28:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07d9963fabf93ba5fb92133125ad4f63ebc13d106437c6088f2f4fab1ae00dc5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:28:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07d9963fabf93ba5fb92133125ad4f63ebc13d106437c6088f2f4fab1ae00dc5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:28:39 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 5.e scrub starts
Oct 11 04:28:39 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 5.e scrub ok
Oct 11 04:28:39 compute-0 podman[103413]: 2025-10-11 04:28:39.354893473 +0000 UTC m=+0.173263692 container init c8e80a1c7eb4d822142a6c46dbc356e662812a9a99bb15111ece08526641c863 (image=quay.io/ceph/ceph:v18, name=eloquent_shamir, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 11 04:28:39 compute-0 podman[103413]: 2025-10-11 04:28:39.365448514 +0000 UTC m=+0.183818723 container start c8e80a1c7eb4d822142a6c46dbc356e662812a9a99bb15111ece08526641c863 (image=quay.io/ceph/ceph:v18, name=eloquent_shamir, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:28:39 compute-0 podman[103413]: 2025-10-11 04:28:39.369155966 +0000 UTC m=+0.187526175 container attach c8e80a1c7eb4d822142a6c46dbc356e662812a9a99bb15111ece08526641c863 (image=quay.io/ceph/ceph:v18, name=eloquent_shamir, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 11 04:28:39 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Oct 11 04:28:39 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Oct 11 04:28:39 compute-0 ceph-mon[74243]: osdmap e71: 3 total, 3 up, 3 in
Oct 11 04:28:39 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 71 pg[6.9( v 38'39 (0'0,38'39] local-lis/les=53/54 n=1 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=71 pruub=14.599646568s) [0] r=-1 lpr=71 pi=[53,71)/1 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 118.169853210s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:39 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 71 pg[6.9( v 38'39 (0'0,38'39] local-lis/les=53/54 n=1 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=71 pruub=14.599572182s) [0] r=-1 lpr=71 pi=[53,71)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 118.169853210s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:39 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 71 pg[6.9( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=71) [0] r=0 lpr=71 pi=[53,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:39 compute-0 eloquent_shamir[103429]: could not fetch user info: no user info saved
Oct 11 04:28:39 compute-0 systemd[1]: libpod-c8e80a1c7eb4d822142a6c46dbc356e662812a9a99bb15111ece08526641c863.scope: Deactivated successfully.
Oct 11 04:28:39 compute-0 podman[103413]: 2025-10-11 04:28:39.627501684 +0000 UTC m=+0.445871893 container died c8e80a1c7eb4d822142a6c46dbc356e662812a9a99bb15111ece08526641c863 (image=quay.io/ceph/ceph:v18, name=eloquent_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 11 04:28:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-07d9963fabf93ba5fb92133125ad4f63ebc13d106437c6088f2f4fab1ae00dc5-merged.mount: Deactivated successfully.
Oct 11 04:28:39 compute-0 podman[103413]: 2025-10-11 04:28:39.680202629 +0000 UTC m=+0.498572848 container remove c8e80a1c7eb4d822142a6c46dbc356e662812a9a99bb15111ece08526641c863 (image=quay.io/ceph/ceph:v18, name=eloquent_shamir, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:28:39 compute-0 systemd[1]: libpod-conmon-c8e80a1c7eb4d822142a6c46dbc356e662812a9a99bb15111ece08526641c863.scope: Deactivated successfully.
Oct 11 04:28:39 compute-0 sudo[103410]: pam_unix(sudo:session): session closed for user root
Oct 11 04:28:39 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e71 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:28:39 compute-0 sudo[103549]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dorgfbzorjrqdzadandxproaqaloyjyw ; /usr/bin/python3'
Oct 11 04:28:39 compute-0 sudo[103549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:28:39 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 10.8 scrub starts
Oct 11 04:28:39 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 10.8 scrub ok
Oct 11 04:28:40 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v144: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 32 B/s, 2 objects/s recovering
Oct 11 04:28:40 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} v 0) v1
Oct 11 04:28:40 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]: dispatch
Oct 11 04:28:40 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0) v1
Oct 11 04:28:40 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Oct 11 04:28:40 compute-0 python3[103551]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:28:40 compute-0 podman[103552]: 2025-10-11 04:28:40.12681431 +0000 UTC m=+0.044715389 container create 2f8d8dbe59b1c3ea2e74145ef520264e0ee8e1e2d601cbc4cbd705ee1a8a6dda (image=quay.io/ceph/ceph:v18, name=gracious_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:28:40 compute-0 systemd[1]: Started libpod-conmon-2f8d8dbe59b1c3ea2e74145ef520264e0ee8e1e2d601cbc4cbd705ee1a8a6dda.scope.
Oct 11 04:28:40 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:28:40 compute-0 podman[103552]: 2025-10-11 04:28:40.106587959 +0000 UTC m=+0.024489038 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:28:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6534d4d3d8d8a208ff75b8cee9a391698712b59cd2f430391729c30cfd3544a3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:28:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6534d4d3d8d8a208ff75b8cee9a391698712b59cd2f430391729c30cfd3544a3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:28:40 compute-0 podman[103552]: 2025-10-11 04:28:40.224841317 +0000 UTC m=+0.142742416 container init 2f8d8dbe59b1c3ea2e74145ef520264e0ee8e1e2d601cbc4cbd705ee1a8a6dda (image=quay.io/ceph/ceph:v18, name=gracious_mendeleev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:28:40 compute-0 podman[103552]: 2025-10-11 04:28:40.233908842 +0000 UTC m=+0.151809881 container start 2f8d8dbe59b1c3ea2e74145ef520264e0ee8e1e2d601cbc4cbd705ee1a8a6dda (image=quay.io/ceph/ceph:v18, name=gracious_mendeleev, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:28:40 compute-0 podman[103552]: 2025-10-11 04:28:40.236860795 +0000 UTC m=+0.154761874 container attach 2f8d8dbe59b1c3ea2e74145ef520264e0ee8e1e2d601cbc4cbd705ee1a8a6dda (image=quay.io/ceph/ceph:v18, name=gracious_mendeleev, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:28:40 compute-0 gracious_mendeleev[103567]: {
Oct 11 04:28:40 compute-0 gracious_mendeleev[103567]:     "user_id": "openstack",
Oct 11 04:28:40 compute-0 gracious_mendeleev[103567]:     "display_name": "openstack",
Oct 11 04:28:40 compute-0 gracious_mendeleev[103567]:     "email": "",
Oct 11 04:28:40 compute-0 gracious_mendeleev[103567]:     "suspended": 0,
Oct 11 04:28:40 compute-0 gracious_mendeleev[103567]:     "max_buckets": 1000,
Oct 11 04:28:40 compute-0 gracious_mendeleev[103567]:     "subusers": [],
Oct 11 04:28:40 compute-0 gracious_mendeleev[103567]:     "keys": [
Oct 11 04:28:40 compute-0 gracious_mendeleev[103567]:         {
Oct 11 04:28:40 compute-0 gracious_mendeleev[103567]:             "user": "openstack",
Oct 11 04:28:40 compute-0 gracious_mendeleev[103567]:             "access_key": "3C79L5Q5O92E7EJH5V55",
Oct 11 04:28:40 compute-0 gracious_mendeleev[103567]:             "secret_key": "QTSdEU2Bq5Ad6zxTukDwgKBNNZ7tUHkvchzwfr4b"
Oct 11 04:28:40 compute-0 gracious_mendeleev[103567]:         }
Oct 11 04:28:40 compute-0 gracious_mendeleev[103567]:     ],
Oct 11 04:28:40 compute-0 gracious_mendeleev[103567]:     "swift_keys": [],
Oct 11 04:28:40 compute-0 gracious_mendeleev[103567]:     "caps": [],
Oct 11 04:28:40 compute-0 gracious_mendeleev[103567]:     "op_mask": "read, write, delete",
Oct 11 04:28:40 compute-0 gracious_mendeleev[103567]:     "default_placement": "",
Oct 11 04:28:40 compute-0 gracious_mendeleev[103567]:     "default_storage_class": "",
Oct 11 04:28:40 compute-0 gracious_mendeleev[103567]:     "placement_tags": [],
Oct 11 04:28:40 compute-0 gracious_mendeleev[103567]:     "bucket_quota": {
Oct 11 04:28:40 compute-0 gracious_mendeleev[103567]:         "enabled": false,
Oct 11 04:28:40 compute-0 gracious_mendeleev[103567]:         "check_on_raw": false,
Oct 11 04:28:40 compute-0 gracious_mendeleev[103567]:         "max_size": -1,
Oct 11 04:28:40 compute-0 gracious_mendeleev[103567]:         "max_size_kb": 0,
Oct 11 04:28:40 compute-0 gracious_mendeleev[103567]:         "max_objects": -1
Oct 11 04:28:40 compute-0 gracious_mendeleev[103567]:     },
Oct 11 04:28:40 compute-0 gracious_mendeleev[103567]:     "user_quota": {
Oct 11 04:28:40 compute-0 gracious_mendeleev[103567]:         "enabled": false,
Oct 11 04:28:40 compute-0 gracious_mendeleev[103567]:         "check_on_raw": false,
Oct 11 04:28:40 compute-0 gracious_mendeleev[103567]:         "max_size": -1,
Oct 11 04:28:40 compute-0 gracious_mendeleev[103567]:         "max_size_kb": 0,
Oct 11 04:28:40 compute-0 gracious_mendeleev[103567]:         "max_objects": -1
Oct 11 04:28:40 compute-0 gracious_mendeleev[103567]:     },
Oct 11 04:28:40 compute-0 gracious_mendeleev[103567]:     "temp_url_keys": [],
Oct 11 04:28:40 compute-0 gracious_mendeleev[103567]:     "type": "rgw",
Oct 11 04:28:40 compute-0 gracious_mendeleev[103567]:     "mfa_ids": []
Oct 11 04:28:40 compute-0 gracious_mendeleev[103567]: }
Oct 11 04:28:40 compute-0 gracious_mendeleev[103567]: 
Oct 11 04:28:40 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Oct 11 04:28:40 compute-0 ceph-mon[74243]: 10.9 scrub starts
Oct 11 04:28:40 compute-0 ceph-mon[74243]: 3.1a deep-scrub starts
Oct 11 04:28:40 compute-0 ceph-mon[74243]: 10.9 scrub ok
Oct 11 04:28:40 compute-0 ceph-mon[74243]: 3.1a deep-scrub ok
Oct 11 04:28:40 compute-0 ceph-mon[74243]: 5.e scrub starts
Oct 11 04:28:40 compute-0 ceph-mon[74243]: 5.e scrub ok
Oct 11 04:28:40 compute-0 ceph-mon[74243]: pgmap v144: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 32 B/s, 2 objects/s recovering
Oct 11 04:28:40 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]: dispatch
Oct 11 04:28:40 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Oct 11 04:28:40 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Oct 11 04:28:40 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Oct 11 04:28:40 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Oct 11 04:28:40 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Oct 11 04:28:40 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 72 pg[6.9( v 38'39 (0'0,38'39] local-lis/les=71/72 n=1 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=71) [0] r=0 lpr=71 pi=[53,71)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:40 compute-0 systemd[1]: libpod-2f8d8dbe59b1c3ea2e74145ef520264e0ee8e1e2d601cbc4cbd705ee1a8a6dda.scope: Deactivated successfully.
Oct 11 04:28:40 compute-0 podman[103552]: 2025-10-11 04:28:40.512466521 +0000 UTC m=+0.430367550 container died 2f8d8dbe59b1c3ea2e74145ef520264e0ee8e1e2d601cbc4cbd705ee1a8a6dda (image=quay.io/ceph/ceph:v18, name=gracious_mendeleev, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 11 04:28:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-6534d4d3d8d8a208ff75b8cee9a391698712b59cd2f430391729c30cfd3544a3-merged.mount: Deactivated successfully.
Oct 11 04:28:40 compute-0 podman[103552]: 2025-10-11 04:28:40.549095208 +0000 UTC m=+0.466996257 container remove 2f8d8dbe59b1c3ea2e74145ef520264e0ee8e1e2d601cbc4cbd705ee1a8a6dda (image=quay.io/ceph/ceph:v18, name=gracious_mendeleev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 11 04:28:40 compute-0 systemd[1]: libpod-conmon-2f8d8dbe59b1c3ea2e74145ef520264e0ee8e1e2d601cbc4cbd705ee1a8a6dda.scope: Deactivated successfully.
Oct 11 04:28:40 compute-0 sudo[103549]: pam_unix(sudo:session): session closed for user root
Oct 11 04:28:40 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 72 pg[6.a( v 38'39 (0'0,38'39] local-lis/les=55/56 n=1 ec=47/21 lis/c=55/55 les/c/f=56/56/0 sis=72 pruub=15.356689453s) [0] r=-1 lpr=72 pi=[55,72)/1 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 120.185195923s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:40 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 72 pg[6.a( v 38'39 (0'0,38'39] local-lis/les=55/56 n=1 ec=47/21 lis/c=55/55 les/c/f=56/56/0 sis=72 pruub=15.356639862s) [0] r=-1 lpr=72 pi=[55,72)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 120.185195923s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:40 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 72 pg[6.a( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=55/55 les/c/f=56/56/0 sis=72) [0] r=0 lpr=72 pi=[55,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:40 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Oct 11 04:28:40 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Oct 11 04:28:41 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Oct 11 04:28:41 compute-0 ceph-mon[74243]: 10.8 scrub starts
Oct 11 04:28:41 compute-0 ceph-mon[74243]: 10.8 scrub ok
Oct 11 04:28:41 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Oct 11 04:28:41 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Oct 11 04:28:41 compute-0 ceph-mon[74243]: osdmap e72: 3 total, 3 up, 3 in
Oct 11 04:28:41 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Oct 11 04:28:41 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Oct 11 04:28:41 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 73 pg[6.a( v 38'39 (0'0,38'39] local-lis/les=72/73 n=1 ec=47/21 lis/c=55/55 les/c/f=56/56/0 sis=72) [0] r=0 lpr=72 pi=[55,72)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:42 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Oct 11 04:28:42 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v147: 305 pgs: 1 activating, 304 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 170 B/s wr, 2 op/s; 36 B/s, 2 objects/s recovering
Oct 11 04:28:42 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Oct 11 04:28:42 compute-0 ceph-mon[74243]: 5.7 scrub starts
Oct 11 04:28:42 compute-0 ceph-mon[74243]: 5.7 scrub ok
Oct 11 04:28:42 compute-0 ceph-mon[74243]: osdmap e73: 3 total, 3 up, 3 in
Oct 11 04:28:42 compute-0 ceph-mon[74243]: pgmap v147: 305 pgs: 1 activating, 304 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 170 B/s wr, 2 op/s; 36 B/s, 2 objects/s recovering
Oct 11 04:28:43 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 7.7 deep-scrub starts
Oct 11 04:28:43 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 7.7 deep-scrub ok
Oct 11 04:28:43 compute-0 ceph-mon[74243]: 3.1c scrub starts
Oct 11 04:28:43 compute-0 ceph-mon[74243]: 3.1c scrub ok
Oct 11 04:28:44 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v148: 305 pgs: 1 activating, 304 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 341 B/s wr, 2 op/s
Oct 11 04:28:44 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Oct 11 04:28:44 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Oct 11 04:28:44 compute-0 ceph-mon[74243]: 7.7 deep-scrub starts
Oct 11 04:28:44 compute-0 ceph-mon[74243]: 7.7 deep-scrub ok
Oct 11 04:28:44 compute-0 ceph-mon[74243]: pgmap v148: 305 pgs: 1 activating, 304 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 341 B/s wr, 2 op/s
Oct 11 04:28:44 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:28:44 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Oct 11 04:28:44 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Oct 11 04:28:45 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 7.b scrub starts
Oct 11 04:28:45 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 7.b scrub ok
Oct 11 04:28:45 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Oct 11 04:28:45 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Oct 11 04:28:45 compute-0 ceph-mon[74243]: 5.10 scrub starts
Oct 11 04:28:45 compute-0 ceph-mon[74243]: 5.10 scrub ok
Oct 11 04:28:45 compute-0 ceph-mon[74243]: 2.1d scrub starts
Oct 11 04:28:45 compute-0 ceph-mon[74243]: 2.1d scrub ok
Oct 11 04:28:46 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 7.d scrub starts
Oct 11 04:28:46 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 7.d scrub ok
Oct 11 04:28:46 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v149: 305 pgs: 1 activating, 304 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 268 B/s wr, 2 op/s
Oct 11 04:28:46 compute-0 ceph-mon[74243]: 7.b scrub starts
Oct 11 04:28:46 compute-0 ceph-mon[74243]: 7.b scrub ok
Oct 11 04:28:46 compute-0 ceph-mon[74243]: 5.17 scrub starts
Oct 11 04:28:46 compute-0 ceph-mon[74243]: 5.17 scrub ok
Oct 11 04:28:46 compute-0 ceph-mon[74243]: pgmap v149: 305 pgs: 1 activating, 304 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 268 B/s wr, 2 op/s
Oct 11 04:28:47 compute-0 ceph-mon[74243]: 7.d scrub starts
Oct 11 04:28:47 compute-0 ceph-mon[74243]: 7.d scrub ok
Oct 11 04:28:48 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v150: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 255 B/s wr, 2 op/s
Oct 11 04:28:48 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} v 0) v1
Oct 11 04:28:48 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]: dispatch
Oct 11 04:28:48 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0) v1
Oct 11 04:28:48 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Oct 11 04:28:48 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Oct 11 04:28:48 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Oct 11 04:28:48 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Oct 11 04:28:48 compute-0 ceph-mon[74243]: pgmap v150: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 255 B/s wr, 2 op/s
Oct 11 04:28:48 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]: dispatch
Oct 11 04:28:48 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Oct 11 04:28:48 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Oct 11 04:28:48 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Oct 11 04:28:48 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Oct 11 04:28:48 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Oct 11 04:28:48 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 74 pg[6.b( v 38'39 (0'0,38'39] local-lis/les=57/58 n=1 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=74 pruub=9.223915100s) [1] r=-1 lpr=74 pi=[57,74)/1 crt=38'39 mlcod 38'39 active pruub 127.256706238s@ mbc={255={}}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:48 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 74 pg[6.b( v 38'39 (0'0,38'39] local-lis/les=57/58 n=1 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=74 pruub=9.223128319s) [1] r=-1 lpr=74 pi=[57,74)/1 crt=38'39 mlcod 0'0 unknown NOTIFY pruub 127.256706238s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:48 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 74 pg[6.b( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=74) [1] r=0 lpr=74 pi=[57,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:49 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Oct 11 04:28:49 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Oct 11 04:28:49 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Oct 11 04:28:49 compute-0 ceph-mon[74243]: 5.1b scrub starts
Oct 11 04:28:49 compute-0 ceph-mon[74243]: 5.1b scrub ok
Oct 11 04:28:49 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Oct 11 04:28:49 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Oct 11 04:28:49 compute-0 ceph-mon[74243]: osdmap e74: 3 total, 3 up, 3 in
Oct 11 04:28:49 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Oct 11 04:28:49 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Oct 11 04:28:49 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 75 pg[6.b( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=74/75 n=1 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=74) [1] r=0 lpr=74 pi=[57,74)/1 crt=38'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:49 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e75 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:28:50 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Oct 11 04:28:50 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v153: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 127 B/s wr, 0 op/s
Oct 11 04:28:50 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} v 0) v1
Oct 11 04:28:50 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]: dispatch
Oct 11 04:28:50 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0) v1
Oct 11 04:28:50 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Oct 11 04:28:50 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Oct 11 04:28:50 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Oct 11 04:28:50 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Oct 11 04:28:50 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Oct 11 04:28:50 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Oct 11 04:28:50 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Oct 11 04:28:50 compute-0 ceph-mon[74243]: 7.10 scrub starts
Oct 11 04:28:50 compute-0 ceph-mon[74243]: 7.10 scrub ok
Oct 11 04:28:50 compute-0 ceph-mon[74243]: osdmap e75: 3 total, 3 up, 3 in
Oct 11 04:28:50 compute-0 ceph-mon[74243]: pgmap v153: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 127 B/s wr, 0 op/s
Oct 11 04:28:50 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]: dispatch
Oct 11 04:28:50 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Oct 11 04:28:51 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 76 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=76 pruub=10.739331245s) [2] r=-1 lpr=76 pi=[49,76)/1 crt=41'581 lcod 0'0 mlcod 0'0 active pruub 125.969543457s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:51 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 76 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=76 pruub=10.739258766s) [2] r=-1 lpr=76 pi=[49,76)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 125.969543457s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:51 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 76 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=76 pruub=10.747873306s) [2] r=-1 lpr=76 pi=[49,76)/1 crt=41'581 lcod 0'0 mlcod 0'0 active pruub 125.979316711s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:51 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 76 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=76 pruub=10.747629166s) [2] r=-1 lpr=76 pi=[49,76)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 125.979316711s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:51 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 76 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=76) [2] r=0 lpr=76 pi=[49,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:51 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 76 pg[9.c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=76) [2] r=0 lpr=76 pi=[49,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:51 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Oct 11 04:28:51 compute-0 ceph-mon[74243]: 7.12 scrub starts
Oct 11 04:28:51 compute-0 ceph-mon[74243]: 7.12 scrub ok
Oct 11 04:28:51 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Oct 11 04:28:51 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Oct 11 04:28:51 compute-0 ceph-mon[74243]: osdmap e76: 3 total, 3 up, 3 in
Oct 11 04:28:51 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Oct 11 04:28:51 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Oct 11 04:28:51 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 77 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=77) [2]/[1] r=0 lpr=77 pi=[49,77)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:51 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 77 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=77) [2]/[1] r=0 lpr=77 pi=[49,77)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:51 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 77 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=77) [2]/[1] r=0 lpr=77 pi=[49,77)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:51 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 77 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=77) [2]/[1] r=0 lpr=77 pi=[49,77)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:51 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 77 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=77) [2]/[1] r=-1 lpr=77 pi=[49,77)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:51 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 77 pg[9.c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=77) [2]/[1] r=-1 lpr=77 pi=[49,77)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:51 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 77 pg[9.c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=77) [2]/[1] r=-1 lpr=77 pi=[49,77)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:51 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 77 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=77) [2]/[1] r=-1 lpr=77 pi=[49,77)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:51 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 10.4 scrub starts
Oct 11 04:28:51 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 10.4 scrub ok
Oct 11 04:28:52 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v156: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Oct 11 04:28:52 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} v 0) v1
Oct 11 04:28:52 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]: dispatch
Oct 11 04:28:52 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0) v1
Oct 11 04:28:52 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Oct 11 04:28:52 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Oct 11 04:28:52 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Oct 11 04:28:52 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Oct 11 04:28:52 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Oct 11 04:28:52 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Oct 11 04:28:52 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Oct 11 04:28:52 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Oct 11 04:28:52 compute-0 ceph-mon[74243]: osdmap e77: 3 total, 3 up, 3 in
Oct 11 04:28:52 compute-0 ceph-mon[74243]: pgmap v156: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Oct 11 04:28:52 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]: dispatch
Oct 11 04:28:52 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Oct 11 04:28:52 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 78 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=77/78 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=77) [2]/[1] async=[2] r=0 lpr=77 pi=[49,77)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:52 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 78 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=77/78 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=77) [2]/[1] async=[2] r=0 lpr=77 pi=[49,77)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:52 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 78 pg[6.d( v 38'39 (0'0,38'39] local-lis/les=61/62 n=1 ec=47/21 lis/c=61/61 les/c/f=62/62/0 sis=78 pruub=12.302570343s) [1] r=-1 lpr=78 pi=[61,78)/1 crt=38'39 mlcod 38'39 active pruub 134.439956665s@ mbc={255={}}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:52 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 78 pg[6.d( v 38'39 (0'0,38'39] local-lis/les=61/62 n=1 ec=47/21 lis/c=61/61 les/c/f=62/62/0 sis=78 pruub=12.302479744s) [1] r=-1 lpr=78 pi=[61,78)/1 crt=38'39 mlcod 0'0 unknown NOTIFY pruub 134.439956665s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:52 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 78 pg[6.d( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=61/61 les/c/f=62/62/0 sis=78) [1] r=0 lpr=78 pi=[61,78)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:53 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Oct 11 04:28:53 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Oct 11 04:28:53 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Oct 11 04:28:53 compute-0 ceph-mon[74243]: 10.4 scrub starts
Oct 11 04:28:53 compute-0 ceph-mon[74243]: 10.4 scrub ok
Oct 11 04:28:53 compute-0 ceph-mon[74243]: 5.1c scrub starts
Oct 11 04:28:53 compute-0 ceph-mon[74243]: 5.1c scrub ok
Oct 11 04:28:53 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Oct 11 04:28:53 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Oct 11 04:28:53 compute-0 ceph-mon[74243]: osdmap e78: 3 total, 3 up, 3 in
Oct 11 04:28:53 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 79 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=77/78 n=7 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=79 pruub=14.988212585s) [2] async=[2] r=-1 lpr=79 pi=[49,79)/1 crt=41'581 lcod 0'0 mlcod 0'0 active pruub 132.756530762s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:53 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 79 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=77/78 n=7 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=79 pruub=14.987908363s) [2] r=-1 lpr=79 pi=[49,79)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.756530762s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:53 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 79 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=77/78 n=6 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=79 pruub=14.986744881s) [2] async=[2] r=-1 lpr=79 pi=[49,79)/1 crt=41'581 lcod 0'0 mlcod 0'0 active pruub 132.755737305s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:53 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 79 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=77/78 n=6 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=79 pruub=14.986459732s) [2] r=-1 lpr=79 pi=[49,79)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.755737305s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:53 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 79 pg[6.d( v 38'39 lc 34'13 (0'0,38'39] local-lis/les=78/79 n=1 ec=47/21 lis/c=61/61 les/c/f=62/62/0 sis=78) [1] r=0 lpr=78 pi=[61,78)/1 crt=38'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:53 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 79 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=79) [2] r=0 lpr=79 pi=[49,79)/1 luod=0'0 crt=41'581 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:53 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 79 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=79) [2] r=0 lpr=79 pi=[49,79)/1 luod=0'0 crt=41'581 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:53 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 79 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=79) [2] r=0 lpr=79 pi=[49,79)/1 crt=41'581 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:53 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 79 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=79) [2] r=0 lpr=79 pi=[49,79)/1 crt=41'581 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:54 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v159: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Oct 11 04:28:54 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} v 0) v1
Oct 11 04:28:54 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]: dispatch
Oct 11 04:28:54 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0) v1
Oct 11 04:28:54 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Oct 11 04:28:54 compute-0 sshd-session[103665]: Accepted publickey for zuul from 192.168.122.30 port 54336 ssh2: ECDSA SHA256:fsXfQ2jtOYwhi5zevC2AykH2PVPp1BPcbFOVNPoZIaQ
Oct 11 04:28:54 compute-0 systemd-logind[801]: New session 35 of user zuul.
Oct 11 04:28:54 compute-0 systemd[1]: Started Session 35 of User zuul.
Oct 11 04:28:54 compute-0 sshd-session[103665]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 11 04:28:54 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Oct 11 04:28:54 compute-0 ceph-mon[74243]: osdmap e79: 3 total, 3 up, 3 in
Oct 11 04:28:54 compute-0 ceph-mon[74243]: pgmap v159: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Oct 11 04:28:54 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]: dispatch
Oct 11 04:28:54 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Oct 11 04:28:54 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Oct 11 04:28:54 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Oct 11 04:28:54 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Oct 11 04:28:54 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Oct 11 04:28:54 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 80 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=79/80 n=7 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=79) [2] r=0 lpr=79 pi=[49,79)/1 crt=41'581 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:54 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 80 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=79/80 n=6 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=79) [2] r=0 lpr=79 pi=[49,79)/1 crt=41'581 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:54 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e80 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:28:55 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Oct 11 04:28:55 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Oct 11 04:28:55 compute-0 ceph-mon[74243]: osdmap e80: 3 total, 3 up, 3 in
Oct 11 04:28:55 compute-0 python3.9[103818]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 04:28:56 compute-0 ceph-mgr[74542]: [balancer INFO root] Optimize plan auto_2025-10-11_04:28:56
Oct 11 04:28:56 compute-0 ceph-mgr[74542]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:28:56 compute-0 ceph-mgr[74542]: [balancer INFO root] do_upmap
Oct 11 04:28:56 compute-0 ceph-mgr[74542]: [balancer INFO root] pools ['volumes', 'vms', 'default.rgw.log', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.control', 'backups', 'cephfs.cephfs.data', '.rgw.root', 'images']
Oct 11 04:28:56 compute-0 ceph-mgr[74542]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:28:56 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v161: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:28:56 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} v 0) v1
Oct 11 04:28:56 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Oct 11 04:28:56 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0) v1
Oct 11 04:28:56 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Oct 11 04:28:56 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Oct 11 04:28:56 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Oct 11 04:28:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:28:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:28:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:28:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:28:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:28:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:28:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:28:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:28:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:28:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:28:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:28:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:28:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:28:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:28:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:28:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:28:56 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Oct 11 04:28:56 compute-0 ceph-mon[74243]: pgmap v161: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:28:56 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Oct 11 04:28:56 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Oct 11 04:28:56 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Oct 11 04:28:56 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Oct 11 04:28:56 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Oct 11 04:28:56 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Oct 11 04:28:56 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 81 pg[6.f( v 38'39 (0'0,38'39] local-lis/les=57/58 n=1 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=81 pruub=9.386934280s) [2] r=-1 lpr=81 pi=[57,81)/1 crt=38'39 mlcod 38'39 active pruub 135.256958008s@ mbc={255={}}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:28:56 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 81 pg[6.f( v 38'39 (0'0,38'39] local-lis/les=57/58 n=1 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=81 pruub=9.386527061s) [2] r=-1 lpr=81 pi=[57,81)/1 crt=38'39 mlcod 0'0 unknown NOTIFY pruub 135.256958008s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:28:56 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 81 pg[6.f( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=81) [2] r=0 lpr=81 pi=[57,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:28:57 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 2.1c deep-scrub starts
Oct 11 04:28:57 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 2.1c deep-scrub ok
Oct 11 04:28:57 compute-0 sudo[104034]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlspeziidhedmaiarpebpxlrscqdcmre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156936.9385893-32-100166471804948/AnsiballZ_command.py'
Oct 11 04:28:57 compute-0 sudo[104034]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:28:57 compute-0 python3.9[104036]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                             pushd /var/tmp
                                             curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                             pushd repo-setup-main
                                             python3 -m venv ./venv
                                             PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                             ./venv/bin/repo-setup current-podified -b antelope
                                             popd
                                             rm -rf repo-setup-main
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:28:57 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Oct 11 04:28:57 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Oct 11 04:28:57 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Oct 11 04:28:57 compute-0 ceph-mon[74243]: 7.14 scrub starts
Oct 11 04:28:57 compute-0 ceph-mon[74243]: 7.14 scrub ok
Oct 11 04:28:57 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Oct 11 04:28:57 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Oct 11 04:28:57 compute-0 ceph-mon[74243]: osdmap e81: 3 total, 3 up, 3 in
Oct 11 04:28:57 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 82 pg[6.f( v 38'39 lc 34'1 (0'0,38'39] local-lis/les=81/82 n=1 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=81) [2] r=0 lpr=81 pi=[57,81)/1 crt=38'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:28:58 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v164: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 177 B/s, 3 objects/s recovering
Oct 11 04:28:58 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0) v1
Oct 11 04:28:58 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Oct 11 04:28:58 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 7.16 scrub starts
Oct 11 04:28:58 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 7.16 scrub ok
Oct 11 04:28:58 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Oct 11 04:28:58 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Oct 11 04:28:58 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Oct 11 04:28:58 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Oct 11 04:28:58 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Oct 11 04:28:58 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Oct 11 04:28:58 compute-0 ceph-mon[74243]: 2.1c deep-scrub starts
Oct 11 04:28:58 compute-0 ceph-mon[74243]: 2.1c deep-scrub ok
Oct 11 04:28:58 compute-0 ceph-mon[74243]: osdmap e82: 3 total, 3 up, 3 in
Oct 11 04:28:58 compute-0 ceph-mon[74243]: pgmap v164: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 177 B/s, 3 objects/s recovering
Oct 11 04:28:58 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Oct 11 04:28:59 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Oct 11 04:28:59 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Oct 11 04:28:59 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 10.3 scrub starts
Oct 11 04:28:59 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 10.3 scrub ok
Oct 11 04:28:59 compute-0 ceph-mon[74243]: 7.16 scrub starts
Oct 11 04:28:59 compute-0 ceph-mon[74243]: 7.16 scrub ok
Oct 11 04:28:59 compute-0 ceph-mon[74243]: 5.1f scrub starts
Oct 11 04:28:59 compute-0 ceph-mon[74243]: 5.1f scrub ok
Oct 11 04:28:59 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Oct 11 04:28:59 compute-0 ceph-mon[74243]: osdmap e83: 3 total, 3 up, 3 in
Oct 11 04:28:59 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:29:00 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v166: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 145 B/s, 3 objects/s recovering
Oct 11 04:29:00 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0) v1
Oct 11 04:29:00 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Oct 11 04:29:00 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Oct 11 04:29:00 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Oct 11 04:29:00 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Oct 11 04:29:00 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Oct 11 04:29:00 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Oct 11 04:29:00 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Oct 11 04:29:00 compute-0 ceph-mon[74243]: 7.17 scrub starts
Oct 11 04:29:00 compute-0 ceph-mon[74243]: 7.17 scrub ok
Oct 11 04:29:00 compute-0 ceph-mon[74243]: 10.3 scrub starts
Oct 11 04:29:00 compute-0 ceph-mon[74243]: 10.3 scrub ok
Oct 11 04:29:00 compute-0 ceph-mon[74243]: pgmap v166: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 145 B/s, 3 objects/s recovering
Oct 11 04:29:00 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Oct 11 04:29:01 compute-0 ceph-mon[74243]: 7.19 scrub starts
Oct 11 04:29:01 compute-0 ceph-mon[74243]: 7.19 scrub ok
Oct 11 04:29:01 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Oct 11 04:29:01 compute-0 ceph-mon[74243]: osdmap e84: 3 total, 3 up, 3 in
Oct 11 04:29:02 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v168: 305 pgs: 305 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 133 B/s, 2 objects/s recovering
Oct 11 04:29:02 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0) v1
Oct 11 04:29:02 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Oct 11 04:29:02 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 10.5 scrub starts
Oct 11 04:29:02 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 10.5 scrub ok
Oct 11 04:29:02 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Oct 11 04:29:02 compute-0 ceph-mon[74243]: pgmap v168: 305 pgs: 305 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 133 B/s, 2 objects/s recovering
Oct 11 04:29:02 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Oct 11 04:29:02 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Oct 11 04:29:02 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Oct 11 04:29:02 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Oct 11 04:29:03 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 10.a scrub starts
Oct 11 04:29:03 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 10.a scrub ok
Oct 11 04:29:03 compute-0 ceph-mon[74243]: 10.5 scrub starts
Oct 11 04:29:03 compute-0 ceph-mon[74243]: 10.5 scrub ok
Oct 11 04:29:03 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Oct 11 04:29:03 compute-0 ceph-mon[74243]: osdmap e85: 3 total, 3 up, 3 in
Oct 11 04:29:04 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v170: 305 pgs: 305 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:29:04 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0) v1
Oct 11 04:29:04 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Oct 11 04:29:04 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 7.1d scrub starts
Oct 11 04:29:04 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 7.1d scrub ok
Oct 11 04:29:04 compute-0 sudo[104034]: pam_unix(sudo:session): session closed for user root
Oct 11 04:29:04 compute-0 sshd-session[103668]: Connection closed by 192.168.122.30 port 54336
Oct 11 04:29:04 compute-0 sshd-session[103665]: pam_unix(sshd:session): session closed for user zuul
Oct 11 04:29:04 compute-0 systemd[1]: session-35.scope: Deactivated successfully.
Oct 11 04:29:04 compute-0 systemd[1]: session-35.scope: Consumed 8.447s CPU time.
Oct 11 04:29:04 compute-0 systemd-logind[801]: Session 35 logged out. Waiting for processes to exit.
Oct 11 04:29:04 compute-0 systemd-logind[801]: Removed session 35.
Oct 11 04:29:04 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:29:04 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Oct 11 04:29:04 compute-0 ceph-mon[74243]: 10.a scrub starts
Oct 11 04:29:04 compute-0 ceph-mon[74243]: 10.a scrub ok
Oct 11 04:29:04 compute-0 ceph-mon[74243]: pgmap v170: 305 pgs: 305 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:29:04 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Oct 11 04:29:04 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Oct 11 04:29:04 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Oct 11 04:29:04 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Oct 11 04:29:05 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 10.15 scrub starts
Oct 11 04:29:05 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 10.15 scrub ok
Oct 11 04:29:05 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Oct 11 04:29:05 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Oct 11 04:29:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:29:05 compute-0 ceph-mon[74243]: 7.1d scrub starts
Oct 11 04:29:05 compute-0 ceph-mon[74243]: 7.1d scrub ok
Oct 11 04:29:05 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Oct 11 04:29:05 compute-0 ceph-mon[74243]: osdmap e86: 3 total, 3 up, 3 in
Oct 11 04:29:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:29:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:29:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:29:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:29:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:29:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:29:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:29:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:29:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:29:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:29:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:29:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 11 04:29:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:29:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:29:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:29:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:29:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:29:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:29:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:29:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:29:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:29:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 04:29:06 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 5.4 deep-scrub starts
Oct 11 04:29:06 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 5.4 deep-scrub ok
Oct 11 04:29:06 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v172: 305 pgs: 305 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:29:06 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0) v1
Oct 11 04:29:06 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Oct 11 04:29:06 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 8.1 scrub starts
Oct 11 04:29:06 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 8.1 scrub ok
Oct 11 04:29:06 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 86 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=86 pruub=15.660663605s) [2] r=-1 lpr=86 pi=[57,86)/1 crt=41'581 mlcod 0'0 active pruub 151.252334595s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:29:06 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 86 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=86 pruub=15.660349846s) [2] r=-1 lpr=86 pi=[57,86)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 151.252334595s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:29:06 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 86 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=86) [2] r=0 lpr=86 pi=[57,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:29:06 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 10.c scrub starts
Oct 11 04:29:06 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 10.c scrub ok
Oct 11 04:29:06 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Oct 11 04:29:06 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Oct 11 04:29:06 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Oct 11 04:29:06 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Oct 11 04:29:06 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 87 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=87) [2]/[0] r=0 lpr=87 pi=[57,87)/1 crt=41'581 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:29:06 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 87 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=87) [2]/[0] r=0 lpr=87 pi=[57,87)/1 crt=41'581 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 04:29:06 compute-0 ceph-mon[74243]: 10.15 scrub starts
Oct 11 04:29:06 compute-0 ceph-mon[74243]: 10.15 scrub ok
Oct 11 04:29:06 compute-0 ceph-mon[74243]: 7.1e scrub starts
Oct 11 04:29:06 compute-0 ceph-mon[74243]: 7.1e scrub ok
Oct 11 04:29:06 compute-0 ceph-mon[74243]: pgmap v172: 305 pgs: 305 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:29:06 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Oct 11 04:29:06 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 87 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=87) [2]/[0] r=-1 lpr=87 pi=[57,87)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:29:06 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 87 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=87) [2]/[0] r=-1 lpr=87 pi=[57,87)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 04:29:07 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 8.3 scrub starts
Oct 11 04:29:07 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 8.3 scrub ok
Oct 11 04:29:07 compute-0 sudo[104093]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:29:07 compute-0 sudo[104093]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:29:07 compute-0 sudo[104093]: pam_unix(sudo:session): session closed for user root
Oct 11 04:29:07 compute-0 sudo[104118]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:29:07 compute-0 sudo[104118]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:29:07 compute-0 sudo[104118]: pam_unix(sudo:session): session closed for user root
Oct 11 04:29:07 compute-0 sudo[104143]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:29:07 compute-0 sudo[104143]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:29:07 compute-0 sudo[104143]: pam_unix(sudo:session): session closed for user root
Oct 11 04:29:07 compute-0 sudo[104168]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 11 04:29:07 compute-0 sudo[104168]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:29:07 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Oct 11 04:29:07 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Oct 11 04:29:07 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Oct 11 04:29:07 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 88 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=87/88 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=87) [2]/[0] async=[2] r=0 lpr=87 pi=[57,87)/1 crt=41'581 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:29:07 compute-0 ceph-mon[74243]: 5.4 deep-scrub starts
Oct 11 04:29:07 compute-0 ceph-mon[74243]: 5.4 deep-scrub ok
Oct 11 04:29:07 compute-0 ceph-mon[74243]: 8.1 scrub starts
Oct 11 04:29:07 compute-0 ceph-mon[74243]: 8.1 scrub ok
Oct 11 04:29:07 compute-0 ceph-mon[74243]: 10.c scrub starts
Oct 11 04:29:07 compute-0 ceph-mon[74243]: 10.c scrub ok
Oct 11 04:29:07 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Oct 11 04:29:07 compute-0 ceph-mon[74243]: osdmap e87: 3 total, 3 up, 3 in
Oct 11 04:29:07 compute-0 ceph-mon[74243]: osdmap e88: 3 total, 3 up, 3 in
Oct 11 04:29:08 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v175: 305 pgs: 305 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:29:08 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0) v1
Oct 11 04:29:08 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Oct 11 04:29:08 compute-0 podman[104263]: 2025-10-11 04:29:08.217757592 +0000 UTC m=+0.076799815 container exec 9e6c9a4e99dcfe74f2acde1b3251aae6bd833ab8c3a16ed08813556f7d4dbff5 (image=quay.io/ceph/ceph:v18, name=ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mon-compute-0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 11 04:29:08 compute-0 podman[104263]: 2025-10-11 04:29:08.334692067 +0000 UTC m=+0.193734270 container exec_died 9e6c9a4e99dcfe74f2acde1b3251aae6bd833ab8c3a16ed08813556f7d4dbff5 (image=quay.io/ceph/ceph:v18, name=ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 11 04:29:08 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Oct 11 04:29:08 compute-0 ceph-mon[74243]: 8.3 scrub starts
Oct 11 04:29:08 compute-0 ceph-mon[74243]: 8.3 scrub ok
Oct 11 04:29:08 compute-0 ceph-mon[74243]: pgmap v175: 305 pgs: 305 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:29:08 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Oct 11 04:29:08 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Oct 11 04:29:08 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Oct 11 04:29:08 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Oct 11 04:29:08 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 89 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=87/88 n=6 ec=49/34 lis/c=87/57 les/c/f=88/58/0 sis=89 pruub=14.967097282s) [2] async=[2] r=-1 lpr=89 pi=[57,89)/1 crt=41'581 mlcod 41'581 active pruub 153.039077759s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:29:08 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 89 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=87/88 n=6 ec=49/34 lis/c=87/57 les/c/f=88/58/0 sis=89 pruub=14.966977119s) [2] r=-1 lpr=89 pi=[57,89)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 153.039077759s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:29:08 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 89 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=56/57 n=6 ec=49/34 lis/c=56/56 les/c/f=57/57/0 sis=89 pruub=12.173355103s) [1] r=-1 lpr=89 pi=[56,89)/1 crt=41'581 mlcod 0'0 active pruub 150.246643066s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:29:08 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 89 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=56/57 n=6 ec=49/34 lis/c=56/56 les/c/f=57/57/0 sis=89 pruub=12.173285484s) [1] r=-1 lpr=89 pi=[56,89)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 150.246643066s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:29:08 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 89 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=56/56 les/c/f=57/57/0 sis=89) [1] r=0 lpr=89 pi=[56,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:29:08 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 89 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=87/57 les/c/f=88/58/0 sis=89) [2] r=0 lpr=89 pi=[57,89)/1 luod=0'0 crt=41'581 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:29:08 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 89 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=87/57 les/c/f=88/58/0 sis=89) [2] r=0 lpr=89 pi=[57,89)/1 crt=41'581 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:29:09 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 8.5 scrub starts
Oct 11 04:29:09 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 8.5 scrub ok
Oct 11 04:29:09 compute-0 sudo[104168]: pam_unix(sudo:session): session closed for user root
Oct 11 04:29:09 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:29:09 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:29:09 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:29:09 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:29:09 compute-0 sudo[104424]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:29:09 compute-0 sudo[104424]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:29:09 compute-0 sudo[104424]: pam_unix(sudo:session): session closed for user root
Oct 11 04:29:09 compute-0 sudo[104449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:29:09 compute-0 sudo[104449]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:29:09 compute-0 sudo[104449]: pam_unix(sudo:session): session closed for user root
Oct 11 04:29:09 compute-0 sudo[104474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:29:09 compute-0 sudo[104474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:29:09 compute-0 sudo[104474]: pam_unix(sudo:session): session closed for user root
Oct 11 04:29:09 compute-0 sudo[104499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 04:29:09 compute-0 sudo[104499]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:29:09 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:29:09 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Oct 11 04:29:09 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Oct 11 04:29:09 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Oct 11 04:29:09 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 90 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=56/57 n=6 ec=49/34 lis/c=56/56 les/c/f=57/57/0 sis=90) [1]/[0] r=0 lpr=90 pi=[56,90)/1 crt=41'581 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:29:09 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 90 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=56/57 n=6 ec=49/34 lis/c=56/56 les/c/f=57/57/0 sis=90) [1]/[0] r=0 lpr=90 pi=[56,90)/1 crt=41'581 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 04:29:09 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 90 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=56/56 les/c/f=57/57/0 sis=90) [1]/[0] r=-1 lpr=90 pi=[56,90)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:29:09 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 90 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=56/56 les/c/f=57/57/0 sis=90) [1]/[0] r=-1 lpr=90 pi=[56,90)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 04:29:09 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 90 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=89/90 n=6 ec=49/34 lis/c=87/57 les/c/f=88/58/0 sis=89) [2] r=0 lpr=89 pi=[57,89)/1 crt=41'581 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:29:09 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Oct 11 04:29:09 compute-0 ceph-mon[74243]: osdmap e89: 3 total, 3 up, 3 in
Oct 11 04:29:09 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:29:09 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:29:09 compute-0 ceph-mon[74243]: osdmap e90: 3 total, 3 up, 3 in
Oct 11 04:29:10 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v178: 305 pgs: 305 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:29:10 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0) v1
Oct 11 04:29:10 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Oct 11 04:29:10 compute-0 sudo[104499]: pam_unix(sudo:session): session closed for user root
Oct 11 04:29:10 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:29:10 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:29:10 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:29:10 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:29:10 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:29:10 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:29:10 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev fc5db43b-2ddb-421f-aa4c-27c774911401 does not exist
Oct 11 04:29:10 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev df2ea26d-9953-458f-8f4c-d23edb7d5b94 does not exist
Oct 11 04:29:10 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev b1bde7c6-ca43-4615-982c-90078cd2e60a does not exist
Oct 11 04:29:10 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:29:10 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:29:10 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:29:10 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:29:10 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:29:10 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:29:10 compute-0 sudo[104555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:29:10 compute-0 sudo[104555]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:29:10 compute-0 sudo[104555]: pam_unix(sudo:session): session closed for user root
Oct 11 04:29:10 compute-0 sudo[104580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:29:10 compute-0 sudo[104580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:29:10 compute-0 sudo[104580]: pam_unix(sudo:session): session closed for user root
Oct 11 04:29:10 compute-0 sudo[104605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:29:10 compute-0 sudo[104605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:29:10 compute-0 sudo[104605]: pam_unix(sudo:session): session closed for user root
Oct 11 04:29:10 compute-0 sudo[104630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 04:29:10 compute-0 sudo[104630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:29:10 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Oct 11 04:29:10 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Oct 11 04:29:10 compute-0 podman[104695]: 2025-10-11 04:29:10.763020307 +0000 UTC m=+0.060833088 container create ebb9c02702a79568f53c90f3556b7e4e6dadeffc0374ba1ed90124e7de646464 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_meninsky, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 11 04:29:10 compute-0 systemd[1]: Started libpod-conmon-ebb9c02702a79568f53c90f3556b7e4e6dadeffc0374ba1ed90124e7de646464.scope.
Oct 11 04:29:10 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Oct 11 04:29:10 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Oct 11 04:29:10 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Oct 11 04:29:10 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Oct 11 04:29:10 compute-0 podman[104695]: 2025-10-11 04:29:10.736771142 +0000 UTC m=+0.034583973 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:29:10 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:29:10 compute-0 podman[104695]: 2025-10-11 04:29:10.852905537 +0000 UTC m=+0.150718318 container init ebb9c02702a79568f53c90f3556b7e4e6dadeffc0374ba1ed90124e7de646464 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_meninsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:29:10 compute-0 podman[104695]: 2025-10-11 04:29:10.861684325 +0000 UTC m=+0.159497086 container start ebb9c02702a79568f53c90f3556b7e4e6dadeffc0374ba1ed90124e7de646464 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_meninsky, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 11 04:29:10 compute-0 podman[104695]: 2025-10-11 04:29:10.865145912 +0000 UTC m=+0.162958663 container attach ebb9c02702a79568f53c90f3556b7e4e6dadeffc0374ba1ed90124e7de646464 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_meninsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 11 04:29:10 compute-0 magical_meninsky[104711]: 167 167
Oct 11 04:29:10 compute-0 systemd[1]: libpod-ebb9c02702a79568f53c90f3556b7e4e6dadeffc0374ba1ed90124e7de646464.scope: Deactivated successfully.
Oct 11 04:29:10 compute-0 podman[104695]: 2025-10-11 04:29:10.866569277 +0000 UTC m=+0.164382038 container died ebb9c02702a79568f53c90f3556b7e4e6dadeffc0374ba1ed90124e7de646464 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_meninsky, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:29:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-2d0b82c73c9062e9f04e0faefdb36771414335c6aaedadadacdcc6c20cd57635-merged.mount: Deactivated successfully.
Oct 11 04:29:10 compute-0 podman[104695]: 2025-10-11 04:29:10.899408835 +0000 UTC m=+0.197221626 container remove ebb9c02702a79568f53c90f3556b7e4e6dadeffc0374ba1ed90124e7de646464 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:29:10 compute-0 ceph-mon[74243]: 8.5 scrub starts
Oct 11 04:29:10 compute-0 ceph-mon[74243]: 8.5 scrub ok
Oct 11 04:29:10 compute-0 ceph-mon[74243]: pgmap v178: 305 pgs: 305 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:29:10 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Oct 11 04:29:10 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:29:10 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:29:10 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:29:10 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:29:10 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:29:10 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:29:10 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Oct 11 04:29:10 compute-0 ceph-mon[74243]: osdmap e91: 3 total, 3 up, 3 in
Oct 11 04:29:10 compute-0 systemd[1]: libpod-conmon-ebb9c02702a79568f53c90f3556b7e4e6dadeffc0374ba1ed90124e7de646464.scope: Deactivated successfully.
Oct 11 04:29:11 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 91 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=91 pruub=9.350275040s) [0] r=-1 lpr=91 pi=[67,91)/1 crt=41'581 mlcod 0'0 active pruub 139.084411621s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:29:11 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 91 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=91 pruub=9.349936485s) [0] r=-1 lpr=91 pi=[67,91)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 139.084411621s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:29:11 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 91 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=91) [0] r=0 lpr=91 pi=[67,91)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:29:11 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 2.f scrub starts
Oct 11 04:29:11 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 2.f scrub ok
Oct 11 04:29:11 compute-0 podman[104735]: 2025-10-11 04:29:11.110839834 +0000 UTC m=+0.050121180 container create f57d6d4807086de35f322ac3b72188fa72dce9c0278d7f8a30a1678020010056 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_sutherland, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 11 04:29:11 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 91 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=90/91 n=6 ec=49/34 lis/c=56/56 les/c/f=57/57/0 sis=90) [1]/[0] async=[1] r=0 lpr=90 pi=[56,90)/1 crt=41'581 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:29:11 compute-0 systemd[1]: Started libpod-conmon-f57d6d4807086de35f322ac3b72188fa72dce9c0278d7f8a30a1678020010056.scope.
Oct 11 04:29:11 compute-0 podman[104735]: 2025-10-11 04:29:11.091895782 +0000 UTC m=+0.031176998 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:29:11 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:29:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e12b30fc55c1c6b0ec70237453a864328c40e944263bdddc00f780d5abc9782f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:29:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e12b30fc55c1c6b0ec70237453a864328c40e944263bdddc00f780d5abc9782f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:29:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e12b30fc55c1c6b0ec70237453a864328c40e944263bdddc00f780d5abc9782f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:29:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e12b30fc55c1c6b0ec70237453a864328c40e944263bdddc00f780d5abc9782f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:29:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e12b30fc55c1c6b0ec70237453a864328c40e944263bdddc00f780d5abc9782f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:29:11 compute-0 podman[104735]: 2025-10-11 04:29:11.211761949 +0000 UTC m=+0.151043225 container init f57d6d4807086de35f322ac3b72188fa72dce9c0278d7f8a30a1678020010056 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:29:11 compute-0 podman[104735]: 2025-10-11 04:29:11.224269461 +0000 UTC m=+0.163550657 container start f57d6d4807086de35f322ac3b72188fa72dce9c0278d7f8a30a1678020010056 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 11 04:29:11 compute-0 podman[104735]: 2025-10-11 04:29:11.228488826 +0000 UTC m=+0.167770052 container attach f57d6d4807086de35f322ac3b72188fa72dce9c0278d7f8a30a1678020010056 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:29:11 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Oct 11 04:29:11 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Oct 11 04:29:11 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Oct 11 04:29:11 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 92 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=92) [0]/[2] r=0 lpr=92 pi=[67,92)/1 crt=41'581 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:29:11 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 92 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=92) [0]/[2] r=0 lpr=92 pi=[67,92)/1 crt=41'581 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 04:29:11 compute-0 ceph-mon[74243]: 10.18 scrub starts
Oct 11 04:29:11 compute-0 ceph-mon[74243]: 10.18 scrub ok
Oct 11 04:29:11 compute-0 ceph-mon[74243]: 2.f scrub starts
Oct 11 04:29:11 compute-0 ceph-mon[74243]: 2.f scrub ok
Oct 11 04:29:11 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 92 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=92) [0]/[2] r=-1 lpr=92 pi=[67,92)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:29:11 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 92 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=92) [0]/[2] r=-1 lpr=92 pi=[67,92)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 04:29:11 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 92 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=90/91 n=6 ec=49/34 lis/c=90/56 les/c/f=91/57/0 sis=92 pruub=15.195237160s) [1] async=[1] r=-1 lpr=92 pi=[56,92)/1 crt=41'581 mlcod 41'581 active pruub 156.293350220s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:29:11 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 92 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=90/91 n=6 ec=49/34 lis/c=90/56 les/c/f=91/57/0 sis=92 pruub=15.194863319s) [1] r=-1 lpr=92 pi=[56,92)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 156.293350220s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:29:11 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 92 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=90/56 les/c/f=91/57/0 sis=92) [1] r=0 lpr=92 pi=[56,92)/1 luod=0'0 crt=41'581 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:29:11 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 92 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=90/56 les/c/f=91/57/0 sis=92) [1] r=0 lpr=92 pi=[56,92)/1 crt=41'581 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:29:12 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v181: 305 pgs: 1 unknown, 1 active+remapped, 303 active+clean; 456 KiB data, 122 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:29:12 compute-0 thirsty_sutherland[104752]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:29:12 compute-0 thirsty_sutherland[104752]: --> relative data size: 1.0
Oct 11 04:29:12 compute-0 thirsty_sutherland[104752]: --> All data devices are unavailable
Oct 11 04:29:12 compute-0 systemd[1]: libpod-f57d6d4807086de35f322ac3b72188fa72dce9c0278d7f8a30a1678020010056.scope: Deactivated successfully.
Oct 11 04:29:12 compute-0 systemd[1]: libpod-f57d6d4807086de35f322ac3b72188fa72dce9c0278d7f8a30a1678020010056.scope: Consumed 1.117s CPU time.
Oct 11 04:29:12 compute-0 podman[104735]: 2025-10-11 04:29:12.389148493 +0000 UTC m=+1.328429739 container died f57d6d4807086de35f322ac3b72188fa72dce9c0278d7f8a30a1678020010056 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_sutherland, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 11 04:29:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-e12b30fc55c1c6b0ec70237453a864328c40e944263bdddc00f780d5abc9782f-merged.mount: Deactivated successfully.
Oct 11 04:29:12 compute-0 podman[104735]: 2025-10-11 04:29:12.460665425 +0000 UTC m=+1.399946631 container remove f57d6d4807086de35f322ac3b72188fa72dce9c0278d7f8a30a1678020010056 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_sutherland, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 11 04:29:12 compute-0 systemd[1]: libpod-conmon-f57d6d4807086de35f322ac3b72188fa72dce9c0278d7f8a30a1678020010056.scope: Deactivated successfully.
Oct 11 04:29:12 compute-0 sudo[104630]: pam_unix(sudo:session): session closed for user root
Oct 11 04:29:12 compute-0 sudo[104791]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:29:12 compute-0 sudo[104791]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:29:12 compute-0 sudo[104791]: pam_unix(sudo:session): session closed for user root
Oct 11 04:29:12 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Oct 11 04:29:12 compute-0 sudo[104816]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:29:12 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Oct 11 04:29:12 compute-0 sudo[104816]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:29:12 compute-0 sudo[104816]: pam_unix(sudo:session): session closed for user root
Oct 11 04:29:12 compute-0 sudo[104841]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:29:12 compute-0 sudo[104841]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:29:12 compute-0 sudo[104841]: pam_unix(sudo:session): session closed for user root
Oct 11 04:29:12 compute-0 sudo[104866]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -- lvm list --format json
Oct 11 04:29:12 compute-0 sudo[104866]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:29:12 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Oct 11 04:29:12 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Oct 11 04:29:12 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Oct 11 04:29:12 compute-0 ceph-mon[74243]: osdmap e92: 3 total, 3 up, 3 in
Oct 11 04:29:12 compute-0 ceph-mon[74243]: pgmap v181: 305 pgs: 1 unknown, 1 active+remapped, 303 active+clean; 456 KiB data, 122 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:29:12 compute-0 ceph-mon[74243]: osdmap e93: 3 total, 3 up, 3 in
Oct 11 04:29:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 93 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=92/93 n=6 ec=49/34 lis/c=90/56 les/c/f=91/57/0 sis=92) [1] r=0 lpr=92 pi=[56,92)/1 crt=41'581 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:29:13 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 10.7 deep-scrub starts
Oct 11 04:29:13 compute-0 podman[104932]: 2025-10-11 04:29:13.09643802 +0000 UTC m=+0.048151531 container create 6f0452c2d06823f77f0e90ee6a01eaf78eac11baf44e4045aa48222a2e0ce67e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_northcutt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 11 04:29:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 93 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=92/93 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=92) [0]/[2] async=[0] r=0 lpr=92 pi=[67,92)/1 crt=41'581 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:29:13 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 10.7 deep-scrub ok
Oct 11 04:29:13 compute-0 systemd[1]: Started libpod-conmon-6f0452c2d06823f77f0e90ee6a01eaf78eac11baf44e4045aa48222a2e0ce67e.scope.
Oct 11 04:29:13 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 8.7 scrub starts
Oct 11 04:29:13 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 8.7 scrub ok
Oct 11 04:29:13 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:29:13 compute-0 podman[104932]: 2025-10-11 04:29:13.073428026 +0000 UTC m=+0.025141527 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:29:13 compute-0 podman[104932]: 2025-10-11 04:29:13.168125827 +0000 UTC m=+0.119839308 container init 6f0452c2d06823f77f0e90ee6a01eaf78eac11baf44e4045aa48222a2e0ce67e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_northcutt, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:29:13 compute-0 podman[104932]: 2025-10-11 04:29:13.175792968 +0000 UTC m=+0.127506469 container start 6f0452c2d06823f77f0e90ee6a01eaf78eac11baf44e4045aa48222a2e0ce67e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_northcutt, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 11 04:29:13 compute-0 podman[104932]: 2025-10-11 04:29:13.179837828 +0000 UTC m=+0.131551339 container attach 6f0452c2d06823f77f0e90ee6a01eaf78eac11baf44e4045aa48222a2e0ce67e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_northcutt, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 11 04:29:13 compute-0 upbeat_northcutt[104948]: 167 167
Oct 11 04:29:13 compute-0 systemd[1]: libpod-6f0452c2d06823f77f0e90ee6a01eaf78eac11baf44e4045aa48222a2e0ce67e.scope: Deactivated successfully.
Oct 11 04:29:13 compute-0 conmon[104948]: conmon 6f0452c2d06823f77f0e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6f0452c2d06823f77f0e90ee6a01eaf78eac11baf44e4045aa48222a2e0ce67e.scope/container/memory.events
Oct 11 04:29:13 compute-0 podman[104932]: 2025-10-11 04:29:13.183246263 +0000 UTC m=+0.134959734 container died 6f0452c2d06823f77f0e90ee6a01eaf78eac11baf44e4045aa48222a2e0ce67e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_northcutt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2)
Oct 11 04:29:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-934ddde3ee2553afc710050a1f70d096ce11f9d0209a961be73cb16d47aef03e-merged.mount: Deactivated successfully.
Oct 11 04:29:13 compute-0 podman[104932]: 2025-10-11 04:29:13.225884386 +0000 UTC m=+0.177597887 container remove 6f0452c2d06823f77f0e90ee6a01eaf78eac11baf44e4045aa48222a2e0ce67e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_northcutt, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 11 04:29:13 compute-0 systemd[1]: libpod-conmon-6f0452c2d06823f77f0e90ee6a01eaf78eac11baf44e4045aa48222a2e0ce67e.scope: Deactivated successfully.
Oct 11 04:29:13 compute-0 podman[104969]: 2025-10-11 04:29:13.430589258 +0000 UTC m=+0.071895303 container create d3537bebd7f3bcbef586219f9e858bdbca138a86726b30261bc95669e690e20f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_meninsky, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 11 04:29:13 compute-0 systemd[1]: Started libpod-conmon-d3537bebd7f3bcbef586219f9e858bdbca138a86726b30261bc95669e690e20f.scope.
Oct 11 04:29:13 compute-0 podman[104969]: 2025-10-11 04:29:13.40137954 +0000 UTC m=+0.042685615 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:29:13 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:29:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c97372f66668a5d8f53cf6480c44ec534f780a185cf3e5ddb4874ee6a1cdd6f1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:29:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c97372f66668a5d8f53cf6480c44ec534f780a185cf3e5ddb4874ee6a1cdd6f1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:29:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c97372f66668a5d8f53cf6480c44ec534f780a185cf3e5ddb4874ee6a1cdd6f1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:29:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c97372f66668a5d8f53cf6480c44ec534f780a185cf3e5ddb4874ee6a1cdd6f1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:29:13 compute-0 podman[104969]: 2025-10-11 04:29:13.546295261 +0000 UTC m=+0.187601337 container init d3537bebd7f3bcbef586219f9e858bdbca138a86726b30261bc95669e690e20f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_meninsky, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:29:13 compute-0 podman[104969]: 2025-10-11 04:29:13.558156377 +0000 UTC m=+0.199462372 container start d3537bebd7f3bcbef586219f9e858bdbca138a86726b30261bc95669e690e20f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_meninsky, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 11 04:29:13 compute-0 podman[104969]: 2025-10-11 04:29:13.5614709 +0000 UTC m=+0.202776935 container attach d3537bebd7f3bcbef586219f9e858bdbca138a86726b30261bc95669e690e20f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_meninsky, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:29:13 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 10.1c scrub starts
Oct 11 04:29:13 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 10.1c scrub ok
Oct 11 04:29:13 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Oct 11 04:29:13 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Oct 11 04:29:13 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Oct 11 04:29:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 94 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=92/67 les/c/f=93/68/0 sis=94) [0] r=0 lpr=94 pi=[67,94)/1 luod=0'0 crt=41'581 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:29:13 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 94 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=92/67 les/c/f=93/68/0 sis=94) [0] r=0 lpr=94 pi=[67,94)/1 crt=41'581 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:29:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 94 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=92/93 n=6 ec=49/34 lis/c=92/67 les/c/f=93/68/0 sis=94 pruub=15.118905067s) [0] async=[0] r=-1 lpr=94 pi=[67,94)/1 crt=41'581 mlcod 41'581 active pruub 147.797927856s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:29:13 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 94 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=92/93 n=6 ec=49/34 lis/c=92/67 les/c/f=93/68/0 sis=94 pruub=15.118811607s) [0] r=-1 lpr=94 pi=[67,94)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 147.797927856s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:29:13 compute-0 ceph-mon[74243]: 10.1b scrub starts
Oct 11 04:29:13 compute-0 ceph-mon[74243]: 10.1b scrub ok
Oct 11 04:29:13 compute-0 ceph-mon[74243]: 10.7 deep-scrub starts
Oct 11 04:29:13 compute-0 ceph-mon[74243]: 10.7 deep-scrub ok
Oct 11 04:29:13 compute-0 ceph-mon[74243]: 8.7 scrub starts
Oct 11 04:29:13 compute-0 ceph-mon[74243]: 8.7 scrub ok
Oct 11 04:29:14 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v184: 305 pgs: 1 activating+remapped, 1 active+remapped, 303 active+clean; 456 KiB data, 122 MiB used, 60 GiB / 60 GiB avail; 6/244 objects misplaced (2.459%); 54 B/s, 2 objects/s recovering
Oct 11 04:29:14 compute-0 frosty_meninsky[104985]: {
Oct 11 04:29:14 compute-0 frosty_meninsky[104985]:     "0": [
Oct 11 04:29:14 compute-0 frosty_meninsky[104985]:         {
Oct 11 04:29:14 compute-0 frosty_meninsky[104985]:             "devices": [
Oct 11 04:29:14 compute-0 frosty_meninsky[104985]:                 "/dev/loop3"
Oct 11 04:29:14 compute-0 frosty_meninsky[104985]:             ],
Oct 11 04:29:14 compute-0 frosty_meninsky[104985]:             "lv_name": "ceph_lv0",
Oct 11 04:29:14 compute-0 frosty_meninsky[104985]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:29:14 compute-0 frosty_meninsky[104985]:             "lv_size": "21470642176",
Oct 11 04:29:14 compute-0 frosty_meninsky[104985]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29ed28f5-c2da-4c6f-bb64-dc7391248f4a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:29:14 compute-0 frosty_meninsky[104985]:             "lv_uuid": "SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf",
Oct 11 04:29:14 compute-0 frosty_meninsky[104985]:             "name": "ceph_lv0",
Oct 11 04:29:14 compute-0 frosty_meninsky[104985]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:29:14 compute-0 frosty_meninsky[104985]:             "tags": {
Oct 11 04:29:14 compute-0 frosty_meninsky[104985]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:29:14 compute-0 frosty_meninsky[104985]:                 "ceph.block_uuid": "SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf",
Oct 11 04:29:14 compute-0 frosty_meninsky[104985]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:29:14 compute-0 frosty_meninsky[104985]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:29:14 compute-0 frosty_meninsky[104985]:                 "ceph.cluster_name": "ceph",
Oct 11 04:29:14 compute-0 frosty_meninsky[104985]:                 "ceph.crush_device_class": "",
Oct 11 04:29:14 compute-0 frosty_meninsky[104985]:                 "ceph.encrypted": "0",
Oct 11 04:29:14 compute-0 frosty_meninsky[104985]:                 "ceph.osd_fsid": "29ed28f5-c2da-4c6f-bb64-dc7391248f4a",
Oct 11 04:29:14 compute-0 frosty_meninsky[104985]:                 "ceph.osd_id": "0",
Oct 11 04:29:14 compute-0 frosty_meninsky[104985]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:29:14 compute-0 frosty_meninsky[104985]:                 "ceph.type": "block",
Oct 11 04:29:14 compute-0 frosty_meninsky[104985]:                 "ceph.vdo": "0"
Oct 11 04:29:14 compute-0 frosty_meninsky[104985]:             },
Oct 11 04:29:14 compute-0 frosty_meninsky[104985]:             "type": "block",
Oct 11 04:29:14 compute-0 frosty_meninsky[104985]:             "vg_name": "ceph_vg0"
Oct 11 04:29:14 compute-0 frosty_meninsky[104985]:         }
Oct 11 04:29:14 compute-0 frosty_meninsky[104985]:     ],
Oct 11 04:29:14 compute-0 frosty_meninsky[104985]:     "1": [
Oct 11 04:29:14 compute-0 frosty_meninsky[104985]:         {
Oct 11 04:29:14 compute-0 frosty_meninsky[104985]:             "devices": [
Oct 11 04:29:14 compute-0 frosty_meninsky[104985]:                 "/dev/loop4"
Oct 11 04:29:14 compute-0 frosty_meninsky[104985]:             ],
Oct 11 04:29:14 compute-0 frosty_meninsky[104985]:             "lv_name": "ceph_lv1",
Oct 11 04:29:14 compute-0 frosty_meninsky[104985]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:29:14 compute-0 frosty_meninsky[104985]:             "lv_size": "21470642176",
Oct 11 04:29:14 compute-0 frosty_meninsky[104985]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=235849fc-4683-43e5-9b6a-a0d6f8d1cee8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:29:14 compute-0 frosty_meninsky[104985]:             "lv_uuid": "N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol",
Oct 11 04:29:14 compute-0 frosty_meninsky[104985]:             "name": "ceph_lv1",
Oct 11 04:29:14 compute-0 frosty_meninsky[104985]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:29:14 compute-0 frosty_meninsky[104985]:             "tags": {
Oct 11 04:29:14 compute-0 frosty_meninsky[104985]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:29:14 compute-0 frosty_meninsky[104985]:                 "ceph.block_uuid": "N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol",
Oct 11 04:29:14 compute-0 frosty_meninsky[104985]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:29:14 compute-0 frosty_meninsky[104985]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:29:14 compute-0 frosty_meninsky[104985]:                 "ceph.cluster_name": "ceph",
Oct 11 04:29:14 compute-0 frosty_meninsky[104985]:                 "ceph.crush_device_class": "",
Oct 11 04:29:14 compute-0 frosty_meninsky[104985]:                 "ceph.encrypted": "0",
Oct 11 04:29:14 compute-0 frosty_meninsky[104985]:                 "ceph.osd_fsid": "235849fc-4683-43e5-9b6a-a0d6f8d1cee8",
Oct 11 04:29:14 compute-0 frosty_meninsky[104985]:                 "ceph.osd_id": "1",
Oct 11 04:29:14 compute-0 frosty_meninsky[104985]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:29:14 compute-0 frosty_meninsky[104985]:                 "ceph.type": "block",
Oct 11 04:29:14 compute-0 frosty_meninsky[104985]:                 "ceph.vdo": "0"
Oct 11 04:29:14 compute-0 frosty_meninsky[104985]:             },
Oct 11 04:29:14 compute-0 frosty_meninsky[104985]:             "type": "block",
Oct 11 04:29:14 compute-0 frosty_meninsky[104985]:             "vg_name": "ceph_vg1"
Oct 11 04:29:14 compute-0 frosty_meninsky[104985]:         }
Oct 11 04:29:14 compute-0 frosty_meninsky[104985]:     ],
Oct 11 04:29:14 compute-0 frosty_meninsky[104985]:     "2": [
Oct 11 04:29:14 compute-0 frosty_meninsky[104985]:         {
Oct 11 04:29:14 compute-0 frosty_meninsky[104985]:             "devices": [
Oct 11 04:29:14 compute-0 frosty_meninsky[104985]:                 "/dev/loop5"
Oct 11 04:29:14 compute-0 frosty_meninsky[104985]:             ],
Oct 11 04:29:14 compute-0 frosty_meninsky[104985]:             "lv_name": "ceph_lv2",
Oct 11 04:29:14 compute-0 frosty_meninsky[104985]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:29:14 compute-0 frosty_meninsky[104985]:             "lv_size": "21470642176",
Oct 11 04:29:14 compute-0 frosty_meninsky[104985]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=30486846-2cc7-4787-8e36-0fb42ef328c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:29:14 compute-0 frosty_meninsky[104985]:             "lv_uuid": "HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a",
Oct 11 04:29:14 compute-0 frosty_meninsky[104985]:             "name": "ceph_lv2",
Oct 11 04:29:14 compute-0 frosty_meninsky[104985]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:29:14 compute-0 frosty_meninsky[104985]:             "tags": {
Oct 11 04:29:14 compute-0 frosty_meninsky[104985]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:29:14 compute-0 frosty_meninsky[104985]:                 "ceph.block_uuid": "HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a",
Oct 11 04:29:14 compute-0 frosty_meninsky[104985]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:29:14 compute-0 frosty_meninsky[104985]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:29:14 compute-0 frosty_meninsky[104985]:                 "ceph.cluster_name": "ceph",
Oct 11 04:29:14 compute-0 frosty_meninsky[104985]:                 "ceph.crush_device_class": "",
Oct 11 04:29:14 compute-0 frosty_meninsky[104985]:                 "ceph.encrypted": "0",
Oct 11 04:29:14 compute-0 frosty_meninsky[104985]:                 "ceph.osd_fsid": "30486846-2cc7-4787-8e36-0fb42ef328c5",
Oct 11 04:29:14 compute-0 frosty_meninsky[104985]:                 "ceph.osd_id": "2",
Oct 11 04:29:14 compute-0 frosty_meninsky[104985]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:29:14 compute-0 frosty_meninsky[104985]:                 "ceph.type": "block",
Oct 11 04:29:14 compute-0 frosty_meninsky[104985]:                 "ceph.vdo": "0"
Oct 11 04:29:14 compute-0 frosty_meninsky[104985]:             },
Oct 11 04:29:14 compute-0 frosty_meninsky[104985]:             "type": "block",
Oct 11 04:29:14 compute-0 frosty_meninsky[104985]:             "vg_name": "ceph_vg2"
Oct 11 04:29:14 compute-0 frosty_meninsky[104985]:         }
Oct 11 04:29:14 compute-0 frosty_meninsky[104985]:     ]
Oct 11 04:29:14 compute-0 frosty_meninsky[104985]: }
Oct 11 04:29:14 compute-0 systemd[1]: libpod-d3537bebd7f3bcbef586219f9e858bdbca138a86726b30261bc95669e690e20f.scope: Deactivated successfully.
Oct 11 04:29:14 compute-0 podman[104969]: 2025-10-11 04:29:14.330978268 +0000 UTC m=+0.972284263 container died d3537bebd7f3bcbef586219f9e858bdbca138a86726b30261bc95669e690e20f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_meninsky, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 11 04:29:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-c97372f66668a5d8f53cf6480c44ec534f780a185cf3e5ddb4874ee6a1cdd6f1-merged.mount: Deactivated successfully.
Oct 11 04:29:14 compute-0 podman[104969]: 2025-10-11 04:29:14.400258264 +0000 UTC m=+1.041564299 container remove d3537bebd7f3bcbef586219f9e858bdbca138a86726b30261bc95669e690e20f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:29:14 compute-0 systemd[1]: libpod-conmon-d3537bebd7f3bcbef586219f9e858bdbca138a86726b30261bc95669e690e20f.scope: Deactivated successfully.
Oct 11 04:29:14 compute-0 sudo[104866]: pam_unix(sudo:session): session closed for user root
Oct 11 04:29:14 compute-0 sudo[105008]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:29:14 compute-0 sudo[105008]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:29:14 compute-0 sudo[105008]: pam_unix(sudo:session): session closed for user root
Oct 11 04:29:14 compute-0 sudo[105033]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:29:14 compute-0 sudo[105033]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:29:14 compute-0 sudo[105033]: pam_unix(sudo:session): session closed for user root
Oct 11 04:29:14 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Oct 11 04:29:14 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Oct 11 04:29:14 compute-0 sudo[105058]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:29:14 compute-0 sudo[105058]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:29:14 compute-0 sudo[105058]: pam_unix(sudo:session): session closed for user root
Oct 11 04:29:14 compute-0 sudo[105083]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -- raw list --format json
Oct 11 04:29:14 compute-0 sudo[105083]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:29:14 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e94 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:29:14 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Oct 11 04:29:14 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Oct 11 04:29:14 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Oct 11 04:29:15 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 95 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=94/95 n=6 ec=49/34 lis/c=92/67 les/c/f=93/68/0 sis=94) [0] r=0 lpr=94 pi=[67,94)/1 crt=41'581 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:29:15 compute-0 ceph-mon[74243]: 10.1c scrub starts
Oct 11 04:29:15 compute-0 ceph-mon[74243]: 10.1c scrub ok
Oct 11 04:29:15 compute-0 ceph-mon[74243]: osdmap e94: 3 total, 3 up, 3 in
Oct 11 04:29:15 compute-0 ceph-mon[74243]: pgmap v184: 305 pgs: 1 activating+remapped, 1 active+remapped, 303 active+clean; 456 KiB data, 122 MiB used, 60 GiB / 60 GiB avail; 6/244 objects misplaced (2.459%); 54 B/s, 2 objects/s recovering
Oct 11 04:29:15 compute-0 ceph-mon[74243]: osdmap e95: 3 total, 3 up, 3 in
Oct 11 04:29:15 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 8.8 scrub starts
Oct 11 04:29:15 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 8.8 scrub ok
Oct 11 04:29:15 compute-0 podman[105148]: 2025-10-11 04:29:15.194518868 +0000 UTC m=+0.064102588 container create b0c4ea2c05ad7089db9a6e8ddfedba2320734bf45df9cb2b5500270177b1b21c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_kepler, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 11 04:29:15 compute-0 systemd[1]: Started libpod-conmon-b0c4ea2c05ad7089db9a6e8ddfedba2320734bf45df9cb2b5500270177b1b21c.scope.
Oct 11 04:29:15 compute-0 podman[105148]: 2025-10-11 04:29:15.170563011 +0000 UTC m=+0.040146781 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:29:15 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:29:15 compute-0 podman[105148]: 2025-10-11 04:29:15.284426349 +0000 UTC m=+0.154010069 container init b0c4ea2c05ad7089db9a6e8ddfedba2320734bf45df9cb2b5500270177b1b21c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_kepler, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:29:15 compute-0 podman[105148]: 2025-10-11 04:29:15.29570811 +0000 UTC m=+0.165291820 container start b0c4ea2c05ad7089db9a6e8ddfedba2320734bf45df9cb2b5500270177b1b21c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_kepler, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 04:29:15 compute-0 podman[105148]: 2025-10-11 04:29:15.299870964 +0000 UTC m=+0.169454724 container attach b0c4ea2c05ad7089db9a6e8ddfedba2320734bf45df9cb2b5500270177b1b21c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_kepler, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:29:15 compute-0 goofy_kepler[105164]: 167 167
Oct 11 04:29:15 compute-0 systemd[1]: libpod-b0c4ea2c05ad7089db9a6e8ddfedba2320734bf45df9cb2b5500270177b1b21c.scope: Deactivated successfully.
Oct 11 04:29:15 compute-0 conmon[105164]: conmon b0c4ea2c05ad7089db9a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b0c4ea2c05ad7089db9a6e8ddfedba2320734bf45df9cb2b5500270177b1b21c.scope/container/memory.events
Oct 11 04:29:15 compute-0 podman[105148]: 2025-10-11 04:29:15.303419552 +0000 UTC m=+0.173003272 container died b0c4ea2c05ad7089db9a6e8ddfedba2320734bf45df9cb2b5500270177b1b21c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_kepler, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 11 04:29:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-bf8244afb02288e1a9ab126bbf69fca00f9471c5d79a7c62f798f43b40025e16-merged.mount: Deactivated successfully.
Oct 11 04:29:15 compute-0 podman[105148]: 2025-10-11 04:29:15.358431673 +0000 UTC m=+0.228015383 container remove b0c4ea2c05ad7089db9a6e8ddfedba2320734bf45df9cb2b5500270177b1b21c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_kepler, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:29:15 compute-0 systemd[1]: libpod-conmon-b0c4ea2c05ad7089db9a6e8ddfedba2320734bf45df9cb2b5500270177b1b21c.scope: Deactivated successfully.
Oct 11 04:29:15 compute-0 podman[105188]: 2025-10-11 04:29:15.570662653 +0000 UTC m=+0.048780867 container create 0aad60d45a236424d94f65d6b643e01515e23f3fef381548797d826df35967fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_roentgen, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:29:15 compute-0 systemd[1]: Started libpod-conmon-0aad60d45a236424d94f65d6b643e01515e23f3fef381548797d826df35967fc.scope.
Oct 11 04:29:15 compute-0 podman[105188]: 2025-10-11 04:29:15.54488913 +0000 UTC m=+0.023007384 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:29:15 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:29:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8013a8597aaa26aa6ce58524534a3bcdb27825c5456afee22c7912c5584d2ef0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:29:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8013a8597aaa26aa6ce58524534a3bcdb27825c5456afee22c7912c5584d2ef0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:29:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8013a8597aaa26aa6ce58524534a3bcdb27825c5456afee22c7912c5584d2ef0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:29:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8013a8597aaa26aa6ce58524534a3bcdb27825c5456afee22c7912c5584d2ef0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:29:15 compute-0 podman[105188]: 2025-10-11 04:29:15.672592653 +0000 UTC m=+0.150710837 container init 0aad60d45a236424d94f65d6b643e01515e23f3fef381548797d826df35967fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_roentgen, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Oct 11 04:29:15 compute-0 podman[105188]: 2025-10-11 04:29:15.684165191 +0000 UTC m=+0.162283405 container start 0aad60d45a236424d94f65d6b643e01515e23f3fef381548797d826df35967fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_roentgen, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:29:15 compute-0 podman[105188]: 2025-10-11 04:29:15.688484519 +0000 UTC m=+0.166602723 container attach 0aad60d45a236424d94f65d6b643e01515e23f3fef381548797d826df35967fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_roentgen, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 11 04:29:16 compute-0 ceph-mon[74243]: 10.1d scrub starts
Oct 11 04:29:16 compute-0 ceph-mon[74243]: 10.1d scrub ok
Oct 11 04:29:16 compute-0 ceph-mon[74243]: 8.8 scrub starts
Oct 11 04:29:16 compute-0 ceph-mon[74243]: 8.8 scrub ok
Oct 11 04:29:16 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v186: 305 pgs: 1 activating+remapped, 1 active+remapped, 303 active+clean; 456 KiB data, 122 MiB used, 60 GiB / 60 GiB avail; 5.3 KiB/s rd, 247 B/s wr, 11 op/s; 6/244 objects misplaced (2.459%); 53 B/s, 2 objects/s recovering
Oct 11 04:29:16 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 8.a scrub starts
Oct 11 04:29:16 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 8.a scrub ok
Oct 11 04:29:16 compute-0 awesome_roentgen[105204]: {
Oct 11 04:29:16 compute-0 awesome_roentgen[105204]:     "235849fc-4683-43e5-9b6a-a0d6f8d1cee8": {
Oct 11 04:29:16 compute-0 awesome_roentgen[105204]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:29:16 compute-0 awesome_roentgen[105204]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:29:16 compute-0 awesome_roentgen[105204]:         "osd_id": 1,
Oct 11 04:29:16 compute-0 awesome_roentgen[105204]:         "osd_uuid": "235849fc-4683-43e5-9b6a-a0d6f8d1cee8",
Oct 11 04:29:16 compute-0 awesome_roentgen[105204]:         "type": "bluestore"
Oct 11 04:29:16 compute-0 awesome_roentgen[105204]:     },
Oct 11 04:29:16 compute-0 awesome_roentgen[105204]:     "29ed28f5-c2da-4c6f-bb64-dc7391248f4a": {
Oct 11 04:29:16 compute-0 awesome_roentgen[105204]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:29:16 compute-0 awesome_roentgen[105204]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:29:16 compute-0 awesome_roentgen[105204]:         "osd_id": 0,
Oct 11 04:29:16 compute-0 awesome_roentgen[105204]:         "osd_uuid": "29ed28f5-c2da-4c6f-bb64-dc7391248f4a",
Oct 11 04:29:16 compute-0 awesome_roentgen[105204]:         "type": "bluestore"
Oct 11 04:29:16 compute-0 awesome_roentgen[105204]:     },
Oct 11 04:29:16 compute-0 awesome_roentgen[105204]:     "30486846-2cc7-4787-8e36-0fb42ef328c5": {
Oct 11 04:29:16 compute-0 awesome_roentgen[105204]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:29:16 compute-0 awesome_roentgen[105204]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:29:16 compute-0 awesome_roentgen[105204]:         "osd_id": 2,
Oct 11 04:29:16 compute-0 awesome_roentgen[105204]:         "osd_uuid": "30486846-2cc7-4787-8e36-0fb42ef328c5",
Oct 11 04:29:16 compute-0 awesome_roentgen[105204]:         "type": "bluestore"
Oct 11 04:29:16 compute-0 awesome_roentgen[105204]:     }
Oct 11 04:29:16 compute-0 awesome_roentgen[105204]: }
Oct 11 04:29:16 compute-0 systemd[1]: libpod-0aad60d45a236424d94f65d6b643e01515e23f3fef381548797d826df35967fc.scope: Deactivated successfully.
Oct 11 04:29:16 compute-0 systemd[1]: libpod-0aad60d45a236424d94f65d6b643e01515e23f3fef381548797d826df35967fc.scope: Consumed 1.034s CPU time.
Oct 11 04:29:16 compute-0 podman[105237]: 2025-10-11 04:29:16.756263321 +0000 UTC m=+0.033009874 container died 0aad60d45a236424d94f65d6b643e01515e23f3fef381548797d826df35967fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_roentgen, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:29:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-8013a8597aaa26aa6ce58524534a3bcdb27825c5456afee22c7912c5584d2ef0-merged.mount: Deactivated successfully.
Oct 11 04:29:16 compute-0 podman[105237]: 2025-10-11 04:29:16.840318475 +0000 UTC m=+0.117064988 container remove 0aad60d45a236424d94f65d6b643e01515e23f3fef381548797d826df35967fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_roentgen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:29:16 compute-0 systemd[1]: libpod-conmon-0aad60d45a236424d94f65d6b643e01515e23f3fef381548797d826df35967fc.scope: Deactivated successfully.
Oct 11 04:29:16 compute-0 sudo[105083]: pam_unix(sudo:session): session closed for user root
Oct 11 04:29:16 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:29:16 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:29:16 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:29:16 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:29:16 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev 756f4827-7d4d-41aa-8868-916b4f03a05d does not exist
Oct 11 04:29:16 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev 65b22dda-912a-4523-bde9-e7fa48731420 does not exist
Oct 11 04:29:17 compute-0 sudo[105252]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:29:17 compute-0 ceph-mon[74243]: pgmap v186: 305 pgs: 1 activating+remapped, 1 active+remapped, 303 active+clean; 456 KiB data, 122 MiB used, 60 GiB / 60 GiB avail; 5.3 KiB/s rd, 247 B/s wr, 11 op/s; 6/244 objects misplaced (2.459%); 53 B/s, 2 objects/s recovering
Oct 11 04:29:17 compute-0 ceph-mon[74243]: 8.a scrub starts
Oct 11 04:29:17 compute-0 sudo[105252]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:29:17 compute-0 ceph-mon[74243]: 8.a scrub ok
Oct 11 04:29:17 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:29:17 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:29:17 compute-0 sudo[105252]: pam_unix(sudo:session): session closed for user root
Oct 11 04:29:17 compute-0 sudo[105277]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 04:29:17 compute-0 sudo[105277]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:29:17 compute-0 sudo[105277]: pam_unix(sudo:session): session closed for user root
Oct 11 04:29:17 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 10.1f scrub starts
Oct 11 04:29:17 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 10.1f scrub ok
Oct 11 04:29:18 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v187: 305 pgs: 305 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 170 B/s wr, 8 op/s; 54 B/s, 2 objects/s recovering
Oct 11 04:29:18 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0) v1
Oct 11 04:29:18 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Oct 11 04:29:18 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Oct 11 04:29:18 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Oct 11 04:29:18 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Oct 11 04:29:18 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Oct 11 04:29:18 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Oct 11 04:29:18 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Oct 11 04:29:18 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Oct 11 04:29:19 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Oct 11 04:29:19 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Oct 11 04:29:19 compute-0 ceph-mon[74243]: 10.1f scrub starts
Oct 11 04:29:19 compute-0 ceph-mon[74243]: 10.1f scrub ok
Oct 11 04:29:19 compute-0 ceph-mon[74243]: pgmap v187: 305 pgs: 305 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 170 B/s wr, 8 op/s; 54 B/s, 2 objects/s recovering
Oct 11 04:29:19 compute-0 ceph-mon[74243]: 5.5 scrub starts
Oct 11 04:29:19 compute-0 ceph-mon[74243]: 5.5 scrub ok
Oct 11 04:29:19 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Oct 11 04:29:19 compute-0 ceph-mon[74243]: osdmap e96: 3 total, 3 up, 3 in
Oct 11 04:29:19 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Oct 11 04:29:19 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Oct 11 04:29:19 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e96 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:29:20 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v189: 305 pgs: 305 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Oct 11 04:29:20 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0) v1
Oct 11 04:29:20 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Oct 11 04:29:20 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Oct 11 04:29:20 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Oct 11 04:29:20 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Oct 11 04:29:20 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Oct 11 04:29:20 compute-0 ceph-mon[74243]: 8.13 scrub starts
Oct 11 04:29:20 compute-0 ceph-mon[74243]: 8.13 scrub ok
Oct 11 04:29:20 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Oct 11 04:29:20 compute-0 sshd-session[105302]: Accepted publickey for zuul from 192.168.122.30 port 36992 ssh2: ECDSA SHA256:fsXfQ2jtOYwhi5zevC2AykH2PVPp1BPcbFOVNPoZIaQ
Oct 11 04:29:20 compute-0 systemd-logind[801]: New session 36 of user zuul.
Oct 11 04:29:20 compute-0 systemd[1]: Started Session 36 of User zuul.
Oct 11 04:29:20 compute-0 sshd-session[105302]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 11 04:29:20 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 4.13 deep-scrub starts
Oct 11 04:29:20 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 4.13 deep-scrub ok
Oct 11 04:29:21 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Oct 11 04:29:21 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Oct 11 04:29:21 compute-0 ceph-mon[74243]: 4.18 scrub starts
Oct 11 04:29:21 compute-0 ceph-mon[74243]: 4.18 scrub ok
Oct 11 04:29:21 compute-0 ceph-mon[74243]: pgmap v189: 305 pgs: 305 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Oct 11 04:29:21 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Oct 11 04:29:21 compute-0 ceph-mon[74243]: osdmap e97: 3 total, 3 up, 3 in
Oct 11 04:29:21 compute-0 python3.9[105455]: ansible-ansible.legacy.ping Invoked with data=pong
Oct 11 04:29:21 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Oct 11 04:29:21 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Oct 11 04:29:22 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v191: 305 pgs: 305 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 15 B/s, 0 objects/s recovering
Oct 11 04:29:22 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0) v1
Oct 11 04:29:22 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Oct 11 04:29:22 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Oct 11 04:29:22 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Oct 11 04:29:22 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Oct 11 04:29:22 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Oct 11 04:29:22 compute-0 ceph-mon[74243]: 4.13 deep-scrub starts
Oct 11 04:29:22 compute-0 ceph-mon[74243]: 4.13 deep-scrub ok
Oct 11 04:29:22 compute-0 ceph-mon[74243]: 2.1f scrub starts
Oct 11 04:29:22 compute-0 ceph-mon[74243]: 2.1f scrub ok
Oct 11 04:29:22 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Oct 11 04:29:22 compute-0 python3.9[105629]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 04:29:23 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 98 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=98 pruub=15.018722534s) [2] r=-1 lpr=98 pi=[57,98)/1 crt=41'581 mlcod 0'0 active pruub 167.257934570s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:29:23 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 98 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=98 pruub=15.018464088s) [2] r=-1 lpr=98 pi=[57,98)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 167.257934570s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:29:23 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 98 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=98) [2] r=0 lpr=98 pi=[57,98)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:29:23 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Oct 11 04:29:23 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Oct 11 04:29:23 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Oct 11 04:29:23 compute-0 ceph-mon[74243]: 4.11 scrub starts
Oct 11 04:29:23 compute-0 ceph-mon[74243]: 4.11 scrub ok
Oct 11 04:29:23 compute-0 ceph-mon[74243]: pgmap v191: 305 pgs: 305 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 15 B/s, 0 objects/s recovering
Oct 11 04:29:23 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Oct 11 04:29:23 compute-0 ceph-mon[74243]: osdmap e98: 3 total, 3 up, 3 in
Oct 11 04:29:23 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Oct 11 04:29:23 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Oct 11 04:29:23 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 99 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=99) [2]/[0] r=0 lpr=99 pi=[57,99)/1 crt=41'581 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:29:23 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 99 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=99) [2]/[0] r=0 lpr=99 pi=[57,99)/1 crt=41'581 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 04:29:23 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 99 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=99) [2]/[0] r=-1 lpr=99 pi=[57,99)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:29:23 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 99 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=99) [2]/[0] r=-1 lpr=99 pi=[57,99)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 04:29:23 compute-0 sudo[105783]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gygfeyjcytcreqmacmozlmehopmhtutj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156963.006052-45-54996915428455/AnsiballZ_command.py'
Oct 11 04:29:23 compute-0 sudo[105783]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:29:23 compute-0 python3.9[105785]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:29:23 compute-0 sudo[105783]: pam_unix(sudo:session): session closed for user root
Oct 11 04:29:24 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v194: 305 pgs: 305 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:29:24 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0) v1
Oct 11 04:29:24 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Oct 11 04:29:24 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 10.17 deep-scrub starts
Oct 11 04:29:24 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 10.17 deep-scrub ok
Oct 11 04:29:24 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Oct 11 04:29:24 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Oct 11 04:29:24 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Oct 11 04:29:24 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Oct 11 04:29:24 compute-0 ceph-mon[74243]: 5.2 scrub starts
Oct 11 04:29:24 compute-0 ceph-mon[74243]: 5.2 scrub ok
Oct 11 04:29:24 compute-0 ceph-mon[74243]: osdmap e99: 3 total, 3 up, 3 in
Oct 11 04:29:24 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Oct 11 04:29:24 compute-0 sudo[105936]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkhcsfhqbojyhzzxtroyuxsmjuqrnrby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156964.0273383-57-81324610533693/AnsiballZ_stat.py'
Oct 11 04:29:24 compute-0 sudo[105936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:29:24 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 100 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=99/100 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=99) [2]/[0] async=[2] r=0 lpr=99 pi=[57,99)/1 crt=41'581 mlcod 0'0 active+remapped mbc={255={(0+1)=11}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:29:24 compute-0 python3.9[105938]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:29:24 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e100 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:29:24 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Oct 11 04:29:24 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Oct 11 04:29:24 compute-0 sudo[105936]: pam_unix(sudo:session): session closed for user root
Oct 11 04:29:24 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Oct 11 04:29:24 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 101 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=99/57 les/c/f=100/58/0 sis=101) [2] r=0 lpr=101 pi=[57,101)/1 luod=0'0 crt=41'581 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:29:24 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 101 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=99/57 les/c/f=100/58/0 sis=101) [2] r=0 lpr=101 pi=[57,101)/1 crt=41'581 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:29:24 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 101 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=99/100 n=6 ec=49/34 lis/c=99/57 les/c/f=100/58/0 sis=101 pruub=15.772900581s) [2] async=[2] r=-1 lpr=101 pi=[57,101)/1 crt=41'581 mlcod 41'581 active pruub 169.743255615s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:29:24 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 101 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=99/100 n=6 ec=49/34 lis/c=99/57 les/c/f=100/58/0 sis=101 pruub=15.772796631s) [2] r=-1 lpr=101 pi=[57,101)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 169.743255615s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:29:25 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 9.2 scrub starts
Oct 11 04:29:25 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 9.2 scrub ok
Oct 11 04:29:25 compute-0 ceph-mon[74243]: pgmap v194: 305 pgs: 305 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:29:25 compute-0 ceph-mon[74243]: 10.17 deep-scrub starts
Oct 11 04:29:25 compute-0 ceph-mon[74243]: 10.17 deep-scrub ok
Oct 11 04:29:25 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Oct 11 04:29:25 compute-0 ceph-mon[74243]: osdmap e100: 3 total, 3 up, 3 in
Oct 11 04:29:25 compute-0 ceph-mon[74243]: osdmap e101: 3 total, 3 up, 3 in
Oct 11 04:29:25 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Oct 11 04:29:25 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Oct 11 04:29:25 compute-0 sudo[106090]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxfetnnbammbdypqhejmzaqaqalvfnjz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156965.1358683-68-224601452751598/AnsiballZ_file.py'
Oct 11 04:29:25 compute-0 sudo[106090]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:29:25 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Oct 11 04:29:25 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Oct 11 04:29:25 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Oct 11 04:29:25 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 102 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=101/102 n=6 ec=49/34 lis/c=99/57 les/c/f=100/58/0 sis=101) [2] r=0 lpr=101 pi=[57,101)/1 crt=41'581 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:29:25 compute-0 python3.9[106092]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:29:25 compute-0 sudo[106090]: pam_unix(sudo:session): session closed for user root
Oct 11 04:29:26 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v198: 305 pgs: 305 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:29:26 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Oct 11 04:29:26 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Oct 11 04:29:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:29:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:29:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:29:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:29:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:29:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:29:26 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 10.d scrub starts
Oct 11 04:29:26 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 10.d scrub ok
Oct 11 04:29:26 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 4.a scrub starts
Oct 11 04:29:26 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 4.a scrub ok
Oct 11 04:29:26 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Oct 11 04:29:26 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Oct 11 04:29:26 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Oct 11 04:29:26 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Oct 11 04:29:26 compute-0 ceph-mon[74243]: 9.2 scrub starts
Oct 11 04:29:26 compute-0 ceph-mon[74243]: 9.2 scrub ok
Oct 11 04:29:26 compute-0 ceph-mon[74243]: 4.1b scrub starts
Oct 11 04:29:26 compute-0 ceph-mon[74243]: 4.1b scrub ok
Oct 11 04:29:26 compute-0 ceph-mon[74243]: osdmap e102: 3 total, 3 up, 3 in
Oct 11 04:29:26 compute-0 ceph-mon[74243]: pgmap v198: 305 pgs: 305 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:29:26 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Oct 11 04:29:26 compute-0 python3.9[106242]: ansible-ansible.builtin.service_facts Invoked
Oct 11 04:29:26 compute-0 network[106259]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 11 04:29:26 compute-0 network[106260]: 'network-scripts' will be removed from distribution in near future.
Oct 11 04:29:26 compute-0 network[106261]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 11 04:29:27 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 8.16 scrub starts
Oct 11 04:29:27 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Oct 11 04:29:27 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 8.16 scrub ok
Oct 11 04:29:27 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Oct 11 04:29:27 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 4.1c deep-scrub starts
Oct 11 04:29:27 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 4.1c deep-scrub ok
Oct 11 04:29:27 compute-0 ceph-mon[74243]: 10.d scrub starts
Oct 11 04:29:27 compute-0 ceph-mon[74243]: 10.d scrub ok
Oct 11 04:29:27 compute-0 ceph-mon[74243]: 4.a scrub starts
Oct 11 04:29:27 compute-0 ceph-mon[74243]: 4.a scrub ok
Oct 11 04:29:27 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Oct 11 04:29:27 compute-0 ceph-mon[74243]: osdmap e103: 3 total, 3 up, 3 in
Oct 11 04:29:28 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v200: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 2 objects/s recovering
Oct 11 04:29:28 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0) v1
Oct 11 04:29:28 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Oct 11 04:29:28 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 2.b scrub starts
Oct 11 04:29:28 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 8.17 scrub starts
Oct 11 04:29:28 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 2.b scrub ok
Oct 11 04:29:28 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 8.17 scrub ok
Oct 11 04:29:28 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Oct 11 04:29:28 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Oct 11 04:29:28 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Oct 11 04:29:28 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Oct 11 04:29:28 compute-0 ceph-mon[74243]: 8.16 scrub starts
Oct 11 04:29:28 compute-0 ceph-mon[74243]: 5.3 scrub starts
Oct 11 04:29:28 compute-0 ceph-mon[74243]: 8.16 scrub ok
Oct 11 04:29:28 compute-0 ceph-mon[74243]: 5.3 scrub ok
Oct 11 04:29:28 compute-0 ceph-mon[74243]: 4.1c deep-scrub starts
Oct 11 04:29:28 compute-0 ceph-mon[74243]: 4.1c deep-scrub ok
Oct 11 04:29:28 compute-0 ceph-mon[74243]: pgmap v200: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 2 objects/s recovering
Oct 11 04:29:28 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Oct 11 04:29:29 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Oct 11 04:29:29 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Oct 11 04:29:29 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 104 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=79/80 n=6 ec=49/34 lis/c=79/79 les/c/f=80/80/0 sis=104 pruub=12.967244148s) [0] r=-1 lpr=104 pi=[79,104)/1 crt=41'581 mlcod 0'0 active pruub 161.408264160s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:29:29 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 104 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=79/80 n=6 ec=49/34 lis/c=79/79 les/c/f=80/80/0 sis=104 pruub=12.966794014s) [0] r=-1 lpr=104 pi=[79,104)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 161.408264160s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:29:29 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 104 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=79/79 les/c/f=80/80/0 sis=104) [0] r=0 lpr=104 pi=[79,104)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:29:29 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e104 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:29:29 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Oct 11 04:29:29 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Oct 11 04:29:29 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Oct 11 04:29:29 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 105 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=79/79 les/c/f=80/80/0 sis=105) [0]/[2] r=-1 lpr=105 pi=[79,105)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:29:29 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 105 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=79/79 les/c/f=80/80/0 sis=105) [0]/[2] r=-1 lpr=105 pi=[79,105)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 04:29:29 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 105 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=79/80 n=6 ec=49/34 lis/c=79/79 les/c/f=80/80/0 sis=105) [0]/[2] r=0 lpr=105 pi=[79,105)/1 crt=41'581 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:29:29 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 105 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=79/80 n=6 ec=49/34 lis/c=79/79 les/c/f=80/80/0 sis=105) [0]/[2] r=0 lpr=105 pi=[79,105)/1 crt=41'581 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 04:29:29 compute-0 ceph-mon[74243]: 2.b scrub starts
Oct 11 04:29:29 compute-0 ceph-mon[74243]: 8.17 scrub starts
Oct 11 04:29:29 compute-0 ceph-mon[74243]: 2.b scrub ok
Oct 11 04:29:29 compute-0 ceph-mon[74243]: 8.17 scrub ok
Oct 11 04:29:29 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Oct 11 04:29:29 compute-0 ceph-mon[74243]: osdmap e104: 3 total, 3 up, 3 in
Oct 11 04:29:29 compute-0 ceph-mon[74243]: osdmap e105: 3 total, 3 up, 3 in
Oct 11 04:29:30 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v203: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 25 B/s, 2 objects/s recovering
Oct 11 04:29:30 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0) v1
Oct 11 04:29:30 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Oct 11 04:29:30 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Oct 11 04:29:30 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Oct 11 04:29:30 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 4.e scrub starts
Oct 11 04:29:30 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 4.e scrub ok
Oct 11 04:29:30 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Oct 11 04:29:30 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Oct 11 04:29:30 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Oct 11 04:29:30 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Oct 11 04:29:30 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 106 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=105/106 n=6 ec=49/34 lis/c=79/79 les/c/f=80/80/0 sis=105) [0]/[2] async=[0] r=0 lpr=105 pi=[79,105)/1 crt=41'581 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:29:30 compute-0 ceph-mon[74243]: 4.1a scrub starts
Oct 11 04:29:30 compute-0 ceph-mon[74243]: 4.1a scrub ok
Oct 11 04:29:30 compute-0 ceph-mon[74243]: pgmap v203: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 25 B/s, 2 objects/s recovering
Oct 11 04:29:30 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Oct 11 04:29:30 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Oct 11 04:29:30 compute-0 ceph-mon[74243]: osdmap e106: 3 total, 3 up, 3 in
Oct 11 04:29:31 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Oct 11 04:29:31 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Oct 11 04:29:31 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 7.1a deep-scrub starts
Oct 11 04:29:31 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 7.1a deep-scrub ok
Oct 11 04:29:31 compute-0 python3.9[106523]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:29:31 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Oct 11 04:29:31 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Oct 11 04:29:31 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Oct 11 04:29:31 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 107 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=105/79 les/c/f=106/80/0 sis=107) [0] r=0 lpr=107 pi=[79,107)/1 luod=0'0 crt=41'581 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:29:31 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 107 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=105/79 les/c/f=106/80/0 sis=107) [0] r=0 lpr=107 pi=[79,107)/1 crt=41'581 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:29:31 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 107 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=105/106 n=6 ec=49/34 lis/c=105/79 les/c/f=106/80/0 sis=107 pruub=14.941355705s) [0] async=[0] r=-1 lpr=107 pi=[79,107)/1 crt=41'581 mlcod 41'581 active pruub 165.531799316s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:29:31 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 107 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=105/106 n=6 ec=49/34 lis/c=105/79 les/c/f=106/80/0 sis=107 pruub=14.941193581s) [0] r=-1 lpr=107 pi=[79,107)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 165.531799316s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:29:31 compute-0 ceph-mon[74243]: 8.19 scrub starts
Oct 11 04:29:31 compute-0 ceph-mon[74243]: 8.19 scrub ok
Oct 11 04:29:31 compute-0 ceph-mon[74243]: 4.e scrub starts
Oct 11 04:29:31 compute-0 ceph-mon[74243]: 4.e scrub ok
Oct 11 04:29:32 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v206: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:29:32 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0) v1
Oct 11 04:29:32 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Oct 11 04:29:32 compute-0 python3.9[106673]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 04:29:32 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Oct 11 04:29:32 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Oct 11 04:29:32 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Oct 11 04:29:32 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Oct 11 04:29:32 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 108 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=108 pruub=11.496529579s) [0] r=-1 lpr=108 pi=[67,108)/1 crt=41'581 mlcod 0'0 active pruub 163.090164185s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:29:32 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 108 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=108 pruub=11.496438026s) [0] r=-1 lpr=108 pi=[67,108)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 163.090164185s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:29:32 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 108 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=108) [0] r=0 lpr=108 pi=[67,108)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:29:32 compute-0 ceph-mon[74243]: 2.8 scrub starts
Oct 11 04:29:32 compute-0 ceph-mon[74243]: 2.8 scrub ok
Oct 11 04:29:32 compute-0 ceph-mon[74243]: 7.1a deep-scrub starts
Oct 11 04:29:32 compute-0 ceph-mon[74243]: 7.1a deep-scrub ok
Oct 11 04:29:32 compute-0 ceph-mon[74243]: osdmap e107: 3 total, 3 up, 3 in
Oct 11 04:29:32 compute-0 ceph-mon[74243]: pgmap v206: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:29:32 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Oct 11 04:29:32 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Oct 11 04:29:32 compute-0 ceph-mon[74243]: osdmap e108: 3 total, 3 up, 3 in
Oct 11 04:29:32 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 108 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=107/108 n=6 ec=49/34 lis/c=105/79 les/c/f=106/80/0 sis=107) [0] r=0 lpr=107 pi=[79,107)/1 crt=41'581 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:29:33 compute-0 python3.9[106827]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 04:29:33 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Oct 11 04:29:33 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Oct 11 04:29:33 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Oct 11 04:29:33 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 109 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=109) [0]/[2] r=-1 lpr=109 pi=[67,109)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:29:33 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 109 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=109) [0]/[2] r=-1 lpr=109 pi=[67,109)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 04:29:33 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 109 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=109) [0]/[2] r=0 lpr=109 pi=[67,109)/1 crt=41'581 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:29:33 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 109 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=109) [0]/[2] r=0 lpr=109 pi=[67,109)/1 crt=41'581 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 04:29:34 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v209: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:29:34 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct 11 04:29:34 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 11 04:29:34 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Oct 11 04:29:34 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Oct 11 04:29:34 compute-0 sudo[106983]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrvsxplzzndtfgywxzoeuizoctsuciat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156974.2178433-116-153747615474869/AnsiballZ_setup.py'
Oct 11 04:29:34 compute-0 sudo[106983]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:29:34 compute-0 python3.9[106985]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 11 04:29:34 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:29:34 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Oct 11 04:29:34 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 11 04:29:34 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Oct 11 04:29:34 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Oct 11 04:29:34 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 110 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=110 pruub=9.466191292s) [1] r=-1 lpr=110 pi=[67,110)/1 crt=41'581 mlcod 0'0 active pruub 163.090164185s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:29:34 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 110 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=110 pruub=9.466117859s) [1] r=-1 lpr=110 pi=[67,110)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 163.090164185s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:29:34 compute-0 ceph-mon[74243]: osdmap e109: 3 total, 3 up, 3 in
Oct 11 04:29:34 compute-0 ceph-mon[74243]: pgmap v209: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:29:34 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 11 04:29:34 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 110 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=110) [1] r=0 lpr=110 pi=[67,110)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:29:35 compute-0 sudo[106983]: pam_unix(sudo:session): session closed for user root
Oct 11 04:29:35 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 110 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=109/110 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=109) [0]/[2] async=[0] r=0 lpr=109 pi=[67,109)/1 crt=41'581 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:29:35 compute-0 sudo[107067]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-houmqkcbdwhxpraenobmxfgfxtqdnwcf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760156974.2178433-116-153747615474869/AnsiballZ_dnf.py'
Oct 11 04:29:35 compute-0 sudo[107067]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:29:35 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Oct 11 04:29:35 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Oct 11 04:29:35 compute-0 python3.9[107069]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 11 04:29:35 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Oct 11 04:29:35 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Oct 11 04:29:35 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Oct 11 04:29:35 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 111 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=111) [1]/[2] r=0 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:29:35 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 111 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=111) [1]/[2] r=0 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 04:29:35 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 111 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=109/110 n=6 ec=49/34 lis/c=109/67 les/c/f=110/68/0 sis=111 pruub=15.246043205s) [0] async=[0] r=-1 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 41'581 active pruub 169.898193359s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:29:35 compute-0 ceph-mon[74243]: 8.1e scrub starts
Oct 11 04:29:35 compute-0 ceph-mon[74243]: 8.1e scrub ok
Oct 11 04:29:35 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 11 04:29:35 compute-0 ceph-mon[74243]: osdmap e110: 3 total, 3 up, 3 in
Oct 11 04:29:35 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 111 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=111) [1]/[2] r=-1 lpr=111 pi=[67,111)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:29:35 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 111 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=109/110 n=6 ec=49/34 lis/c=109/67 les/c/f=110/68/0 sis=111 pruub=15.245329857s) [0] r=-1 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 169.898193359s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:29:35 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 111 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=111) [1]/[2] r=-1 lpr=111 pi=[67,111)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 04:29:35 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 111 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=109/67 les/c/f=110/68/0 sis=111) [0] r=0 lpr=111 pi=[67,111)/1 luod=0'0 crt=41'581 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:29:35 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 111 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=109/67 les/c/f=110/68/0 sis=111) [0] r=0 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:29:36 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v212: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:29:36 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Oct 11 04:29:36 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Oct 11 04:29:36 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Oct 11 04:29:36 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Oct 11 04:29:36 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Oct 11 04:29:36 compute-0 ceph-mon[74243]: 3.1d scrub starts
Oct 11 04:29:36 compute-0 ceph-mon[74243]: 3.1d scrub ok
Oct 11 04:29:36 compute-0 ceph-mon[74243]: osdmap e111: 3 total, 3 up, 3 in
Oct 11 04:29:36 compute-0 ceph-mon[74243]: pgmap v212: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:29:36 compute-0 ceph-mon[74243]: osdmap e112: 3 total, 3 up, 3 in
Oct 11 04:29:36 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 112 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=111/112 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=111) [1]/[2] async=[1] r=0 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:29:36 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 112 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=111/112 n=6 ec=49/34 lis/c=109/67 les/c/f=110/68/0 sis=111) [0] r=0 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:29:37 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Oct 11 04:29:37 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Oct 11 04:29:37 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 10.1 scrub starts
Oct 11 04:29:37 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 10.1 scrub ok
Oct 11 04:29:37 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 8.15 deep-scrub starts
Oct 11 04:29:37 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 8.15 deep-scrub ok
Oct 11 04:29:37 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Oct 11 04:29:37 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Oct 11 04:29:37 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Oct 11 04:29:37 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 113 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=111/112 n=6 ec=49/34 lis/c=111/67 les/c/f=112/68/0 sis=113 pruub=14.973883629s) [1] async=[1] r=-1 lpr=113 pi=[67,113)/1 crt=41'581 mlcod 41'581 active pruub 171.660278320s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:29:37 compute-0 ceph-mon[74243]: 3.1e scrub starts
Oct 11 04:29:37 compute-0 ceph-mon[74243]: 3.1e scrub ok
Oct 11 04:29:37 compute-0 ceph-mon[74243]: 9.4 scrub starts
Oct 11 04:29:37 compute-0 ceph-mon[74243]: 9.4 scrub ok
Oct 11 04:29:37 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 113 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=111/112 n=6 ec=49/34 lis/c=111/67 les/c/f=112/68/0 sis=113 pruub=14.973669052s) [1] r=-1 lpr=113 pi=[67,113)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 171.660278320s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:29:37 compute-0 ceph-mon[74243]: 10.1 scrub starts
Oct 11 04:29:37 compute-0 ceph-mon[74243]: 10.1 scrub ok
Oct 11 04:29:37 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 113 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=111/67 les/c/f=112/68/0 sis=113) [1] r=0 lpr=113 pi=[67,113)/1 luod=0'0 crt=41'581 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:29:37 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 113 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=111/67 les/c/f=112/68/0 sis=113) [1] r=0 lpr=113 pi=[67,113)/1 crt=41'581 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:29:38 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v215: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 3 objects/s recovering
Oct 11 04:29:38 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Oct 11 04:29:38 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Oct 11 04:29:38 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Oct 11 04:29:39 compute-0 ceph-mon[74243]: 8.15 deep-scrub starts
Oct 11 04:29:39 compute-0 ceph-mon[74243]: 8.15 deep-scrub ok
Oct 11 04:29:39 compute-0 ceph-mon[74243]: osdmap e113: 3 total, 3 up, 3 in
Oct 11 04:29:39 compute-0 ceph-mon[74243]: pgmap v215: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 3 objects/s recovering
Oct 11 04:29:39 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 114 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=113/114 n=6 ec=49/34 lis/c=111/67 les/c/f=112/68/0 sis=113) [1] r=0 lpr=113 pi=[67,113)/1 crt=41'581 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:29:39 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 9.a scrub starts
Oct 11 04:29:39 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 9.a scrub ok
Oct 11 04:29:39 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 10.1e scrub starts
Oct 11 04:29:39 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 10.1e scrub ok
Oct 11 04:29:39 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:29:40 compute-0 ceph-mon[74243]: osdmap e114: 3 total, 3 up, 3 in
Oct 11 04:29:40 compute-0 ceph-mon[74243]: 9.a scrub starts
Oct 11 04:29:40 compute-0 ceph-mon[74243]: 9.a scrub ok
Oct 11 04:29:40 compute-0 ceph-mon[74243]: 10.1e scrub starts
Oct 11 04:29:40 compute-0 ceph-mon[74243]: 10.1e scrub ok
Oct 11 04:29:40 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v217: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 26 B/s, 3 objects/s recovering
Oct 11 04:29:41 compute-0 ceph-mon[74243]: pgmap v217: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 26 B/s, 3 objects/s recovering
Oct 11 04:29:41 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Oct 11 04:29:41 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Oct 11 04:29:41 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 11.15 scrub starts
Oct 11 04:29:41 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 11.15 scrub ok
Oct 11 04:29:42 compute-0 ceph-mon[74243]: 2.16 scrub starts
Oct 11 04:29:42 compute-0 ceph-mon[74243]: 2.16 scrub ok
Oct 11 04:29:42 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v218: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 36 B/s, 3 objects/s recovering
Oct 11 04:29:42 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Oct 11 04:29:42 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Oct 11 04:29:43 compute-0 ceph-mon[74243]: 11.15 scrub starts
Oct 11 04:29:43 compute-0 ceph-mon[74243]: 11.15 scrub ok
Oct 11 04:29:43 compute-0 ceph-mon[74243]: pgmap v218: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 36 B/s, 3 objects/s recovering
Oct 11 04:29:43 compute-0 ceph-mon[74243]: 9.10 scrub starts
Oct 11 04:29:43 compute-0 ceph-mon[74243]: 9.10 scrub ok
Oct 11 04:29:43 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Oct 11 04:29:43 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Oct 11 04:29:43 compute-0 PackageKit[31018]: daemon quit
Oct 11 04:29:43 compute-0 systemd[1]: packagekit.service: Deactivated successfully.
Oct 11 04:29:44 compute-0 ceph-mon[74243]: 5.15 scrub starts
Oct 11 04:29:44 compute-0 ceph-mon[74243]: 5.15 scrub ok
Oct 11 04:29:44 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v219: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 15 B/s, 0 objects/s recovering
Oct 11 04:29:44 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Oct 11 04:29:44 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Oct 11 04:29:44 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 8.11 scrub starts
Oct 11 04:29:44 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 8.11 scrub ok
Oct 11 04:29:44 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:29:45 compute-0 ceph-mon[74243]: pgmap v219: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 15 B/s, 0 objects/s recovering
Oct 11 04:29:45 compute-0 ceph-mon[74243]: 9.12 scrub starts
Oct 11 04:29:45 compute-0 ceph-mon[74243]: 9.12 scrub ok
Oct 11 04:29:45 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Oct 11 04:29:45 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Oct 11 04:29:45 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Oct 11 04:29:45 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Oct 11 04:29:46 compute-0 ceph-mon[74243]: 8.11 scrub starts
Oct 11 04:29:46 compute-0 ceph-mon[74243]: 8.11 scrub ok
Oct 11 04:29:46 compute-0 ceph-mon[74243]: 5.14 scrub starts
Oct 11 04:29:46 compute-0 ceph-mon[74243]: 5.14 scrub ok
Oct 11 04:29:46 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v220: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 13 B/s, 0 objects/s recovering
Oct 11 04:29:46 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 2.13 deep-scrub starts
Oct 11 04:29:46 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 2.13 deep-scrub ok
Oct 11 04:29:46 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 8.12 scrub starts
Oct 11 04:29:46 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 8.12 scrub ok
Oct 11 04:29:47 compute-0 ceph-mon[74243]: 11.11 scrub starts
Oct 11 04:29:47 compute-0 ceph-mon[74243]: 11.11 scrub ok
Oct 11 04:29:47 compute-0 ceph-mon[74243]: pgmap v220: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 13 B/s, 0 objects/s recovering
Oct 11 04:29:47 compute-0 ceph-mon[74243]: 2.13 deep-scrub starts
Oct 11 04:29:47 compute-0 ceph-mon[74243]: 2.13 deep-scrub ok
Oct 11 04:29:47 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 9.14 deep-scrub starts
Oct 11 04:29:47 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 9.14 deep-scrub ok
Oct 11 04:29:48 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v221: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Oct 11 04:29:48 compute-0 ceph-mon[74243]: 8.12 scrub starts
Oct 11 04:29:48 compute-0 ceph-mon[74243]: 8.12 scrub ok
Oct 11 04:29:48 compute-0 ceph-mon[74243]: 9.14 deep-scrub starts
Oct 11 04:29:48 compute-0 ceph-mon[74243]: 9.14 deep-scrub ok
Oct 11 04:29:48 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 9.1a scrub starts
Oct 11 04:29:48 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 10.16 scrub starts
Oct 11 04:29:48 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 10.16 scrub ok
Oct 11 04:29:48 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 9.1a scrub ok
Oct 11 04:29:48 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Oct 11 04:29:48 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Oct 11 04:29:49 compute-0 ceph-mon[74243]: pgmap v221: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Oct 11 04:29:49 compute-0 ceph-mon[74243]: 9.1a scrub starts
Oct 11 04:29:49 compute-0 ceph-mon[74243]: 10.16 scrub starts
Oct 11 04:29:49 compute-0 ceph-mon[74243]: 10.16 scrub ok
Oct 11 04:29:49 compute-0 ceph-mon[74243]: 9.1a scrub ok
Oct 11 04:29:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Oct 11 04:29:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Oct 11 04:29:49 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:29:50 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v222: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Oct 11 04:29:50 compute-0 ceph-mon[74243]: 3.18 scrub starts
Oct 11 04:29:50 compute-0 ceph-mon[74243]: 3.18 scrub ok
Oct 11 04:29:50 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 11.5 scrub starts
Oct 11 04:29:50 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 11.5 scrub ok
Oct 11 04:29:51 compute-0 ceph-mon[74243]: 7.1c scrub starts
Oct 11 04:29:51 compute-0 ceph-mon[74243]: 7.1c scrub ok
Oct 11 04:29:51 compute-0 ceph-mon[74243]: pgmap v222: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Oct 11 04:29:51 compute-0 ceph-mon[74243]: 11.5 scrub starts
Oct 11 04:29:51 compute-0 ceph-mon[74243]: 11.5 scrub ok
Oct 11 04:29:52 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v223: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Oct 11 04:29:52 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Oct 11 04:29:52 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Oct 11 04:29:52 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Oct 11 04:29:52 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Oct 11 04:29:53 compute-0 ceph-mon[74243]: pgmap v223: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Oct 11 04:29:53 compute-0 ceph-mon[74243]: 2.11 scrub starts
Oct 11 04:29:53 compute-0 ceph-mon[74243]: 2.11 scrub ok
Oct 11 04:29:53 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 10.e deep-scrub starts
Oct 11 04:29:53 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 10.e deep-scrub ok
Oct 11 04:29:54 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v224: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:29:54 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Oct 11 04:29:54 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Oct 11 04:29:54 compute-0 ceph-mon[74243]: 7.2 scrub starts
Oct 11 04:29:54 compute-0 ceph-mon[74243]: 7.2 scrub ok
Oct 11 04:29:54 compute-0 ceph-mon[74243]: 10.e deep-scrub starts
Oct 11 04:29:54 compute-0 ceph-mon[74243]: 10.e deep-scrub ok
Oct 11 04:29:54 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:29:55 compute-0 ceph-mon[74243]: pgmap v224: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:29:55 compute-0 ceph-mon[74243]: 3.1f scrub starts
Oct 11 04:29:55 compute-0 ceph-mon[74243]: 3.1f scrub ok
Oct 11 04:29:55 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Oct 11 04:29:55 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Oct 11 04:29:56 compute-0 ceph-mgr[74542]: [balancer INFO root] Optimize plan auto_2025-10-11_04:29:56
Oct 11 04:29:56 compute-0 ceph-mgr[74542]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:29:56 compute-0 ceph-mgr[74542]: [balancer INFO root] do_upmap
Oct 11 04:29:56 compute-0 ceph-mgr[74542]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.control', 'backups', 'cephfs.cephfs.data', '.mgr', '.rgw.root', 'images', 'vms', 'default.rgw.log']
Oct 11 04:29:56 compute-0 ceph-mgr[74542]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:29:56 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v225: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:29:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:29:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:29:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:29:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:29:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:29:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:29:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:29:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:29:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:29:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:29:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:29:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:29:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:29:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:29:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:29:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:29:56 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 7.1b deep-scrub starts
Oct 11 04:29:56 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 7.1b deep-scrub ok
Oct 11 04:29:57 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Oct 11 04:29:57 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Oct 11 04:29:57 compute-0 ceph-mon[74243]: 7.1 scrub starts
Oct 11 04:29:57 compute-0 ceph-mon[74243]: 7.1 scrub ok
Oct 11 04:29:57 compute-0 ceph-mon[74243]: pgmap v225: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:29:57 compute-0 ceph-mon[74243]: 7.1b deep-scrub starts
Oct 11 04:29:57 compute-0 ceph-mon[74243]: 7.1b deep-scrub ok
Oct 11 04:29:57 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 11.b scrub starts
Oct 11 04:29:57 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 11.b scrub ok
Oct 11 04:29:58 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v226: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:29:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 11.17 scrub starts
Oct 11 04:29:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 11.17 scrub ok
Oct 11 04:29:58 compute-0 ceph-mon[74243]: 11.7 scrub starts
Oct 11 04:29:58 compute-0 ceph-mon[74243]: 11.7 scrub ok
Oct 11 04:29:58 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Oct 11 04:29:58 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Oct 11 04:29:59 compute-0 ceph-mon[74243]: 11.b scrub starts
Oct 11 04:29:59 compute-0 ceph-mon[74243]: 11.b scrub ok
Oct 11 04:29:59 compute-0 ceph-mon[74243]: pgmap v226: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:29:59 compute-0 ceph-mon[74243]: 11.17 scrub starts
Oct 11 04:29:59 compute-0 ceph-mon[74243]: 11.17 scrub ok
Oct 11 04:29:59 compute-0 ceph-mon[74243]: 7.5 scrub starts
Oct 11 04:29:59 compute-0 ceph-mon[74243]: 7.5 scrub ok
Oct 11 04:29:59 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:30:00 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v227: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:30:01 compute-0 ceph-mon[74243]: pgmap v227: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:30:02 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v228: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:30:03 compute-0 ceph-mon[74243]: pgmap v228: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:30:04 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v229: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:30:04 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 11.a scrub starts
Oct 11 04:30:04 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 11.a scrub ok
Oct 11 04:30:04 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 11.14 deep-scrub starts
Oct 11 04:30:04 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 11.14 deep-scrub ok
Oct 11 04:30:04 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 11.d scrub starts
Oct 11 04:30:04 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 11.d scrub ok
Oct 11 04:30:04 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:30:05 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 7.18 deep-scrub starts
Oct 11 04:30:05 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 7.18 deep-scrub ok
Oct 11 04:30:05 compute-0 ceph-mon[74243]: pgmap v229: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:30:05 compute-0 ceph-mon[74243]: 11.a scrub starts
Oct 11 04:30:05 compute-0 ceph-mon[74243]: 11.a scrub ok
Oct 11 04:30:05 compute-0 ceph-mon[74243]: 11.14 deep-scrub starts
Oct 11 04:30:05 compute-0 ceph-mon[74243]: 11.14 deep-scrub ok
Oct 11 04:30:05 compute-0 ceph-mon[74243]: 11.d scrub starts
Oct 11 04:30:05 compute-0 ceph-mon[74243]: 11.d scrub ok
Oct 11 04:30:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:30:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:30:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:30:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:30:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:30:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:30:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:30:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:30:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:30:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:30:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:30:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:30:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 11 04:30:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:30:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:30:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:30:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:30:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:30:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:30:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:30:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:30:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:30:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 04:30:06 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v230: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:30:06 compute-0 ceph-mon[74243]: 7.18 deep-scrub starts
Oct 11 04:30:06 compute-0 ceph-mon[74243]: 7.18 deep-scrub ok
Oct 11 04:30:07 compute-0 ceph-mon[74243]: pgmap v230: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:30:08 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v231: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:30:08 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 11.c scrub starts
Oct 11 04:30:08 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 11.c scrub ok
Oct 11 04:30:09 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 3.1b scrub starts
Oct 11 04:30:09 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 3.1b scrub ok
Oct 11 04:30:09 compute-0 ceph-mon[74243]: pgmap v231: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:30:09 compute-0 ceph-mon[74243]: 11.c scrub starts
Oct 11 04:30:09 compute-0 ceph-mon[74243]: 11.c scrub ok
Oct 11 04:30:09 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:30:10 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v232: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:30:10 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 7.1f scrub starts
Oct 11 04:30:10 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 7.1f scrub ok
Oct 11 04:30:10 compute-0 ceph-mon[74243]: 3.1b scrub starts
Oct 11 04:30:10 compute-0 ceph-mon[74243]: 3.1b scrub ok
Oct 11 04:30:10 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Oct 11 04:30:10 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Oct 11 04:30:11 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 11.13 deep-scrub starts
Oct 11 04:30:11 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 11.13 deep-scrub ok
Oct 11 04:30:11 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 7.e scrub starts
Oct 11 04:30:11 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 7.e scrub ok
Oct 11 04:30:11 compute-0 ceph-mon[74243]: pgmap v232: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:30:11 compute-0 ceph-mon[74243]: 7.1f scrub starts
Oct 11 04:30:11 compute-0 ceph-mon[74243]: 7.1f scrub ok
Oct 11 04:30:11 compute-0 ceph-mon[74243]: 11.9 scrub starts
Oct 11 04:30:11 compute-0 ceph-mon[74243]: 11.9 scrub ok
Oct 11 04:30:12 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v233: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:30:12 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 7.c scrub starts
Oct 11 04:30:12 compute-0 ceph-mon[74243]: 11.13 deep-scrub starts
Oct 11 04:30:12 compute-0 ceph-mon[74243]: 11.13 deep-scrub ok
Oct 11 04:30:12 compute-0 ceph-mon[74243]: 7.e scrub starts
Oct 11 04:30:12 compute-0 ceph-mon[74243]: 7.e scrub ok
Oct 11 04:30:12 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 7.c scrub ok
Oct 11 04:30:13 compute-0 ceph-mon[74243]: pgmap v233: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:30:13 compute-0 ceph-mon[74243]: 7.c scrub starts
Oct 11 04:30:13 compute-0 ceph-mon[74243]: 7.c scrub ok
Oct 11 04:30:14 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v234: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:30:14 compute-0 ceph-mon[74243]: pgmap v234: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:30:14 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:30:15 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 8.10 scrub starts
Oct 11 04:30:15 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 8.10 scrub ok
Oct 11 04:30:15 compute-0 ceph-mon[74243]: 8.10 scrub starts
Oct 11 04:30:15 compute-0 ceph-mon[74243]: 8.10 scrub ok
Oct 11 04:30:16 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v235: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:30:16 compute-0 sudo[107067]: pam_unix(sudo:session): session closed for user root
Oct 11 04:30:16 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 11.16 scrub starts
Oct 11 04:30:16 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 11.16 scrub ok
Oct 11 04:30:16 compute-0 sudo[107365]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhctvmdwoqnqnhzgtonelcrdgskhzllg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157016.465027-128-28696090322674/AnsiballZ_command.py'
Oct 11 04:30:16 compute-0 sudo[107365]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:30:16 compute-0 ceph-mon[74243]: pgmap v235: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:30:16 compute-0 ceph-mon[74243]: 11.16 scrub starts
Oct 11 04:30:16 compute-0 ceph-mon[74243]: 11.16 scrub ok
Oct 11 04:30:17 compute-0 python3.9[107367]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:30:17 compute-0 sudo[107369]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:30:17 compute-0 sudo[107369]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:30:17 compute-0 sudo[107369]: pam_unix(sudo:session): session closed for user root
Oct 11 04:30:17 compute-0 sudo[107399]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:30:17 compute-0 sudo[107399]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:30:17 compute-0 sudo[107399]: pam_unix(sudo:session): session closed for user root
Oct 11 04:30:17 compute-0 sudo[107424]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:30:17 compute-0 sudo[107424]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:30:17 compute-0 sudo[107424]: pam_unix(sudo:session): session closed for user root
Oct 11 04:30:17 compute-0 sudo[107449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 04:30:17 compute-0 sudo[107449]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:30:17 compute-0 sudo[107365]: pam_unix(sudo:session): session closed for user root
Oct 11 04:30:17 compute-0 sudo[107449]: pam_unix(sudo:session): session closed for user root
Oct 11 04:30:17 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:30:17 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:30:17 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:30:17 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:30:17 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:30:17 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:30:17 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev de7a4549-61d6-4937-b174-cee58f55d7d2 does not exist
Oct 11 04:30:17 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev 593d15a6-9cd1-4389-bfae-0901bd23e822 does not exist
Oct 11 04:30:17 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev 26d92004-5337-4c92-964b-02762a118a9e does not exist
Oct 11 04:30:17 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:30:17 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:30:17 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:30:17 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:30:17 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:30:17 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:30:18 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:30:18 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:30:18 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:30:18 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:30:18 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:30:18 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:30:18 compute-0 sudo[107669]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:30:18 compute-0 sudo[107669]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:30:18 compute-0 sudo[107669]: pam_unix(sudo:session): session closed for user root
Oct 11 04:30:18 compute-0 sudo[107721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:30:18 compute-0 sudo[107721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:30:18 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v236: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:30:18 compute-0 sudo[107721]: pam_unix(sudo:session): session closed for user root
Oct 11 04:30:18 compute-0 sudo[107759]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:30:18 compute-0 sudo[107759]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:30:18 compute-0 sudo[107759]: pam_unix(sudo:session): session closed for user root
Oct 11 04:30:18 compute-0 sudo[107784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 04:30:18 compute-0 sudo[107784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:30:18 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Oct 11 04:30:18 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Oct 11 04:30:18 compute-0 podman[107884]: 2025-10-11 04:30:18.633477083 +0000 UTC m=+0.054666233 container create 0ba73983863ab2f98f4fac7c02e742c5cc7cdfa30bd7c6b550ccbf36f309a35a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_zhukovsky, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:30:18 compute-0 systemd[1]: Started libpod-conmon-0ba73983863ab2f98f4fac7c02e742c5cc7cdfa30bd7c6b550ccbf36f309a35a.scope.
Oct 11 04:30:18 compute-0 podman[107884]: 2025-10-11 04:30:18.603317412 +0000 UTC m=+0.024506612 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:30:18 compute-0 sudo[107938]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wcibqvyqfjxzhkmhnoyfxymvdoczobtl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157018.003717-136-211355303625164/AnsiballZ_selinux.py'
Oct 11 04:30:18 compute-0 sudo[107938]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:30:18 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:30:18 compute-0 podman[107884]: 2025-10-11 04:30:18.745042814 +0000 UTC m=+0.166231974 container init 0ba73983863ab2f98f4fac7c02e742c5cc7cdfa30bd7c6b550ccbf36f309a35a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 11 04:30:18 compute-0 podman[107884]: 2025-10-11 04:30:18.753285049 +0000 UTC m=+0.174474179 container start 0ba73983863ab2f98f4fac7c02e742c5cc7cdfa30bd7c6b550ccbf36f309a35a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 11 04:30:18 compute-0 podman[107884]: 2025-10-11 04:30:18.759128495 +0000 UTC m=+0.180317645 container attach 0ba73983863ab2f98f4fac7c02e742c5cc7cdfa30bd7c6b550ccbf36f309a35a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_zhukovsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 11 04:30:18 compute-0 gifted_zhukovsky[107939]: 167 167
Oct 11 04:30:18 compute-0 systemd[1]: libpod-0ba73983863ab2f98f4fac7c02e742c5cc7cdfa30bd7c6b550ccbf36f309a35a.scope: Deactivated successfully.
Oct 11 04:30:18 compute-0 conmon[107939]: conmon 0ba73983863ab2f98f4f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0ba73983863ab2f98f4fac7c02e742c5cc7cdfa30bd7c6b550ccbf36f309a35a.scope/container/memory.events
Oct 11 04:30:18 compute-0 podman[107884]: 2025-10-11 04:30:18.762322165 +0000 UTC m=+0.183511325 container died 0ba73983863ab2f98f4fac7c02e742c5cc7cdfa30bd7c6b550ccbf36f309a35a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:30:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-86ca8b14841afccd4ae6dea42fc09244007b6f553c006676b2c68dcc744d9e21-merged.mount: Deactivated successfully.
Oct 11 04:30:18 compute-0 podman[107884]: 2025-10-11 04:30:18.810823413 +0000 UTC m=+0.232012533 container remove 0ba73983863ab2f98f4fac7c02e742c5cc7cdfa30bd7c6b550ccbf36f309a35a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_zhukovsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 11 04:30:18 compute-0 systemd[1]: libpod-conmon-0ba73983863ab2f98f4fac7c02e742c5cc7cdfa30bd7c6b550ccbf36f309a35a.scope: Deactivated successfully.
Oct 11 04:30:18 compute-0 podman[107965]: 2025-10-11 04:30:18.998708415 +0000 UTC m=+0.073737499 container create e2b84538d76637c4088edda20557f60a6c7c13248589bb166c2d078fca6937fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 04:30:19 compute-0 ceph-mon[74243]: pgmap v236: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:30:19 compute-0 ceph-mon[74243]: 11.1d scrub starts
Oct 11 04:30:19 compute-0 ceph-mon[74243]: 11.1d scrub ok
Oct 11 04:30:19 compute-0 python3.9[107943]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Oct 11 04:30:19 compute-0 sudo[107938]: pam_unix(sudo:session): session closed for user root
Oct 11 04:30:19 compute-0 podman[107965]: 2025-10-11 04:30:18.969892157 +0000 UTC m=+0.044921291 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:30:19 compute-0 systemd[1]: Started libpod-conmon-e2b84538d76637c4088edda20557f60a6c7c13248589bb166c2d078fca6937fb.scope.
Oct 11 04:30:19 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:30:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a669fe1b6dfade2c90874130bd21fa8c1222dec720fd4e0ee2b287032cca58f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:30:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a669fe1b6dfade2c90874130bd21fa8c1222dec720fd4e0ee2b287032cca58f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:30:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a669fe1b6dfade2c90874130bd21fa8c1222dec720fd4e0ee2b287032cca58f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:30:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a669fe1b6dfade2c90874130bd21fa8c1222dec720fd4e0ee2b287032cca58f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:30:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a669fe1b6dfade2c90874130bd21fa8c1222dec720fd4e0ee2b287032cca58f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:30:19 compute-0 podman[107965]: 2025-10-11 04:30:19.124297125 +0000 UTC m=+0.199326269 container init e2b84538d76637c4088edda20557f60a6c7c13248589bb166c2d078fca6937fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_ritchie, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:30:19 compute-0 podman[107965]: 2025-10-11 04:30:19.138529109 +0000 UTC m=+0.213558153 container start e2b84538d76637c4088edda20557f60a6c7c13248589bb166c2d078fca6937fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_ritchie, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:30:19 compute-0 podman[107965]: 2025-10-11 04:30:19.142762165 +0000 UTC m=+0.217791309 container attach e2b84538d76637c4088edda20557f60a6c7c13248589bb166c2d078fca6937fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_ritchie, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 11 04:30:19 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 11.3 scrub starts
Oct 11 04:30:19 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 11.3 scrub ok
Oct 11 04:30:19 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:30:19 compute-0 sudo[108141]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agykdgkyidkybyktkfxppikjsbjghiva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157019.5509794-147-220276002371040/AnsiballZ_command.py'
Oct 11 04:30:19 compute-0 sudo[108141]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:30:20 compute-0 python3.9[108145]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Oct 11 04:30:20 compute-0 sudo[108141]: pam_unix(sudo:session): session closed for user root
Oct 11 04:30:20 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v237: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:30:20 compute-0 intelligent_ritchie[107981]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:30:20 compute-0 intelligent_ritchie[107981]: --> relative data size: 1.0
Oct 11 04:30:20 compute-0 intelligent_ritchie[107981]: --> All data devices are unavailable
Oct 11 04:30:20 compute-0 systemd[1]: libpod-e2b84538d76637c4088edda20557f60a6c7c13248589bb166c2d078fca6937fb.scope: Deactivated successfully.
Oct 11 04:30:20 compute-0 podman[107965]: 2025-10-11 04:30:20.24077404 +0000 UTC m=+1.315803094 container died e2b84538d76637c4088edda20557f60a6c7c13248589bb166c2d078fca6937fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 04:30:20 compute-0 systemd[1]: libpod-e2b84538d76637c4088edda20557f60a6c7c13248589bb166c2d078fca6937fb.scope: Consumed 1.016s CPU time.
Oct 11 04:30:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-2a669fe1b6dfade2c90874130bd21fa8c1222dec720fd4e0ee2b287032cca58f-merged.mount: Deactivated successfully.
Oct 11 04:30:20 compute-0 podman[107965]: 2025-10-11 04:30:20.306432197 +0000 UTC m=+1.381461241 container remove e2b84538d76637c4088edda20557f60a6c7c13248589bb166c2d078fca6937fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_ritchie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:30:20 compute-0 systemd[1]: libpod-conmon-e2b84538d76637c4088edda20557f60a6c7c13248589bb166c2d078fca6937fb.scope: Deactivated successfully.
Oct 11 04:30:20 compute-0 sudo[107784]: pam_unix(sudo:session): session closed for user root
Oct 11 04:30:20 compute-0 sudo[108271]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:30:20 compute-0 sudo[108271]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:30:20 compute-0 sudo[108271]: pam_unix(sudo:session): session closed for user root
Oct 11 04:30:20 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Oct 11 04:30:20 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Oct 11 04:30:20 compute-0 sudo[108319]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:30:20 compute-0 sudo[108319]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:30:20 compute-0 sudo[108319]: pam_unix(sudo:session): session closed for user root
Oct 11 04:30:20 compute-0 sudo[108373]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgvdkowgvfbpxltnfspvtohpzhqrzozr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157020.2308729-155-165157483125848/AnsiballZ_file.py'
Oct 11 04:30:20 compute-0 sudo[108373]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:30:20 compute-0 sudo[108374]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:30:20 compute-0 sudo[108374]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:30:20 compute-0 sudo[108374]: pam_unix(sudo:session): session closed for user root
Oct 11 04:30:20 compute-0 sudo[108401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -- lvm list --format json
Oct 11 04:30:20 compute-0 sudo[108401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:30:20 compute-0 python3.9[108393]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:30:20 compute-0 sudo[108373]: pam_unix(sudo:session): session closed for user root
Oct 11 04:30:20 compute-0 podman[108498]: 2025-10-11 04:30:20.961057531 +0000 UTC m=+0.045216678 container create 489bd9658922786b63494752ed9b3b792b36ee167af54e0f9433f10ff2c272da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_spence, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:30:21 compute-0 systemd[1]: Started libpod-conmon-489bd9658922786b63494752ed9b3b792b36ee167af54e0f9433f10ff2c272da.scope.
Oct 11 04:30:21 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:30:21 compute-0 podman[108498]: 2025-10-11 04:30:20.94014917 +0000 UTC m=+0.024308397 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:30:21 compute-0 podman[108498]: 2025-10-11 04:30:21.047895916 +0000 UTC m=+0.132055103 container init 489bd9658922786b63494752ed9b3b792b36ee167af54e0f9433f10ff2c272da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_spence, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:30:21 compute-0 podman[108498]: 2025-10-11 04:30:21.05370354 +0000 UTC m=+0.137862677 container start 489bd9658922786b63494752ed9b3b792b36ee167af54e0f9433f10ff2c272da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_spence, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:30:21 compute-0 podman[108498]: 2025-10-11 04:30:21.056615463 +0000 UTC m=+0.140774660 container attach 489bd9658922786b63494752ed9b3b792b36ee167af54e0f9433f10ff2c272da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_spence, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:30:21 compute-0 inspiring_spence[108536]: 167 167
Oct 11 04:30:21 compute-0 systemd[1]: libpod-489bd9658922786b63494752ed9b3b792b36ee167af54e0f9433f10ff2c272da.scope: Deactivated successfully.
Oct 11 04:30:21 compute-0 podman[108498]: 2025-10-11 04:30:21.061151736 +0000 UTC m=+0.145310893 container died 489bd9658922786b63494752ed9b3b792b36ee167af54e0f9433f10ff2c272da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_spence, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:30:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-61bf921d678bb6d67c25c99478436b56bf6ea7f02c9660059720813c10c05971-merged.mount: Deactivated successfully.
Oct 11 04:30:21 compute-0 podman[108498]: 2025-10-11 04:30:21.097296077 +0000 UTC m=+0.181455224 container remove 489bd9658922786b63494752ed9b3b792b36ee167af54e0f9433f10ff2c272da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_spence, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 11 04:30:21 compute-0 systemd[1]: libpod-conmon-489bd9658922786b63494752ed9b3b792b36ee167af54e0f9433f10ff2c272da.scope: Deactivated successfully.
Oct 11 04:30:21 compute-0 ceph-mon[74243]: 11.3 scrub starts
Oct 11 04:30:21 compute-0 ceph-mon[74243]: 11.3 scrub ok
Oct 11 04:30:21 compute-0 ceph-mon[74243]: pgmap v237: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:30:21 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Oct 11 04:30:21 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Oct 11 04:30:21 compute-0 podman[108588]: 2025-10-11 04:30:21.299861705 +0000 UTC m=+0.059760170 container create 0a3cdb2b32c7b36ba7c50194a2d7e34212c6b391cc8e273eb70a3e9b97f413ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_nash, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 11 04:30:21 compute-0 systemd[1]: Started libpod-conmon-0a3cdb2b32c7b36ba7c50194a2d7e34212c6b391cc8e273eb70a3e9b97f413ef.scope.
Oct 11 04:30:21 compute-0 podman[108588]: 2025-10-11 04:30:21.270528054 +0000 UTC m=+0.030426569 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:30:21 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:30:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb0605011173072445a4b583eebe25f3c2e30ddaffd40db95b89cf766d5a5a8a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:30:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb0605011173072445a4b583eebe25f3c2e30ddaffd40db95b89cf766d5a5a8a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:30:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb0605011173072445a4b583eebe25f3c2e30ddaffd40db95b89cf766d5a5a8a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:30:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb0605011173072445a4b583eebe25f3c2e30ddaffd40db95b89cf766d5a5a8a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:30:21 compute-0 podman[108588]: 2025-10-11 04:30:21.404926664 +0000 UTC m=+0.164825119 container init 0a3cdb2b32c7b36ba7c50194a2d7e34212c6b391cc8e273eb70a3e9b97f413ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_nash, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:30:21 compute-0 podman[108588]: 2025-10-11 04:30:21.41763783 +0000 UTC m=+0.177536255 container start 0a3cdb2b32c7b36ba7c50194a2d7e34212c6b391cc8e273eb70a3e9b97f413ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_nash, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:30:21 compute-0 podman[108588]: 2025-10-11 04:30:21.4212309 +0000 UTC m=+0.181129335 container attach 0a3cdb2b32c7b36ba7c50194a2d7e34212c6b391cc8e273eb70a3e9b97f413ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_nash, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 11 04:30:21 compute-0 sudo[108678]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffrwqwplztyaecgrtevwqoqieplifxas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157020.9441261-163-62574914509259/AnsiballZ_mount.py'
Oct 11 04:30:21 compute-0 sudo[108678]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:30:21 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 11.8 scrub starts
Oct 11 04:30:21 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 11.8 scrub ok
Oct 11 04:30:21 compute-0 python3.9[108680]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Oct 11 04:30:21 compute-0 sudo[108678]: pam_unix(sudo:session): session closed for user root
Oct 11 04:30:22 compute-0 kind_nash[108646]: {
Oct 11 04:30:22 compute-0 kind_nash[108646]:     "0": [
Oct 11 04:30:22 compute-0 kind_nash[108646]:         {
Oct 11 04:30:22 compute-0 kind_nash[108646]:             "devices": [
Oct 11 04:30:22 compute-0 kind_nash[108646]:                 "/dev/loop3"
Oct 11 04:30:22 compute-0 kind_nash[108646]:             ],
Oct 11 04:30:22 compute-0 kind_nash[108646]:             "lv_name": "ceph_lv0",
Oct 11 04:30:22 compute-0 kind_nash[108646]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:30:22 compute-0 kind_nash[108646]:             "lv_size": "21470642176",
Oct 11 04:30:22 compute-0 kind_nash[108646]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29ed28f5-c2da-4c6f-bb64-dc7391248f4a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:30:22 compute-0 kind_nash[108646]:             "lv_uuid": "SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf",
Oct 11 04:30:22 compute-0 kind_nash[108646]:             "name": "ceph_lv0",
Oct 11 04:30:22 compute-0 kind_nash[108646]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:30:22 compute-0 kind_nash[108646]:             "tags": {
Oct 11 04:30:22 compute-0 kind_nash[108646]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:30:22 compute-0 kind_nash[108646]:                 "ceph.block_uuid": "SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf",
Oct 11 04:30:22 compute-0 kind_nash[108646]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:30:22 compute-0 kind_nash[108646]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:30:22 compute-0 kind_nash[108646]:                 "ceph.cluster_name": "ceph",
Oct 11 04:30:22 compute-0 kind_nash[108646]:                 "ceph.crush_device_class": "",
Oct 11 04:30:22 compute-0 kind_nash[108646]:                 "ceph.encrypted": "0",
Oct 11 04:30:22 compute-0 kind_nash[108646]:                 "ceph.osd_fsid": "29ed28f5-c2da-4c6f-bb64-dc7391248f4a",
Oct 11 04:30:22 compute-0 kind_nash[108646]:                 "ceph.osd_id": "0",
Oct 11 04:30:22 compute-0 kind_nash[108646]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:30:22 compute-0 kind_nash[108646]:                 "ceph.type": "block",
Oct 11 04:30:22 compute-0 kind_nash[108646]:                 "ceph.vdo": "0"
Oct 11 04:30:22 compute-0 kind_nash[108646]:             },
Oct 11 04:30:22 compute-0 kind_nash[108646]:             "type": "block",
Oct 11 04:30:22 compute-0 kind_nash[108646]:             "vg_name": "ceph_vg0"
Oct 11 04:30:22 compute-0 kind_nash[108646]:         }
Oct 11 04:30:22 compute-0 kind_nash[108646]:     ],
Oct 11 04:30:22 compute-0 kind_nash[108646]:     "1": [
Oct 11 04:30:22 compute-0 kind_nash[108646]:         {
Oct 11 04:30:22 compute-0 kind_nash[108646]:             "devices": [
Oct 11 04:30:22 compute-0 kind_nash[108646]:                 "/dev/loop4"
Oct 11 04:30:22 compute-0 kind_nash[108646]:             ],
Oct 11 04:30:22 compute-0 kind_nash[108646]:             "lv_name": "ceph_lv1",
Oct 11 04:30:22 compute-0 kind_nash[108646]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:30:22 compute-0 kind_nash[108646]:             "lv_size": "21470642176",
Oct 11 04:30:22 compute-0 kind_nash[108646]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=235849fc-4683-43e5-9b6a-a0d6f8d1cee8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:30:22 compute-0 kind_nash[108646]:             "lv_uuid": "N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol",
Oct 11 04:30:22 compute-0 kind_nash[108646]:             "name": "ceph_lv1",
Oct 11 04:30:22 compute-0 kind_nash[108646]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:30:22 compute-0 kind_nash[108646]:             "tags": {
Oct 11 04:30:22 compute-0 kind_nash[108646]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:30:22 compute-0 kind_nash[108646]:                 "ceph.block_uuid": "N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol",
Oct 11 04:30:22 compute-0 kind_nash[108646]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:30:22 compute-0 kind_nash[108646]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:30:22 compute-0 kind_nash[108646]:                 "ceph.cluster_name": "ceph",
Oct 11 04:30:22 compute-0 kind_nash[108646]:                 "ceph.crush_device_class": "",
Oct 11 04:30:22 compute-0 kind_nash[108646]:                 "ceph.encrypted": "0",
Oct 11 04:30:22 compute-0 kind_nash[108646]:                 "ceph.osd_fsid": "235849fc-4683-43e5-9b6a-a0d6f8d1cee8",
Oct 11 04:30:22 compute-0 kind_nash[108646]:                 "ceph.osd_id": "1",
Oct 11 04:30:22 compute-0 kind_nash[108646]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:30:22 compute-0 kind_nash[108646]:                 "ceph.type": "block",
Oct 11 04:30:22 compute-0 kind_nash[108646]:                 "ceph.vdo": "0"
Oct 11 04:30:22 compute-0 kind_nash[108646]:             },
Oct 11 04:30:22 compute-0 kind_nash[108646]:             "type": "block",
Oct 11 04:30:22 compute-0 kind_nash[108646]:             "vg_name": "ceph_vg1"
Oct 11 04:30:22 compute-0 kind_nash[108646]:         }
Oct 11 04:30:22 compute-0 kind_nash[108646]:     ],
Oct 11 04:30:22 compute-0 kind_nash[108646]:     "2": [
Oct 11 04:30:22 compute-0 kind_nash[108646]:         {
Oct 11 04:30:22 compute-0 kind_nash[108646]:             "devices": [
Oct 11 04:30:22 compute-0 kind_nash[108646]:                 "/dev/loop5"
Oct 11 04:30:22 compute-0 kind_nash[108646]:             ],
Oct 11 04:30:22 compute-0 kind_nash[108646]:             "lv_name": "ceph_lv2",
Oct 11 04:30:22 compute-0 kind_nash[108646]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:30:22 compute-0 kind_nash[108646]:             "lv_size": "21470642176",
Oct 11 04:30:22 compute-0 kind_nash[108646]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=30486846-2cc7-4787-8e36-0fb42ef328c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:30:22 compute-0 kind_nash[108646]:             "lv_uuid": "HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a",
Oct 11 04:30:22 compute-0 kind_nash[108646]:             "name": "ceph_lv2",
Oct 11 04:30:22 compute-0 kind_nash[108646]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:30:22 compute-0 kind_nash[108646]:             "tags": {
Oct 11 04:30:22 compute-0 kind_nash[108646]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:30:22 compute-0 kind_nash[108646]:                 "ceph.block_uuid": "HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a",
Oct 11 04:30:22 compute-0 kind_nash[108646]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:30:22 compute-0 kind_nash[108646]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:30:22 compute-0 kind_nash[108646]:                 "ceph.cluster_name": "ceph",
Oct 11 04:30:22 compute-0 kind_nash[108646]:                 "ceph.crush_device_class": "",
Oct 11 04:30:22 compute-0 kind_nash[108646]:                 "ceph.encrypted": "0",
Oct 11 04:30:22 compute-0 kind_nash[108646]:                 "ceph.osd_fsid": "30486846-2cc7-4787-8e36-0fb42ef328c5",
Oct 11 04:30:22 compute-0 kind_nash[108646]:                 "ceph.osd_id": "2",
Oct 11 04:30:22 compute-0 kind_nash[108646]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:30:22 compute-0 kind_nash[108646]:                 "ceph.type": "block",
Oct 11 04:30:22 compute-0 kind_nash[108646]:                 "ceph.vdo": "0"
Oct 11 04:30:22 compute-0 kind_nash[108646]:             },
Oct 11 04:30:22 compute-0 kind_nash[108646]:             "type": "block",
Oct 11 04:30:22 compute-0 kind_nash[108646]:             "vg_name": "ceph_vg2"
Oct 11 04:30:22 compute-0 kind_nash[108646]:         }
Oct 11 04:30:22 compute-0 kind_nash[108646]:     ]
Oct 11 04:30:22 compute-0 kind_nash[108646]: }
Oct 11 04:30:22 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v238: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:30:22 compute-0 systemd[1]: libpod-0a3cdb2b32c7b36ba7c50194a2d7e34212c6b391cc8e273eb70a3e9b97f413ef.scope: Deactivated successfully.
Oct 11 04:30:22 compute-0 podman[108588]: 2025-10-11 04:30:22.132107137 +0000 UTC m=+0.892005562 container died 0a3cdb2b32c7b36ba7c50194a2d7e34212c6b391cc8e273eb70a3e9b97f413ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_nash, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:30:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-bb0605011173072445a4b583eebe25f3c2e30ddaffd40db95b89cf766d5a5a8a-merged.mount: Deactivated successfully.
Oct 11 04:30:22 compute-0 ceph-mon[74243]: 11.2 scrub starts
Oct 11 04:30:22 compute-0 ceph-mon[74243]: 11.2 scrub ok
Oct 11 04:30:22 compute-0 ceph-mon[74243]: 5.11 scrub starts
Oct 11 04:30:22 compute-0 ceph-mon[74243]: 5.11 scrub ok
Oct 11 04:30:22 compute-0 podman[108588]: 2025-10-11 04:30:22.199487056 +0000 UTC m=+0.959385521 container remove 0a3cdb2b32c7b36ba7c50194a2d7e34212c6b391cc8e273eb70a3e9b97f413ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_nash, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:30:22 compute-0 systemd[1]: libpod-conmon-0a3cdb2b32c7b36ba7c50194a2d7e34212c6b391cc8e273eb70a3e9b97f413ef.scope: Deactivated successfully.
Oct 11 04:30:22 compute-0 sudo[108401]: pam_unix(sudo:session): session closed for user root
Oct 11 04:30:22 compute-0 sudo[108722]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:30:22 compute-0 sudo[108722]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:30:22 compute-0 sudo[108722]: pam_unix(sudo:session): session closed for user root
Oct 11 04:30:22 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 8.14 scrub starts
Oct 11 04:30:22 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 8.14 scrub ok
Oct 11 04:30:22 compute-0 sudo[108783]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:30:22 compute-0 sudo[108783]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:30:22 compute-0 sudo[108783]: pam_unix(sudo:session): session closed for user root
Oct 11 04:30:22 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 8.2 scrub starts
Oct 11 04:30:22 compute-0 sudo[108845]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:30:22 compute-0 sudo[108845]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:30:22 compute-0 sudo[108845]: pam_unix(sudo:session): session closed for user root
Oct 11 04:30:22 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 8.2 scrub ok
Oct 11 04:30:22 compute-0 sudo[108892]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -- raw list --format json
Oct 11 04:30:22 compute-0 sudo[108892]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:30:22 compute-0 sudo[108947]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hapmuxxagqalzcdqwimwitphtuislvxd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157022.319334-191-45727723255272/AnsiballZ_file.py'
Oct 11 04:30:22 compute-0 sudo[108947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:30:22 compute-0 python3.9[108949]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:30:22 compute-0 sudo[108947]: pam_unix(sudo:session): session closed for user root
Oct 11 04:30:23 compute-0 podman[109015]: 2025-10-11 04:30:23.129515994 +0000 UTC m=+0.070595031 container create ffda1d169e73e939e768ba6c5614fa2718e7d7afe76fa152c7017ca564edcb57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_sinoussi, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 11 04:30:23 compute-0 systemd[1]: Started libpod-conmon-ffda1d169e73e939e768ba6c5614fa2718e7d7afe76fa152c7017ca564edcb57.scope.
Oct 11 04:30:23 compute-0 ceph-mon[74243]: 11.8 scrub starts
Oct 11 04:30:23 compute-0 ceph-mon[74243]: 11.8 scrub ok
Oct 11 04:30:23 compute-0 ceph-mon[74243]: pgmap v238: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:30:23 compute-0 ceph-mon[74243]: 8.14 scrub starts
Oct 11 04:30:23 compute-0 ceph-mon[74243]: 8.14 scrub ok
Oct 11 04:30:23 compute-0 podman[109015]: 2025-10-11 04:30:23.096388208 +0000 UTC m=+0.037467305 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:30:23 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:30:23 compute-0 podman[109015]: 2025-10-11 04:30:23.25375797 +0000 UTC m=+0.194837047 container init ffda1d169e73e939e768ba6c5614fa2718e7d7afe76fa152c7017ca564edcb57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_sinoussi, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:30:23 compute-0 podman[109015]: 2025-10-11 04:30:23.263732989 +0000 UTC m=+0.204811986 container start ffda1d169e73e939e768ba6c5614fa2718e7d7afe76fa152c7017ca564edcb57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_sinoussi, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:30:23 compute-0 podman[109015]: 2025-10-11 04:30:23.267076712 +0000 UTC m=+0.208155749 container attach ffda1d169e73e939e768ba6c5614fa2718e7d7afe76fa152c7017ca564edcb57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_sinoussi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 11 04:30:23 compute-0 infallible_sinoussi[109075]: 167 167
Oct 11 04:30:23 compute-0 systemd[1]: libpod-ffda1d169e73e939e768ba6c5614fa2718e7d7afe76fa152c7017ca564edcb57.scope: Deactivated successfully.
Oct 11 04:30:23 compute-0 podman[109015]: 2025-10-11 04:30:23.274116647 +0000 UTC m=+0.215195704 container died ffda1d169e73e939e768ba6c5614fa2718e7d7afe76fa152c7017ca564edcb57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_sinoussi, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:30:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-f2594832b80f14be85c5c2eafe51d163454f0c4c6d1698c1d3d839aac58a390e-merged.mount: Deactivated successfully.
Oct 11 04:30:23 compute-0 podman[109015]: 2025-10-11 04:30:23.325292443 +0000 UTC m=+0.266371450 container remove ffda1d169e73e939e768ba6c5614fa2718e7d7afe76fa152c7017ca564edcb57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_sinoussi, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 04:30:23 compute-0 systemd[1]: libpod-conmon-ffda1d169e73e939e768ba6c5614fa2718e7d7afe76fa152c7017ca564edcb57.scope: Deactivated successfully.
Oct 11 04:30:23 compute-0 sudo[109177]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axdopudowlxhfboghkfwzpkhcvbqdokm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157023.129678-199-108957476576625/AnsiballZ_stat.py'
Oct 11 04:30:23 compute-0 sudo[109177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:30:23 compute-0 podman[109183]: 2025-10-11 04:30:23.566164176 +0000 UTC m=+0.059418032 container create 41ec1d48715f8b7957eac281d15a26201ece0b80455f5601094c4d7e70445c6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_bouman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:30:23 compute-0 systemd[1]: Started libpod-conmon-41ec1d48715f8b7957eac281d15a26201ece0b80455f5601094c4d7e70445c6d.scope.
Oct 11 04:30:23 compute-0 podman[109183]: 2025-10-11 04:30:23.539415619 +0000 UTC m=+0.032669515 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:30:23 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:30:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/612e42db8659f827c39dc4d8295ff2d1f64657100d05670f472a82da79bdda34/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:30:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/612e42db8659f827c39dc4d8295ff2d1f64657100d05670f472a82da79bdda34/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:30:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/612e42db8659f827c39dc4d8295ff2d1f64657100d05670f472a82da79bdda34/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:30:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/612e42db8659f827c39dc4d8295ff2d1f64657100d05670f472a82da79bdda34/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:30:23 compute-0 podman[109183]: 2025-10-11 04:30:23.680233189 +0000 UTC m=+0.173487095 container init 41ec1d48715f8b7957eac281d15a26201ece0b80455f5601094c4d7e70445c6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_bouman, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 11 04:30:23 compute-0 podman[109183]: 2025-10-11 04:30:23.687152371 +0000 UTC m=+0.180406257 container start 41ec1d48715f8b7957eac281d15a26201ece0b80455f5601094c4d7e70445c6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_bouman, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:30:23 compute-0 podman[109183]: 2025-10-11 04:30:23.691631013 +0000 UTC m=+0.184884909 container attach 41ec1d48715f8b7957eac281d15a26201ece0b80455f5601094c4d7e70445c6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 11 04:30:23 compute-0 python3.9[109185]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:30:23 compute-0 sudo[109177]: pam_unix(sudo:session): session closed for user root
Oct 11 04:30:24 compute-0 sudo[109281]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbzpfrffwdqbuxrdkjnoqgbnjjkengxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157023.129678-199-108957476576625/AnsiballZ_file.py'
Oct 11 04:30:24 compute-0 sudo[109281]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:30:24 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v239: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:30:24 compute-0 ceph-mon[74243]: 8.2 scrub starts
Oct 11 04:30:24 compute-0 ceph-mon[74243]: 8.2 scrub ok
Oct 11 04:30:24 compute-0 python3.9[109283]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:30:24 compute-0 sudo[109281]: pam_unix(sudo:session): session closed for user root
Oct 11 04:30:24 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 11.10 scrub starts
Oct 11 04:30:24 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 11.10 scrub ok
Oct 11 04:30:24 compute-0 busy_bouman[109201]: {
Oct 11 04:30:24 compute-0 busy_bouman[109201]:     "235849fc-4683-43e5-9b6a-a0d6f8d1cee8": {
Oct 11 04:30:24 compute-0 busy_bouman[109201]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:30:24 compute-0 busy_bouman[109201]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:30:24 compute-0 busy_bouman[109201]:         "osd_id": 1,
Oct 11 04:30:24 compute-0 busy_bouman[109201]:         "osd_uuid": "235849fc-4683-43e5-9b6a-a0d6f8d1cee8",
Oct 11 04:30:24 compute-0 busy_bouman[109201]:         "type": "bluestore"
Oct 11 04:30:24 compute-0 busy_bouman[109201]:     },
Oct 11 04:30:24 compute-0 busy_bouman[109201]:     "29ed28f5-c2da-4c6f-bb64-dc7391248f4a": {
Oct 11 04:30:24 compute-0 busy_bouman[109201]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:30:24 compute-0 busy_bouman[109201]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:30:24 compute-0 busy_bouman[109201]:         "osd_id": 0,
Oct 11 04:30:24 compute-0 busy_bouman[109201]:         "osd_uuid": "29ed28f5-c2da-4c6f-bb64-dc7391248f4a",
Oct 11 04:30:24 compute-0 busy_bouman[109201]:         "type": "bluestore"
Oct 11 04:30:24 compute-0 busy_bouman[109201]:     },
Oct 11 04:30:24 compute-0 busy_bouman[109201]:     "30486846-2cc7-4787-8e36-0fb42ef328c5": {
Oct 11 04:30:24 compute-0 busy_bouman[109201]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:30:24 compute-0 busy_bouman[109201]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:30:24 compute-0 busy_bouman[109201]:         "osd_id": 2,
Oct 11 04:30:24 compute-0 busy_bouman[109201]:         "osd_uuid": "30486846-2cc7-4787-8e36-0fb42ef328c5",
Oct 11 04:30:24 compute-0 busy_bouman[109201]:         "type": "bluestore"
Oct 11 04:30:24 compute-0 busy_bouman[109201]:     }
Oct 11 04:30:24 compute-0 busy_bouman[109201]: }
Oct 11 04:30:24 compute-0 systemd[1]: libpod-41ec1d48715f8b7957eac281d15a26201ece0b80455f5601094c4d7e70445c6d.scope: Deactivated successfully.
Oct 11 04:30:24 compute-0 systemd[1]: libpod-41ec1d48715f8b7957eac281d15a26201ece0b80455f5601094c4d7e70445c6d.scope: Consumed 1.066s CPU time.
Oct 11 04:30:24 compute-0 podman[109337]: 2025-10-11 04:30:24.807442452 +0000 UTC m=+0.038586063 container died 41ec1d48715f8b7957eac281d15a26201ece0b80455f5601094c4d7e70445c6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_bouman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 11 04:30:24 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:30:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-612e42db8659f827c39dc4d8295ff2d1f64657100d05670f472a82da79bdda34-merged.mount: Deactivated successfully.
Oct 11 04:30:24 compute-0 podman[109337]: 2025-10-11 04:30:24.880253706 +0000 UTC m=+0.111397217 container remove 41ec1d48715f8b7957eac281d15a26201ece0b80455f5601094c4d7e70445c6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_bouman, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:30:24 compute-0 systemd[1]: libpod-conmon-41ec1d48715f8b7957eac281d15a26201ece0b80455f5601094c4d7e70445c6d.scope: Deactivated successfully.
Oct 11 04:30:24 compute-0 sudo[108892]: pam_unix(sudo:session): session closed for user root
Oct 11 04:30:24 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:30:24 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:30:24 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:30:24 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:30:24 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev 6411b57f-6c47-4cc7-8d12-52a381fc1952 does not exist
Oct 11 04:30:24 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev 67e17b0a-36cc-4ee6-a5e0-ea0b9ace4f84 does not exist
Oct 11 04:30:25 compute-0 sudo[109403]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:30:25 compute-0 sudo[109403]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:30:25 compute-0 sudo[109403]: pam_unix(sudo:session): session closed for user root
Oct 11 04:30:25 compute-0 sudo[109428]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 04:30:25 compute-0 sudo[109428]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:30:25 compute-0 sudo[109428]: pam_unix(sudo:session): session closed for user root
Oct 11 04:30:25 compute-0 ceph-mon[74243]: pgmap v239: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:30:25 compute-0 ceph-mon[74243]: 11.10 scrub starts
Oct 11 04:30:25 compute-0 ceph-mon[74243]: 11.10 scrub ok
Oct 11 04:30:25 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:30:25 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:30:25 compute-0 sudo[109526]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eseettrgrmjxsdiheetjpgvncfzvehmg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157024.8315525-223-89191657576640/AnsiballZ_getent.py'
Oct 11 04:30:25 compute-0 sudo[109526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:30:25 compute-0 python3.9[109528]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Oct 11 04:30:25 compute-0 sudo[109526]: pam_unix(sudo:session): session closed for user root
Oct 11 04:30:26 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v240: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:30:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:30:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:30:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:30:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:30:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:30:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:30:26 compute-0 sudo[109679]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aygkgchqywtedtaenfczsybszuikucvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157025.8657906-233-133486113072211/AnsiballZ_getent.py'
Oct 11 04:30:26 compute-0 sudo[109679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:30:26 compute-0 python3.9[109681]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Oct 11 04:30:26 compute-0 sudo[109679]: pam_unix(sudo:session): session closed for user root
Oct 11 04:30:26 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Oct 11 04:30:26 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Oct 11 04:30:27 compute-0 sudo[109832]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmcmxdwcyhqtffqpehptmmqpifhjqnwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157026.543097-241-158989619132013/AnsiballZ_group.py'
Oct 11 04:30:27 compute-0 sudo[109832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:30:27 compute-0 ceph-mon[74243]: pgmap v240: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:30:27 compute-0 python3.9[109834]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 11 04:30:27 compute-0 sudo[109832]: pam_unix(sudo:session): session closed for user root
Oct 11 04:30:27 compute-0 sudo[109984]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxqlngxojdehwgrcurldhfcenobfehle ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157027.5537877-250-112834268738022/AnsiballZ_file.py'
Oct 11 04:30:27 compute-0 sudo[109984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:30:28 compute-0 python3.9[109986]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Oct 11 04:30:28 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v241: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:30:28 compute-0 sudo[109984]: pam_unix(sudo:session): session closed for user root
Oct 11 04:30:28 compute-0 ceph-mon[74243]: 7.8 scrub starts
Oct 11 04:30:28 compute-0 ceph-mon[74243]: 7.8 scrub ok
Oct 11 04:30:28 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Oct 11 04:30:28 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Oct 11 04:30:28 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 8.d scrub starts
Oct 11 04:30:28 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 8.d scrub ok
Oct 11 04:30:29 compute-0 sudo[110136]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-raibfemwgnogzmkrtvwbchxjhlacteei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157028.6821234-261-180171572301453/AnsiballZ_dnf.py'
Oct 11 04:30:29 compute-0 sudo[110136]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:30:29 compute-0 ceph-mon[74243]: pgmap v241: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:30:29 compute-0 ceph-mon[74243]: 7.3 scrub starts
Oct 11 04:30:29 compute-0 ceph-mon[74243]: 7.3 scrub ok
Oct 11 04:30:29 compute-0 python3.9[110138]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 11 04:30:29 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:30:30 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v242: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:30:30 compute-0 ceph-mon[74243]: 8.d scrub starts
Oct 11 04:30:30 compute-0 ceph-mon[74243]: 8.d scrub ok
Oct 11 04:30:30 compute-0 sudo[110136]: pam_unix(sudo:session): session closed for user root
Oct 11 04:30:30 compute-0 sudo[110289]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jodrujuxepgishekyjxjbzmbejbzuxjv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157030.6357617-269-4386500641944/AnsiballZ_file.py'
Oct 11 04:30:30 compute-0 sudo[110289]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:30:31 compute-0 python3.9[110291]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:30:31 compute-0 sudo[110289]: pam_unix(sudo:session): session closed for user root
Oct 11 04:30:31 compute-0 ceph-mon[74243]: pgmap v242: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:30:31 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Oct 11 04:30:31 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Oct 11 04:30:31 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 11.f scrub starts
Oct 11 04:30:31 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 11.f scrub ok
Oct 11 04:30:31 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 3.e scrub starts
Oct 11 04:30:31 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 3.e scrub ok
Oct 11 04:30:31 compute-0 sudo[110441]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmstgmrhakanmtwbxwegzdjfdfopgbrq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157031.3729398-277-224712717837497/AnsiballZ_stat.py'
Oct 11 04:30:31 compute-0 sudo[110441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:30:31 compute-0 python3.9[110443]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:30:31 compute-0 sudo[110441]: pam_unix(sudo:session): session closed for user root
Oct 11 04:30:32 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v243: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:30:32 compute-0 sudo[110519]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzhpavdleoahpcndviiemmhghqqblmxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157031.3729398-277-224712717837497/AnsiballZ_file.py'
Oct 11 04:30:32 compute-0 sudo[110519]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:30:32 compute-0 ceph-mon[74243]: 2.17 scrub starts
Oct 11 04:30:32 compute-0 ceph-mon[74243]: 2.17 scrub ok
Oct 11 04:30:32 compute-0 ceph-mon[74243]: 11.f scrub starts
Oct 11 04:30:32 compute-0 ceph-mon[74243]: 11.f scrub ok
Oct 11 04:30:32 compute-0 python3.9[110521]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:30:32 compute-0 sudo[110519]: pam_unix(sudo:session): session closed for user root
Oct 11 04:30:33 compute-0 sudo[110671]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-liltonezwfundmjyfmijvypnwprilgpg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157032.7460244-290-168806502275756/AnsiballZ_stat.py'
Oct 11 04:30:33 compute-0 sudo[110671]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:30:33 compute-0 ceph-mon[74243]: 3.e scrub starts
Oct 11 04:30:33 compute-0 ceph-mon[74243]: 3.e scrub ok
Oct 11 04:30:33 compute-0 ceph-mon[74243]: pgmap v243: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:30:33 compute-0 python3.9[110673]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:30:33 compute-0 sudo[110671]: pam_unix(sudo:session): session closed for user root
Oct 11 04:30:33 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 5.13 deep-scrub starts
Oct 11 04:30:33 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 5.13 deep-scrub ok
Oct 11 04:30:33 compute-0 sudo[110749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iczralnivftmyheqeblxzytwescewpfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157032.7460244-290-168806502275756/AnsiballZ_file.py'
Oct 11 04:30:33 compute-0 sudo[110749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:30:33 compute-0 python3.9[110751]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:30:33 compute-0 sudo[110749]: pam_unix(sudo:session): session closed for user root
Oct 11 04:30:34 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v244: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:30:34 compute-0 ceph-mon[74243]: 5.13 deep-scrub starts
Oct 11 04:30:34 compute-0 ceph-mon[74243]: 5.13 deep-scrub ok
Oct 11 04:30:34 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 2.15 deep-scrub starts
Oct 11 04:30:34 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 2.15 deep-scrub ok
Oct 11 04:30:34 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 8.c scrub starts
Oct 11 04:30:34 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 8.c scrub ok
Oct 11 04:30:34 compute-0 sudo[110901]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnzasbrnjbylqdjvdjlkbtzommpyhrpv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157034.2270944-305-21847633193180/AnsiballZ_dnf.py'
Oct 11 04:30:34 compute-0 sudo[110901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:30:34 compute-0 python3.9[110903]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 11 04:30:34 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:30:35 compute-0 ceph-mon[74243]: pgmap v244: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:30:35 compute-0 ceph-mon[74243]: 2.15 deep-scrub starts
Oct 11 04:30:35 compute-0 ceph-mon[74243]: 2.15 deep-scrub ok
Oct 11 04:30:35 compute-0 ceph-mon[74243]: 8.c scrub starts
Oct 11 04:30:35 compute-0 ceph-mon[74243]: 8.c scrub ok
Oct 11 04:30:35 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 5.12 deep-scrub starts
Oct 11 04:30:35 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 5.12 deep-scrub ok
Oct 11 04:30:35 compute-0 sudo[110901]: pam_unix(sudo:session): session closed for user root
Oct 11 04:30:36 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v245: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:30:36 compute-0 ceph-mon[74243]: 5.12 deep-scrub starts
Oct 11 04:30:36 compute-0 ceph-mon[74243]: 5.12 deep-scrub ok
Oct 11 04:30:36 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 7.f deep-scrub starts
Oct 11 04:30:36 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 7.f deep-scrub ok
Oct 11 04:30:36 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 7.a deep-scrub starts
Oct 11 04:30:36 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 7.a deep-scrub ok
Oct 11 04:30:36 compute-0 python3.9[111054]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:30:37 compute-0 ceph-mon[74243]: pgmap v245: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:30:37 compute-0 ceph-mon[74243]: 7.f deep-scrub starts
Oct 11 04:30:37 compute-0 ceph-mon[74243]: 7.f deep-scrub ok
Oct 11 04:30:37 compute-0 python3.9[111206]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Oct 11 04:30:38 compute-0 python3.9[111356]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:30:38 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v246: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:30:38 compute-0 ceph-mon[74243]: 7.a deep-scrub starts
Oct 11 04:30:38 compute-0 ceph-mon[74243]: 7.a deep-scrub ok
Oct 11 04:30:39 compute-0 sudo[111506]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbxregbivjejuonecgtjjhezoucvunqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157038.4363508-346-238919409876626/AnsiballZ_systemd.py'
Oct 11 04:30:39 compute-0 sudo[111506]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:30:39 compute-0 ceph-mon[74243]: pgmap v246: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:30:39 compute-0 python3.9[111508]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:30:39 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Oct 11 04:30:39 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 8.f scrub starts
Oct 11 04:30:39 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 8.f scrub ok
Oct 11 04:30:39 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Oct 11 04:30:39 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Oct 11 04:30:39 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Oct 11 04:30:39 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:30:39 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Oct 11 04:30:39 compute-0 sudo[111506]: pam_unix(sudo:session): session closed for user root
Oct 11 04:30:40 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v247: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:30:40 compute-0 ceph-mon[74243]: 8.f scrub starts
Oct 11 04:30:40 compute-0 ceph-mon[74243]: 8.f scrub ok
Oct 11 04:30:40 compute-0 python3.9[111670]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Oct 11 04:30:40 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 8.4 scrub starts
Oct 11 04:30:40 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 8.4 scrub ok
Oct 11 04:30:41 compute-0 ceph-mon[74243]: pgmap v247: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:30:41 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Oct 11 04:30:41 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Oct 11 04:30:42 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v248: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:30:42 compute-0 ceph-mon[74243]: 8.4 scrub starts
Oct 11 04:30:42 compute-0 ceph-mon[74243]: 8.4 scrub ok
Oct 11 04:30:42 compute-0 ceph-mon[74243]: 10.1a scrub starts
Oct 11 04:30:42 compute-0 ceph-mon[74243]: 10.1a scrub ok
Oct 11 04:30:42 compute-0 sudo[111820]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pogqpfiaedmmiyquhtelfqnnbeqmtjzm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157042.1623828-403-178482765952657/AnsiballZ_systemd.py'
Oct 11 04:30:42 compute-0 sudo[111820]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:30:42 compute-0 python3.9[111822]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:30:42 compute-0 sudo[111820]: pam_unix(sudo:session): session closed for user root
Oct 11 04:30:43 compute-0 sudo[111974]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-quribbgmjvazstbvbrlrhurlaifvszoq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157043.0328727-403-227446513213439/AnsiballZ_systemd.py'
Oct 11 04:30:43 compute-0 sudo[111974]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:30:43 compute-0 ceph-mon[74243]: pgmap v248: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:30:43 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 10.19 scrub starts
Oct 11 04:30:43 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 10.19 scrub ok
Oct 11 04:30:43 compute-0 python3.9[111976]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:30:43 compute-0 sudo[111974]: pam_unix(sudo:session): session closed for user root
Oct 11 04:30:44 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v249: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:30:44 compute-0 sshd-session[105305]: Connection closed by 192.168.122.30 port 36992
Oct 11 04:30:44 compute-0 sshd-session[105302]: pam_unix(sshd:session): session closed for user zuul
Oct 11 04:30:44 compute-0 systemd-logind[801]: Session 36 logged out. Waiting for processes to exit.
Oct 11 04:30:44 compute-0 systemd[1]: session-36.scope: Deactivated successfully.
Oct 11 04:30:44 compute-0 systemd[1]: session-36.scope: Consumed 1min 5.290s CPU time.
Oct 11 04:30:44 compute-0 systemd-logind[801]: Removed session 36.
Oct 11 04:30:44 compute-0 ceph-mon[74243]: 10.19 scrub starts
Oct 11 04:30:44 compute-0 ceph-mon[74243]: 10.19 scrub ok
Oct 11 04:30:44 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:30:45 compute-0 ceph-mon[74243]: pgmap v249: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:30:45 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Oct 11 04:30:45 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Oct 11 04:30:45 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Oct 11 04:30:45 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Oct 11 04:30:46 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v250: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:30:46 compute-0 ceph-mon[74243]: 5.16 scrub starts
Oct 11 04:30:46 compute-0 ceph-mon[74243]: 5.16 scrub ok
Oct 11 04:30:46 compute-0 ceph-mon[74243]: 8.1b scrub starts
Oct 11 04:30:46 compute-0 ceph-mon[74243]: 8.1b scrub ok
Oct 11 04:30:46 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Oct 11 04:30:46 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Oct 11 04:30:46 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 3.a scrub starts
Oct 11 04:30:46 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 3.a scrub ok
Oct 11 04:30:47 compute-0 ceph-mon[74243]: pgmap v250: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:30:47 compute-0 ceph-mon[74243]: 5.9 scrub starts
Oct 11 04:30:47 compute-0 ceph-mon[74243]: 5.9 scrub ok
Oct 11 04:30:47 compute-0 ceph-mon[74243]: 3.a scrub starts
Oct 11 04:30:47 compute-0 ceph-mon[74243]: 3.a scrub ok
Oct 11 04:30:48 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v251: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:30:49 compute-0 sshd-session[112003]: Accepted publickey for zuul from 192.168.122.30 port 55214 ssh2: ECDSA SHA256:fsXfQ2jtOYwhi5zevC2AykH2PVPp1BPcbFOVNPoZIaQ
Oct 11 04:30:49 compute-0 systemd-logind[801]: New session 37 of user zuul.
Oct 11 04:30:49 compute-0 systemd[1]: Started Session 37 of User zuul.
Oct 11 04:30:49 compute-0 sshd-session[112003]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 11 04:30:49 compute-0 ceph-mon[74243]: pgmap v251: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:30:49 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:30:50 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v252: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:30:50 compute-0 python3.9[112156]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 04:30:51 compute-0 ceph-mon[74243]: pgmap v252: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:30:51 compute-0 sudo[112310]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-remfoukfwrxsmabmdtgnjfapzdswnlil ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157051.0654838-36-44619639580408/AnsiballZ_getent.py'
Oct 11 04:30:51 compute-0 sudo[112310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:30:51 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 11.1a scrub starts
Oct 11 04:30:51 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 11.1a scrub ok
Oct 11 04:30:51 compute-0 python3.9[112312]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Oct 11 04:30:51 compute-0 sudo[112310]: pam_unix(sudo:session): session closed for user root
Oct 11 04:30:52 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v253: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:30:52 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 5.f scrub starts
Oct 11 04:30:52 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 5.f scrub ok
Oct 11 04:30:52 compute-0 ceph-mon[74243]: 11.1a scrub starts
Oct 11 04:30:52 compute-0 ceph-mon[74243]: 11.1a scrub ok
Oct 11 04:30:52 compute-0 sudo[112463]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-unytdmceppabmsqpgfignfeepohtqcyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157052.1336813-48-111671860564314/AnsiballZ_setup.py'
Oct 11 04:30:52 compute-0 sudo[112463]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:30:52 compute-0 python3.9[112465]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 11 04:30:52 compute-0 sudo[112463]: pam_unix(sudo:session): session closed for user root
Oct 11 04:30:53 compute-0 ceph-mon[74243]: pgmap v253: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:30:53 compute-0 ceph-mon[74243]: 5.f scrub starts
Oct 11 04:30:53 compute-0 ceph-mon[74243]: 5.f scrub ok
Oct 11 04:30:53 compute-0 sudo[112547]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwjqdsvmvqmnxfznnjrfmlmgehxwiqil ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157052.1336813-48-111671860564314/AnsiballZ_dnf.py'
Oct 11 04:30:53 compute-0 sudo[112547]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:30:53 compute-0 python3.9[112549]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct 11 04:30:54 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v254: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:30:54 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Oct 11 04:30:54 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Oct 11 04:30:54 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:30:54 compute-0 sudo[112547]: pam_unix(sudo:session): session closed for user root
Oct 11 04:30:55 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 10.6 scrub starts
Oct 11 04:30:55 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 10.6 scrub ok
Oct 11 04:30:55 compute-0 sudo[112700]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trqnwnpffesrrrwyavpgoaygevvckujh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157055.071097-62-165911015335380/AnsiballZ_dnf.py'
Oct 11 04:30:55 compute-0 sudo[112700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:30:55 compute-0 ceph-mon[74243]: pgmap v254: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:30:55 compute-0 ceph-mon[74243]: 3.11 scrub starts
Oct 11 04:30:55 compute-0 ceph-mon[74243]: 3.11 scrub ok
Oct 11 04:30:55 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Oct 11 04:30:55 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 11.1c scrub starts
Oct 11 04:30:55 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Oct 11 04:30:55 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 11.1c scrub ok
Oct 11 04:30:55 compute-0 python3.9[112702]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 11 04:30:56 compute-0 ceph-mgr[74542]: [balancer INFO root] Optimize plan auto_2025-10-11_04:30:56
Oct 11 04:30:56 compute-0 ceph-mgr[74542]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:30:56 compute-0 ceph-mgr[74542]: [balancer INFO root] do_upmap
Oct 11 04:30:56 compute-0 ceph-mgr[74542]: [balancer INFO root] pools ['cephfs.cephfs.data', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.log', 'vms', 'default.rgw.control', '.mgr', 'volumes', 'images', 'default.rgw.meta', 'backups']
Oct 11 04:30:56 compute-0 ceph-mgr[74542]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:30:56 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v255: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:30:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:30:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:30:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:30:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:30:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:30:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:30:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:30:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:30:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:30:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:30:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:30:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:30:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:30:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:30:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:30:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:30:56 compute-0 ceph-mon[74243]: 10.6 scrub starts
Oct 11 04:30:56 compute-0 ceph-mon[74243]: 10.6 scrub ok
Oct 11 04:30:56 compute-0 ceph-mon[74243]: 7.4 scrub starts
Oct 11 04:30:56 compute-0 ceph-mon[74243]: 11.1c scrub starts
Oct 11 04:30:56 compute-0 ceph-mon[74243]: 7.4 scrub ok
Oct 11 04:30:56 compute-0 ceph-mon[74243]: 11.1c scrub ok
Oct 11 04:30:56 compute-0 sudo[112700]: pam_unix(sudo:session): session closed for user root
Oct 11 04:30:57 compute-0 ceph-mon[74243]: pgmap v255: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:30:57 compute-0 sudo[112853]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exmxtaujlyliinofkiodpxpkzhurmcwh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157056.932433-70-281212521507169/AnsiballZ_systemd.py'
Oct 11 04:30:57 compute-0 sudo[112853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:30:57 compute-0 python3.9[112855]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 11 04:30:58 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v256: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:30:58 compute-0 ceph-mon[74243]: pgmap v256: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:30:58 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 11.1b scrub starts
Oct 11 04:30:58 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 11.1b scrub ok
Oct 11 04:30:59 compute-0 sudo[112853]: pam_unix(sudo:session): session closed for user root
Oct 11 04:30:59 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 2.d deep-scrub starts
Oct 11 04:30:59 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 2.d deep-scrub ok
Oct 11 04:30:59 compute-0 ceph-mon[74243]: 11.1b scrub starts
Oct 11 04:30:59 compute-0 ceph-mon[74243]: 11.1b scrub ok
Oct 11 04:30:59 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:31:00 compute-0 python3.9[113008]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 04:31:00 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v257: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:31:00 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 10.b scrub starts
Oct 11 04:31:00 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 10.b scrub ok
Oct 11 04:31:00 compute-0 ceph-mon[74243]: 2.d deep-scrub starts
Oct 11 04:31:00 compute-0 ceph-mon[74243]: 2.d deep-scrub ok
Oct 11 04:31:00 compute-0 ceph-mon[74243]: pgmap v257: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:31:00 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 11.18 deep-scrub starts
Oct 11 04:31:00 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 11.18 deep-scrub ok
Oct 11 04:31:00 compute-0 sudo[113158]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipobgjhkartciqmjeerfpfsgwgvjjiex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157060.2795155-88-9024707222222/AnsiballZ_sefcontext.py'
Oct 11 04:31:00 compute-0 sudo[113158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:31:01 compute-0 python3.9[113160]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Oct 11 04:31:01 compute-0 sudo[113158]: pam_unix(sudo:session): session closed for user root
Oct 11 04:31:01 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Oct 11 04:31:01 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Oct 11 04:31:01 compute-0 ceph-mon[74243]: 10.b scrub starts
Oct 11 04:31:01 compute-0 ceph-mon[74243]: 10.b scrub ok
Oct 11 04:31:01 compute-0 ceph-mon[74243]: 11.18 deep-scrub starts
Oct 11 04:31:01 compute-0 ceph-mon[74243]: 11.18 deep-scrub ok
Oct 11 04:31:01 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 11.e scrub starts
Oct 11 04:31:01 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 11.e scrub ok
Oct 11 04:31:02 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v258: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:31:02 compute-0 python3.9[113310]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 04:31:02 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 8.b scrub starts
Oct 11 04:31:02 compute-0 ceph-mon[74243]: 11.1e scrub starts
Oct 11 04:31:02 compute-0 ceph-mon[74243]: 11.1e scrub ok
Oct 11 04:31:02 compute-0 ceph-mon[74243]: 11.e scrub starts
Oct 11 04:31:02 compute-0 ceph-mon[74243]: 11.e scrub ok
Oct 11 04:31:02 compute-0 ceph-mon[74243]: pgmap v258: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:31:02 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 8.b scrub ok
Oct 11 04:31:03 compute-0 sudo[113466]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efwaltbpdjgoetmnygzmjzyvavwuqqol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157062.751322-106-88804305717873/AnsiballZ_dnf.py'
Oct 11 04:31:03 compute-0 sudo[113466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:31:03 compute-0 python3.9[113468]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 11 04:31:03 compute-0 ceph-mon[74243]: 8.b scrub starts
Oct 11 04:31:03 compute-0 ceph-mon[74243]: 8.b scrub ok
Oct 11 04:31:04 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v259: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:31:04 compute-0 sudo[113466]: pam_unix(sudo:session): session closed for user root
Oct 11 04:31:04 compute-0 ceph-mon[74243]: pgmap v259: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:31:04 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:31:05 compute-0 sudo[113619]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hejbjzytbevjnaoqwnznbmdfuazafgab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157064.6728935-114-275559944437712/AnsiballZ_command.py'
Oct 11 04:31:05 compute-0 sudo[113619]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:31:05 compute-0 python3.9[113621]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:31:05 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Oct 11 04:31:05 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Oct 11 04:31:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:31:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:31:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:31:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:31:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:31:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:31:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:31:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:31:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:31:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:31:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:31:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:31:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 11 04:31:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:31:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:31:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:31:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:31:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:31:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:31:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:31:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:31:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:31:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 04:31:05 compute-0 ceph-mon[74243]: 7.6 scrub starts
Oct 11 04:31:05 compute-0 ceph-mon[74243]: 7.6 scrub ok
Oct 11 04:31:06 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v260: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:31:06 compute-0 sudo[113619]: pam_unix(sudo:session): session closed for user root
Oct 11 04:31:06 compute-0 sudo[113906]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnltzpenaqumisnhetgecfmcfjphlrnc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157066.3801503-122-97295015359881/AnsiballZ_file.py'
Oct 11 04:31:06 compute-0 sudo[113906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:31:07 compute-0 ceph-mon[74243]: pgmap v260: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:31:07 compute-0 python3.9[113908]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 11 04:31:07 compute-0 sudo[113906]: pam_unix(sudo:session): session closed for user root
Oct 11 04:31:07 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Oct 11 04:31:07 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Oct 11 04:31:07 compute-0 python3.9[114058]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:31:08 compute-0 ceph-mon[74243]: 10.2 scrub starts
Oct 11 04:31:08 compute-0 ceph-mon[74243]: 10.2 scrub ok
Oct 11 04:31:08 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v261: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:31:08 compute-0 sudo[114210]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqejyrhhcitemsizcqhimlxeiodrslau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157068.1896133-138-14422742827520/AnsiballZ_dnf.py'
Oct 11 04:31:08 compute-0 sudo[114210]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:31:08 compute-0 python3.9[114212]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 11 04:31:09 compute-0 ceph-mon[74243]: pgmap v261: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:31:09 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Oct 11 04:31:09 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Oct 11 04:31:09 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:31:09 compute-0 sudo[114210]: pam_unix(sudo:session): session closed for user root
Oct 11 04:31:10 compute-0 ceph-mon[74243]: 2.5 scrub starts
Oct 11 04:31:10 compute-0 ceph-mon[74243]: 2.5 scrub ok
Oct 11 04:31:10 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v262: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:31:10 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 2.a scrub starts
Oct 11 04:31:10 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 2.a scrub ok
Oct 11 04:31:10 compute-0 sudo[114363]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mooerppcrypuxvjczeukrjpwugnflrld ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157070.217262-147-11048698066142/AnsiballZ_dnf.py'
Oct 11 04:31:10 compute-0 sudo[114363]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:31:10 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 8.9 scrub starts
Oct 11 04:31:10 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 8.9 scrub ok
Oct 11 04:31:10 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Oct 11 04:31:10 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Oct 11 04:31:10 compute-0 python3.9[114365]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 11 04:31:11 compute-0 ceph-mon[74243]: pgmap v262: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:31:11 compute-0 ceph-mon[74243]: 2.a scrub starts
Oct 11 04:31:11 compute-0 ceph-mon[74243]: 2.a scrub ok
Oct 11 04:31:11 compute-0 ceph-mon[74243]: 8.9 scrub starts
Oct 11 04:31:11 compute-0 ceph-mon[74243]: 8.9 scrub ok
Oct 11 04:31:11 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 5.c scrub starts
Oct 11 04:31:11 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 5.c scrub ok
Oct 11 04:31:11 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 8.e scrub starts
Oct 11 04:31:11 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 8.e scrub ok
Oct 11 04:31:11 compute-0 sudo[114363]: pam_unix(sudo:session): session closed for user root
Oct 11 04:31:12 compute-0 ceph-mon[74243]: 7.11 scrub starts
Oct 11 04:31:12 compute-0 ceph-mon[74243]: 7.11 scrub ok
Oct 11 04:31:12 compute-0 ceph-mon[74243]: 5.c scrub starts
Oct 11 04:31:12 compute-0 ceph-mon[74243]: 5.c scrub ok
Oct 11 04:31:12 compute-0 ceph-mon[74243]: 8.e scrub starts
Oct 11 04:31:12 compute-0 ceph-mon[74243]: 8.e scrub ok
Oct 11 04:31:12 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v263: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:31:12 compute-0 sudo[114516]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifqsclcwrxptbpouwoabgfiifhbbuqyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157072.2460687-159-241729247292328/AnsiballZ_stat.py'
Oct 11 04:31:12 compute-0 sudo[114516]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:31:12 compute-0 python3.9[114518]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:31:12 compute-0 sudo[114516]: pam_unix(sudo:session): session closed for user root
Oct 11 04:31:13 compute-0 ceph-mon[74243]: pgmap v263: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:31:13 compute-0 sudo[114670]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qznkxqaxoiljhookozwrofnqrirgauxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157073.0258317-167-22834887382368/AnsiballZ_slurp.py'
Oct 11 04:31:13 compute-0 sudo[114670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:31:13 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 3.16 deep-scrub starts
Oct 11 04:31:13 compute-0 python3.9[114672]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Oct 11 04:31:13 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 3.16 deep-scrub ok
Oct 11 04:31:13 compute-0 sudo[114670]: pam_unix(sudo:session): session closed for user root
Oct 11 04:31:14 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v264: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:31:14 compute-0 sshd-session[112006]: Connection closed by 192.168.122.30 port 55214
Oct 11 04:31:14 compute-0 sshd-session[112003]: pam_unix(sshd:session): session closed for user zuul
Oct 11 04:31:14 compute-0 systemd[1]: session-37.scope: Deactivated successfully.
Oct 11 04:31:14 compute-0 systemd[1]: session-37.scope: Consumed 18.906s CPU time.
Oct 11 04:31:14 compute-0 systemd-logind[801]: Session 37 logged out. Waiting for processes to exit.
Oct 11 04:31:14 compute-0 systemd-logind[801]: Removed session 37.
Oct 11 04:31:14 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 11.1 scrub starts
Oct 11 04:31:14 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 11.1 scrub ok
Oct 11 04:31:14 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:31:15 compute-0 ceph-mon[74243]: 3.16 deep-scrub starts
Oct 11 04:31:15 compute-0 ceph-mon[74243]: 3.16 deep-scrub ok
Oct 11 04:31:15 compute-0 ceph-mon[74243]: pgmap v264: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:31:15 compute-0 ceph-mon[74243]: 11.1 scrub starts
Oct 11 04:31:15 compute-0 ceph-mon[74243]: 11.1 scrub ok
Oct 11 04:31:15 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Oct 11 04:31:15 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Oct 11 04:31:15 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 8.1c scrub starts
Oct 11 04:31:15 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 8.1c scrub ok
Oct 11 04:31:16 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v265: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:31:16 compute-0 ceph-mon[74243]: 2.9 scrub starts
Oct 11 04:31:16 compute-0 ceph-mon[74243]: 2.9 scrub ok
Oct 11 04:31:16 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Oct 11 04:31:16 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Oct 11 04:31:17 compute-0 ceph-mon[74243]: 8.1c scrub starts
Oct 11 04:31:17 compute-0 ceph-mon[74243]: 8.1c scrub ok
Oct 11 04:31:17 compute-0 ceph-mon[74243]: pgmap v265: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:31:17 compute-0 ceph-mon[74243]: 4.14 scrub starts
Oct 11 04:31:17 compute-0 ceph-mon[74243]: 4.14 scrub ok
Oct 11 04:31:17 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Oct 11 04:31:17 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Oct 11 04:31:18 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v266: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:31:18 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Oct 11 04:31:18 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Oct 11 04:31:18 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 11.4 scrub starts
Oct 11 04:31:18 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 11.4 scrub ok
Oct 11 04:31:19 compute-0 ceph-mon[74243]: 11.12 scrub starts
Oct 11 04:31:19 compute-0 ceph-mon[74243]: 11.12 scrub ok
Oct 11 04:31:19 compute-0 ceph-mon[74243]: pgmap v266: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:31:19 compute-0 ceph-mon[74243]: 4.12 scrub starts
Oct 11 04:31:19 compute-0 ceph-mon[74243]: 4.12 scrub ok
Oct 11 04:31:19 compute-0 ceph-mon[74243]: 11.4 scrub starts
Oct 11 04:31:19 compute-0 ceph-mon[74243]: 11.4 scrub ok
Oct 11 04:31:19 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 4.f scrub starts
Oct 11 04:31:19 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 4.f scrub ok
Oct 11 04:31:19 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 3.c scrub starts
Oct 11 04:31:19 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 3.c scrub ok
Oct 11 04:31:19 compute-0 sshd-session[114698]: Accepted publickey for zuul from 192.168.122.30 port 37134 ssh2: ECDSA SHA256:fsXfQ2jtOYwhi5zevC2AykH2PVPp1BPcbFOVNPoZIaQ
Oct 11 04:31:19 compute-0 systemd-logind[801]: New session 38 of user zuul.
Oct 11 04:31:19 compute-0 systemd[1]: Started Session 38 of User zuul.
Oct 11 04:31:19 compute-0 sshd-session[114698]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 11 04:31:19 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:31:20 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v267: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:31:20 compute-0 ceph-mon[74243]: 4.f scrub starts
Oct 11 04:31:20 compute-0 ceph-mon[74243]: 4.f scrub ok
Oct 11 04:31:20 compute-0 ceph-mon[74243]: 3.c scrub starts
Oct 11 04:31:20 compute-0 ceph-mon[74243]: 3.c scrub ok
Oct 11 04:31:20 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Oct 11 04:31:20 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Oct 11 04:31:20 compute-0 python3.9[114851]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 04:31:21 compute-0 ceph-mon[74243]: pgmap v267: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:31:21 compute-0 ceph-mon[74243]: 7.9 scrub starts
Oct 11 04:31:21 compute-0 ceph-mon[74243]: 7.9 scrub ok
Oct 11 04:31:21 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 8.6 scrub starts
Oct 11 04:31:21 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 8.6 scrub ok
Oct 11 04:31:21 compute-0 python3.9[115005]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 11 04:31:22 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v268: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:31:22 compute-0 ceph-mon[74243]: 8.6 scrub starts
Oct 11 04:31:22 compute-0 ceph-mon[74243]: 8.6 scrub ok
Oct 11 04:31:23 compute-0 ceph-mon[74243]: pgmap v268: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:31:23 compute-0 python3.9[115198]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:31:23 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Oct 11 04:31:23 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Oct 11 04:31:23 compute-0 sshd-session[114701]: Connection closed by 192.168.122.30 port 37134
Oct 11 04:31:23 compute-0 sshd-session[114698]: pam_unix(sshd:session): session closed for user zuul
Oct 11 04:31:23 compute-0 systemd[1]: session-38.scope: Deactivated successfully.
Oct 11 04:31:23 compute-0 systemd[1]: session-38.scope: Consumed 2.829s CPU time.
Oct 11 04:31:23 compute-0 systemd-logind[801]: Session 38 logged out. Waiting for processes to exit.
Oct 11 04:31:23 compute-0 systemd-logind[801]: Removed session 38.
Oct 11 04:31:24 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v269: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:31:24 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:31:25 compute-0 sudo[115224]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:31:25 compute-0 sudo[115224]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:31:25 compute-0 sudo[115224]: pam_unix(sudo:session): session closed for user root
Oct 11 04:31:25 compute-0 ceph-mon[74243]: 7.15 scrub starts
Oct 11 04:31:25 compute-0 ceph-mon[74243]: 7.15 scrub ok
Oct 11 04:31:25 compute-0 ceph-mon[74243]: pgmap v269: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:31:25 compute-0 sudo[115249]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:31:25 compute-0 sudo[115249]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:31:25 compute-0 sudo[115249]: pam_unix(sudo:session): session closed for user root
Oct 11 04:31:25 compute-0 sudo[115274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:31:25 compute-0 sudo[115274]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:31:25 compute-0 sudo[115274]: pam_unix(sudo:session): session closed for user root
Oct 11 04:31:25 compute-0 sudo[115299]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 04:31:25 compute-0 sudo[115299]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:31:25 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 11.6 scrub starts
Oct 11 04:31:25 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 11.6 scrub ok
Oct 11 04:31:25 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Oct 11 04:31:25 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Oct 11 04:31:25 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 11.1f scrub starts
Oct 11 04:31:25 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 11.1f scrub ok
Oct 11 04:31:26 compute-0 sudo[115299]: pam_unix(sudo:session): session closed for user root
Oct 11 04:31:26 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:31:26 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:31:26 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:31:26 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:31:26 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:31:26 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:31:26 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev 56167491-7d42-41de-9823-607ab6d5d89c does not exist
Oct 11 04:31:26 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev 2df74a7d-7d81-40a8-871b-65787cbc34d6 does not exist
Oct 11 04:31:26 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev 0758ada5-aab2-414b-b666-8198bdb8fc23 does not exist
Oct 11 04:31:26 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:31:26 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:31:26 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:31:26 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:31:26 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:31:26 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:31:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:31:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:31:26 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v270: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:31:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:31:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:31:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:31:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:31:26 compute-0 sudo[115355]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:31:26 compute-0 sudo[115355]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:31:26 compute-0 sudo[115355]: pam_unix(sudo:session): session closed for user root
Oct 11 04:31:26 compute-0 sudo[115380]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:31:26 compute-0 sudo[115380]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:31:26 compute-0 sudo[115380]: pam_unix(sudo:session): session closed for user root
Oct 11 04:31:26 compute-0 ceph-mon[74243]: 11.6 scrub starts
Oct 11 04:31:26 compute-0 ceph-mon[74243]: 11.6 scrub ok
Oct 11 04:31:26 compute-0 ceph-mon[74243]: 4.10 scrub starts
Oct 11 04:31:26 compute-0 ceph-mon[74243]: 4.10 scrub ok
Oct 11 04:31:26 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:31:26 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:31:26 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:31:26 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:31:26 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:31:26 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:31:26 compute-0 sudo[115405]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:31:26 compute-0 sudo[115405]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:31:26 compute-0 sudo[115405]: pam_unix(sudo:session): session closed for user root
Oct 11 04:31:26 compute-0 sudo[115430]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 04:31:26 compute-0 sudo[115430]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:31:26 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 3.f scrub starts
Oct 11 04:31:26 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 3.f scrub ok
Oct 11 04:31:26 compute-0 podman[115496]: 2025-10-11 04:31:26.898065617 +0000 UTC m=+0.070022116 container create 5148809076bdb57a96b39b2da6d04e8828107699d85b6aab94618b6ad119a2e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:31:26 compute-0 systemd[75878]: Created slice User Background Tasks Slice.
Oct 11 04:31:26 compute-0 systemd[75878]: Starting Cleanup of User's Temporary Files and Directories...
Oct 11 04:31:26 compute-0 systemd[1]: Started libpod-conmon-5148809076bdb57a96b39b2da6d04e8828107699d85b6aab94618b6ad119a2e4.scope.
Oct 11 04:31:26 compute-0 systemd[75878]: Finished Cleanup of User's Temporary Files and Directories.
Oct 11 04:31:26 compute-0 podman[115496]: 2025-10-11 04:31:26.869771958 +0000 UTC m=+0.041728537 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:31:26 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:31:27 compute-0 podman[115496]: 2025-10-11 04:31:27.004905134 +0000 UTC m=+0.176861633 container init 5148809076bdb57a96b39b2da6d04e8828107699d85b6aab94618b6ad119a2e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lehmann, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:31:27 compute-0 podman[115496]: 2025-10-11 04:31:27.013425567 +0000 UTC m=+0.185382096 container start 5148809076bdb57a96b39b2da6d04e8828107699d85b6aab94618b6ad119a2e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lehmann, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:31:27 compute-0 podman[115496]: 2025-10-11 04:31:27.016866114 +0000 UTC m=+0.188822593 container attach 5148809076bdb57a96b39b2da6d04e8828107699d85b6aab94618b6ad119a2e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lehmann, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:31:27 compute-0 systemd[1]: libpod-5148809076bdb57a96b39b2da6d04e8828107699d85b6aab94618b6ad119a2e4.scope: Deactivated successfully.
Oct 11 04:31:27 compute-0 jolly_lehmann[115513]: 167 167
Oct 11 04:31:27 compute-0 conmon[115513]: conmon 5148809076bdb57a96b3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5148809076bdb57a96b39b2da6d04e8828107699d85b6aab94618b6ad119a2e4.scope/container/memory.events
Oct 11 04:31:27 compute-0 podman[115496]: 2025-10-11 04:31:27.02389159 +0000 UTC m=+0.195848119 container died 5148809076bdb57a96b39b2da6d04e8828107699d85b6aab94618b6ad119a2e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lehmann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 11 04:31:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-be250a55d612a14f82676c07dd3cf7541044a4f2dd4e1e10b041ea34f77795a3-merged.mount: Deactivated successfully.
Oct 11 04:31:27 compute-0 podman[115496]: 2025-10-11 04:31:27.066712403 +0000 UTC m=+0.238668902 container remove 5148809076bdb57a96b39b2da6d04e8828107699d85b6aab94618b6ad119a2e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lehmann, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:31:27 compute-0 systemd[1]: libpod-conmon-5148809076bdb57a96b39b2da6d04e8828107699d85b6aab94618b6ad119a2e4.scope: Deactivated successfully.
Oct 11 04:31:27 compute-0 podman[115536]: 2025-10-11 04:31:27.263465834 +0000 UTC m=+0.056858446 container create 617b2014fd9140af77c384788e00aa4cbb9c83ec1bc7db4134b8ee6ef0d74277 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_dhawan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 11 04:31:27 compute-0 ceph-mon[74243]: 11.1f scrub starts
Oct 11 04:31:27 compute-0 ceph-mon[74243]: 11.1f scrub ok
Oct 11 04:31:27 compute-0 ceph-mon[74243]: pgmap v270: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:31:27 compute-0 ceph-mon[74243]: 3.f scrub starts
Oct 11 04:31:27 compute-0 ceph-mon[74243]: 3.f scrub ok
Oct 11 04:31:27 compute-0 systemd[1]: Started libpod-conmon-617b2014fd9140af77c384788e00aa4cbb9c83ec1bc7db4134b8ee6ef0d74277.scope.
Oct 11 04:31:27 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:31:27 compute-0 podman[115536]: 2025-10-11 04:31:27.242995021 +0000 UTC m=+0.036387623 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:31:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c76e9b5ee866733591d1c3b0fb781a17307a8b6517258332a2988ce65d18274/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:31:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c76e9b5ee866733591d1c3b0fb781a17307a8b6517258332a2988ce65d18274/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:31:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c76e9b5ee866733591d1c3b0fb781a17307a8b6517258332a2988ce65d18274/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:31:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c76e9b5ee866733591d1c3b0fb781a17307a8b6517258332a2988ce65d18274/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:31:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c76e9b5ee866733591d1c3b0fb781a17307a8b6517258332a2988ce65d18274/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:31:27 compute-0 podman[115536]: 2025-10-11 04:31:27.367521462 +0000 UTC m=+0.160914084 container init 617b2014fd9140af77c384788e00aa4cbb9c83ec1bc7db4134b8ee6ef0d74277 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_dhawan, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:31:27 compute-0 podman[115536]: 2025-10-11 04:31:27.383308008 +0000 UTC m=+0.176700620 container start 617b2014fd9140af77c384788e00aa4cbb9c83ec1bc7db4134b8ee6ef0d74277 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_dhawan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:31:27 compute-0 podman[115536]: 2025-10-11 04:31:27.390666793 +0000 UTC m=+0.184059465 container attach 617b2014fd9140af77c384788e00aa4cbb9c83ec1bc7db4134b8ee6ef0d74277 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_dhawan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:31:27 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 6.8 deep-scrub starts
Oct 11 04:31:27 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 6.8 deep-scrub ok
Oct 11 04:31:28 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v271: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:31:28 compute-0 agitated_dhawan[115553]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:31:28 compute-0 agitated_dhawan[115553]: --> relative data size: 1.0
Oct 11 04:31:28 compute-0 agitated_dhawan[115553]: --> All data devices are unavailable
Oct 11 04:31:28 compute-0 systemd[1]: libpod-617b2014fd9140af77c384788e00aa4cbb9c83ec1bc7db4134b8ee6ef0d74277.scope: Deactivated successfully.
Oct 11 04:31:28 compute-0 systemd[1]: libpod-617b2014fd9140af77c384788e00aa4cbb9c83ec1bc7db4134b8ee6ef0d74277.scope: Consumed 1.073s CPU time.
Oct 11 04:31:28 compute-0 podman[115536]: 2025-10-11 04:31:28.499506305 +0000 UTC m=+1.292898917 container died 617b2014fd9140af77c384788e00aa4cbb9c83ec1bc7db4134b8ee6ef0d74277 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_dhawan, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 11 04:31:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-0c76e9b5ee866733591d1c3b0fb781a17307a8b6517258332a2988ce65d18274-merged.mount: Deactivated successfully.
Oct 11 04:31:28 compute-0 podman[115536]: 2025-10-11 04:31:28.577803877 +0000 UTC m=+1.371196459 container remove 617b2014fd9140af77c384788e00aa4cbb9c83ec1bc7db4134b8ee6ef0d74277 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_dhawan, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 11 04:31:28 compute-0 sshd-session[115582]: Accepted publickey for zuul from 192.168.122.30 port 33318 ssh2: ECDSA SHA256:fsXfQ2jtOYwhi5zevC2AykH2PVPp1BPcbFOVNPoZIaQ
Oct 11 04:31:28 compute-0 systemd[1]: libpod-conmon-617b2014fd9140af77c384788e00aa4cbb9c83ec1bc7db4134b8ee6ef0d74277.scope: Deactivated successfully.
Oct 11 04:31:28 compute-0 systemd-logind[801]: New session 39 of user zuul.
Oct 11 04:31:28 compute-0 systemd[1]: Started Session 39 of User zuul.
Oct 11 04:31:28 compute-0 sudo[115430]: pam_unix(sudo:session): session closed for user root
Oct 11 04:31:28 compute-0 sshd-session[115582]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 11 04:31:28 compute-0 sudo[115599]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:31:28 compute-0 sudo[115599]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:31:28 compute-0 sudo[115599]: pam_unix(sudo:session): session closed for user root
Oct 11 04:31:28 compute-0 sudo[115641]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:31:28 compute-0 sudo[115641]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:31:28 compute-0 sudo[115641]: pam_unix(sudo:session): session closed for user root
Oct 11 04:31:28 compute-0 sudo[115692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:31:28 compute-0 sudo[115692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:31:28 compute-0 sudo[115692]: pam_unix(sudo:session): session closed for user root
Oct 11 04:31:28 compute-0 sudo[115726]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -- lvm list --format json
Oct 11 04:31:28 compute-0 sudo[115726]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:31:29 compute-0 ceph-mon[74243]: 6.8 deep-scrub starts
Oct 11 04:31:29 compute-0 ceph-mon[74243]: 6.8 deep-scrub ok
Oct 11 04:31:29 compute-0 ceph-mon[74243]: pgmap v271: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:31:29 compute-0 podman[115817]: 2025-10-11 04:31:29.337141389 +0000 UTC m=+0.057434210 container create c40c821aae196da41516b1e04cf77a7539fa6733aa6a89eaebe8607ba03c45a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_kare, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 11 04:31:29 compute-0 systemd[1]: Started libpod-conmon-c40c821aae196da41516b1e04cf77a7539fa6733aa6a89eaebe8607ba03c45a9.scope.
Oct 11 04:31:29 compute-0 podman[115817]: 2025-10-11 04:31:29.313982949 +0000 UTC m=+0.034275760 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:31:29 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:31:29 compute-0 podman[115817]: 2025-10-11 04:31:29.432694234 +0000 UTC m=+0.152987035 container init c40c821aae196da41516b1e04cf77a7539fa6733aa6a89eaebe8607ba03c45a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_kare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 04:31:29 compute-0 podman[115817]: 2025-10-11 04:31:29.447388472 +0000 UTC m=+0.167681303 container start c40c821aae196da41516b1e04cf77a7539fa6733aa6a89eaebe8607ba03c45a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_kare, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:31:29 compute-0 podman[115817]: 2025-10-11 04:31:29.451919886 +0000 UTC m=+0.172212687 container attach c40c821aae196da41516b1e04cf77a7539fa6733aa6a89eaebe8607ba03c45a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:31:29 compute-0 silly_kare[115854]: 167 167
Oct 11 04:31:29 compute-0 systemd[1]: libpod-c40c821aae196da41516b1e04cf77a7539fa6733aa6a89eaebe8607ba03c45a9.scope: Deactivated successfully.
Oct 11 04:31:29 compute-0 podman[115817]: 2025-10-11 04:31:29.45806579 +0000 UTC m=+0.178358631 container died c40c821aae196da41516b1e04cf77a7539fa6733aa6a89eaebe8607ba03c45a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_kare, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 11 04:31:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-e3ad61b8d27fc8229ebf79a2e116fbe9759bb494ad5f0850fa256b061c92d8dd-merged.mount: Deactivated successfully.
Oct 11 04:31:29 compute-0 podman[115817]: 2025-10-11 04:31:29.510766591 +0000 UTC m=+0.231059392 container remove c40c821aae196da41516b1e04cf77a7539fa6733aa6a89eaebe8607ba03c45a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_kare, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 11 04:31:29 compute-0 systemd[1]: libpod-conmon-c40c821aae196da41516b1e04cf77a7539fa6733aa6a89eaebe8607ba03c45a9.scope: Deactivated successfully.
Oct 11 04:31:29 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 4.d scrub starts
Oct 11 04:31:29 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 4.d scrub ok
Oct 11 04:31:29 compute-0 podman[115928]: 2025-10-11 04:31:29.753046903 +0000 UTC m=+0.085076443 container create bb91ca5f047eabb797eb06d238daf4c6d2c9a7af1f236e51c815298e7585fe28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_dirac, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507)
Oct 11 04:31:29 compute-0 systemd[1]: Started libpod-conmon-bb91ca5f047eabb797eb06d238daf4c6d2c9a7af1f236e51c815298e7585fe28.scope.
Oct 11 04:31:29 compute-0 podman[115928]: 2025-10-11 04:31:29.723018161 +0000 UTC m=+0.055047761 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:31:29 compute-0 python3.9[115921]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 04:31:29 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:31:29 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:31:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d3f1017bdac825b04d14c8947c715a6dd5f3fede8ade178bcad1c77e20103a6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:31:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d3f1017bdac825b04d14c8947c715a6dd5f3fede8ade178bcad1c77e20103a6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:31:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d3f1017bdac825b04d14c8947c715a6dd5f3fede8ade178bcad1c77e20103a6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:31:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d3f1017bdac825b04d14c8947c715a6dd5f3fede8ade178bcad1c77e20103a6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:31:29 compute-0 podman[115928]: 2025-10-11 04:31:29.861013509 +0000 UTC m=+0.193043099 container init bb91ca5f047eabb797eb06d238daf4c6d2c9a7af1f236e51c815298e7585fe28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_dirac, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 11 04:31:29 compute-0 podman[115928]: 2025-10-11 04:31:29.876870497 +0000 UTC m=+0.208900007 container start bb91ca5f047eabb797eb06d238daf4c6d2c9a7af1f236e51c815298e7585fe28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:31:29 compute-0 podman[115928]: 2025-10-11 04:31:29.880737404 +0000 UTC m=+0.212766914 container attach bb91ca5f047eabb797eb06d238daf4c6d2c9a7af1f236e51c815298e7585fe28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_dirac, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:31:30 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v272: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:31:30 compute-0 ceph-mon[74243]: 4.d scrub starts
Oct 11 04:31:30 compute-0 ceph-mon[74243]: 4.d scrub ok
Oct 11 04:31:30 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Oct 11 04:31:30 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Oct 11 04:31:30 compute-0 awesome_dirac[115944]: {
Oct 11 04:31:30 compute-0 awesome_dirac[115944]:     "0": [
Oct 11 04:31:30 compute-0 awesome_dirac[115944]:         {
Oct 11 04:31:30 compute-0 awesome_dirac[115944]:             "devices": [
Oct 11 04:31:30 compute-0 awesome_dirac[115944]:                 "/dev/loop3"
Oct 11 04:31:30 compute-0 awesome_dirac[115944]:             ],
Oct 11 04:31:30 compute-0 awesome_dirac[115944]:             "lv_name": "ceph_lv0",
Oct 11 04:31:30 compute-0 awesome_dirac[115944]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:31:30 compute-0 awesome_dirac[115944]:             "lv_size": "21470642176",
Oct 11 04:31:30 compute-0 awesome_dirac[115944]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29ed28f5-c2da-4c6f-bb64-dc7391248f4a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:31:30 compute-0 awesome_dirac[115944]:             "lv_uuid": "SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf",
Oct 11 04:31:30 compute-0 awesome_dirac[115944]:             "name": "ceph_lv0",
Oct 11 04:31:30 compute-0 awesome_dirac[115944]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:31:30 compute-0 awesome_dirac[115944]:             "tags": {
Oct 11 04:31:30 compute-0 awesome_dirac[115944]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:31:30 compute-0 awesome_dirac[115944]:                 "ceph.block_uuid": "SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf",
Oct 11 04:31:30 compute-0 awesome_dirac[115944]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:31:30 compute-0 awesome_dirac[115944]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:31:30 compute-0 awesome_dirac[115944]:                 "ceph.cluster_name": "ceph",
Oct 11 04:31:30 compute-0 awesome_dirac[115944]:                 "ceph.crush_device_class": "",
Oct 11 04:31:30 compute-0 awesome_dirac[115944]:                 "ceph.encrypted": "0",
Oct 11 04:31:30 compute-0 awesome_dirac[115944]:                 "ceph.osd_fsid": "29ed28f5-c2da-4c6f-bb64-dc7391248f4a",
Oct 11 04:31:30 compute-0 awesome_dirac[115944]:                 "ceph.osd_id": "0",
Oct 11 04:31:30 compute-0 awesome_dirac[115944]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:31:30 compute-0 awesome_dirac[115944]:                 "ceph.type": "block",
Oct 11 04:31:30 compute-0 awesome_dirac[115944]:                 "ceph.vdo": "0"
Oct 11 04:31:30 compute-0 awesome_dirac[115944]:             },
Oct 11 04:31:30 compute-0 awesome_dirac[115944]:             "type": "block",
Oct 11 04:31:30 compute-0 awesome_dirac[115944]:             "vg_name": "ceph_vg0"
Oct 11 04:31:30 compute-0 awesome_dirac[115944]:         }
Oct 11 04:31:30 compute-0 awesome_dirac[115944]:     ],
Oct 11 04:31:30 compute-0 awesome_dirac[115944]:     "1": [
Oct 11 04:31:30 compute-0 awesome_dirac[115944]:         {
Oct 11 04:31:30 compute-0 awesome_dirac[115944]:             "devices": [
Oct 11 04:31:30 compute-0 awesome_dirac[115944]:                 "/dev/loop4"
Oct 11 04:31:30 compute-0 awesome_dirac[115944]:             ],
Oct 11 04:31:30 compute-0 awesome_dirac[115944]:             "lv_name": "ceph_lv1",
Oct 11 04:31:30 compute-0 awesome_dirac[115944]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:31:30 compute-0 awesome_dirac[115944]:             "lv_size": "21470642176",
Oct 11 04:31:30 compute-0 awesome_dirac[115944]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=235849fc-4683-43e5-9b6a-a0d6f8d1cee8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:31:30 compute-0 awesome_dirac[115944]:             "lv_uuid": "N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol",
Oct 11 04:31:30 compute-0 awesome_dirac[115944]:             "name": "ceph_lv1",
Oct 11 04:31:30 compute-0 awesome_dirac[115944]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:31:30 compute-0 awesome_dirac[115944]:             "tags": {
Oct 11 04:31:30 compute-0 awesome_dirac[115944]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:31:30 compute-0 awesome_dirac[115944]:                 "ceph.block_uuid": "N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol",
Oct 11 04:31:30 compute-0 awesome_dirac[115944]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:31:30 compute-0 awesome_dirac[115944]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:31:30 compute-0 awesome_dirac[115944]:                 "ceph.cluster_name": "ceph",
Oct 11 04:31:30 compute-0 awesome_dirac[115944]:                 "ceph.crush_device_class": "",
Oct 11 04:31:30 compute-0 awesome_dirac[115944]:                 "ceph.encrypted": "0",
Oct 11 04:31:30 compute-0 awesome_dirac[115944]:                 "ceph.osd_fsid": "235849fc-4683-43e5-9b6a-a0d6f8d1cee8",
Oct 11 04:31:30 compute-0 awesome_dirac[115944]:                 "ceph.osd_id": "1",
Oct 11 04:31:30 compute-0 awesome_dirac[115944]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:31:30 compute-0 awesome_dirac[115944]:                 "ceph.type": "block",
Oct 11 04:31:30 compute-0 awesome_dirac[115944]:                 "ceph.vdo": "0"
Oct 11 04:31:30 compute-0 awesome_dirac[115944]:             },
Oct 11 04:31:30 compute-0 awesome_dirac[115944]:             "type": "block",
Oct 11 04:31:30 compute-0 awesome_dirac[115944]:             "vg_name": "ceph_vg1"
Oct 11 04:31:30 compute-0 awesome_dirac[115944]:         }
Oct 11 04:31:30 compute-0 awesome_dirac[115944]:     ],
Oct 11 04:31:30 compute-0 awesome_dirac[115944]:     "2": [
Oct 11 04:31:30 compute-0 awesome_dirac[115944]:         {
Oct 11 04:31:30 compute-0 awesome_dirac[115944]:             "devices": [
Oct 11 04:31:30 compute-0 awesome_dirac[115944]:                 "/dev/loop5"
Oct 11 04:31:30 compute-0 awesome_dirac[115944]:             ],
Oct 11 04:31:30 compute-0 awesome_dirac[115944]:             "lv_name": "ceph_lv2",
Oct 11 04:31:30 compute-0 awesome_dirac[115944]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:31:30 compute-0 awesome_dirac[115944]:             "lv_size": "21470642176",
Oct 11 04:31:30 compute-0 awesome_dirac[115944]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=30486846-2cc7-4787-8e36-0fb42ef328c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:31:30 compute-0 awesome_dirac[115944]:             "lv_uuid": "HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a",
Oct 11 04:31:30 compute-0 awesome_dirac[115944]:             "name": "ceph_lv2",
Oct 11 04:31:30 compute-0 awesome_dirac[115944]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:31:30 compute-0 awesome_dirac[115944]:             "tags": {
Oct 11 04:31:30 compute-0 awesome_dirac[115944]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:31:30 compute-0 awesome_dirac[115944]:                 "ceph.block_uuid": "HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a",
Oct 11 04:31:30 compute-0 awesome_dirac[115944]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:31:30 compute-0 awesome_dirac[115944]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:31:30 compute-0 awesome_dirac[115944]:                 "ceph.cluster_name": "ceph",
Oct 11 04:31:30 compute-0 awesome_dirac[115944]:                 "ceph.crush_device_class": "",
Oct 11 04:31:30 compute-0 awesome_dirac[115944]:                 "ceph.encrypted": "0",
Oct 11 04:31:30 compute-0 awesome_dirac[115944]:                 "ceph.osd_fsid": "30486846-2cc7-4787-8e36-0fb42ef328c5",
Oct 11 04:31:30 compute-0 awesome_dirac[115944]:                 "ceph.osd_id": "2",
Oct 11 04:31:30 compute-0 awesome_dirac[115944]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:31:30 compute-0 awesome_dirac[115944]:                 "ceph.type": "block",
Oct 11 04:31:30 compute-0 awesome_dirac[115944]:                 "ceph.vdo": "0"
Oct 11 04:31:30 compute-0 awesome_dirac[115944]:             },
Oct 11 04:31:30 compute-0 awesome_dirac[115944]:             "type": "block",
Oct 11 04:31:30 compute-0 awesome_dirac[115944]:             "vg_name": "ceph_vg2"
Oct 11 04:31:30 compute-0 awesome_dirac[115944]:         }
Oct 11 04:31:30 compute-0 awesome_dirac[115944]:     ]
Oct 11 04:31:30 compute-0 awesome_dirac[115944]: }
Oct 11 04:31:30 compute-0 systemd[1]: libpod-bb91ca5f047eabb797eb06d238daf4c6d2c9a7af1f236e51c815298e7585fe28.scope: Deactivated successfully.
Oct 11 04:31:30 compute-0 podman[115928]: 2025-10-11 04:31:30.743836905 +0000 UTC m=+1.075866445 container died bb91ca5f047eabb797eb06d238daf4c6d2c9a7af1f236e51c815298e7585fe28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_dirac, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 11 04:31:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-8d3f1017bdac825b04d14c8947c715a6dd5f3fede8ade178bcad1c77e20103a6-merged.mount: Deactivated successfully.
Oct 11 04:31:30 compute-0 podman[115928]: 2025-10-11 04:31:30.813534022 +0000 UTC m=+1.145563542 container remove bb91ca5f047eabb797eb06d238daf4c6d2c9a7af1f236e51c815298e7585fe28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 11 04:31:30 compute-0 systemd[1]: libpod-conmon-bb91ca5f047eabb797eb06d238daf4c6d2c9a7af1f236e51c815298e7585fe28.scope: Deactivated successfully.
Oct 11 04:31:30 compute-0 sudo[115726]: pam_unix(sudo:session): session closed for user root
Oct 11 04:31:30 compute-0 python3.9[116102]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 04:31:30 compute-0 sudo[116123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:31:30 compute-0 sudo[116123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:31:30 compute-0 sudo[116123]: pam_unix(sudo:session): session closed for user root
Oct 11 04:31:31 compute-0 sudo[116151]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:31:31 compute-0 sudo[116151]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:31:31 compute-0 sudo[116151]: pam_unix(sudo:session): session closed for user root
Oct 11 04:31:31 compute-0 sudo[116176]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:31:31 compute-0 sudo[116176]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:31:31 compute-0 sudo[116176]: pam_unix(sudo:session): session closed for user root
Oct 11 04:31:31 compute-0 sudo[116214]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -- raw list --format json
Oct 11 04:31:31 compute-0 sudo[116214]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:31:31 compute-0 ceph-mon[74243]: pgmap v272: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:31:31 compute-0 ceph-mon[74243]: 6.1 scrub starts
Oct 11 04:31:31 compute-0 ceph-mon[74243]: 6.1 scrub ok
Oct 11 04:31:31 compute-0 podman[116389]: 2025-10-11 04:31:31.589812139 +0000 UTC m=+0.067384940 container create 9c3660afff5ae722348171656cf9eeee12d4fe6c826b44797b4d65181b1076d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 11 04:31:31 compute-0 sudo[116429]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpsrchfaadlxznqbazwjqvomvwqznltf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157091.2237308-40-120978671887121/AnsiballZ_setup.py'
Oct 11 04:31:31 compute-0 sudo[116429]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:31:31 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Oct 11 04:31:31 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Oct 11 04:31:31 compute-0 systemd[1]: Started libpod-conmon-9c3660afff5ae722348171656cf9eeee12d4fe6c826b44797b4d65181b1076d9.scope.
Oct 11 04:31:31 compute-0 podman[116389]: 2025-10-11 04:31:31.561740025 +0000 UTC m=+0.039312886 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:31:31 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:31:31 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 9.e deep-scrub starts
Oct 11 04:31:31 compute-0 podman[116389]: 2025-10-11 04:31:31.707240062 +0000 UTC m=+0.184812913 container init 9c3660afff5ae722348171656cf9eeee12d4fe6c826b44797b4d65181b1076d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_ptolemy, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:31:31 compute-0 podman[116389]: 2025-10-11 04:31:31.71754684 +0000 UTC m=+0.195119611 container start 9c3660afff5ae722348171656cf9eeee12d4fe6c826b44797b4d65181b1076d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_ptolemy, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 11 04:31:31 compute-0 podman[116389]: 2025-10-11 04:31:31.720450503 +0000 UTC m=+0.198023294 container attach 9c3660afff5ae722348171656cf9eeee12d4fe6c826b44797b4d65181b1076d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_ptolemy, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 11 04:31:31 compute-0 crazy_ptolemy[116434]: 167 167
Oct 11 04:31:31 compute-0 systemd[1]: libpod-9c3660afff5ae722348171656cf9eeee12d4fe6c826b44797b4d65181b1076d9.scope: Deactivated successfully.
Oct 11 04:31:31 compute-0 podman[116389]: 2025-10-11 04:31:31.725179641 +0000 UTC m=+0.202752452 container died 9c3660afff5ae722348171656cf9eeee12d4fe6c826b44797b4d65181b1076d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_ptolemy, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 11 04:31:31 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 9.e deep-scrub ok
Oct 11 04:31:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-5bcd873fa032b77b981915220f009107b288f960aafdefa92855c32f8cebdb5a-merged.mount: Deactivated successfully.
Oct 11 04:31:31 compute-0 podman[116389]: 2025-10-11 04:31:31.772512038 +0000 UTC m=+0.250084809 container remove 9c3660afff5ae722348171656cf9eeee12d4fe6c826b44797b4d65181b1076d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_ptolemy, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:31:31 compute-0 systemd[1]: libpod-conmon-9c3660afff5ae722348171656cf9eeee12d4fe6c826b44797b4d65181b1076d9.scope: Deactivated successfully.
Oct 11 04:31:31 compute-0 python3.9[116433]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 11 04:31:31 compute-0 podman[116459]: 2025-10-11 04:31:31.996486261 +0000 UTC m=+0.057752928 container create 957b86625155e14e61cdca9eacc00af3302f5d513566853e779a2d85f9a0f620 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_shamir, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:31:32 compute-0 systemd[1]: Started libpod-conmon-957b86625155e14e61cdca9eacc00af3302f5d513566853e779a2d85f9a0f620.scope.
Oct 11 04:31:32 compute-0 podman[116459]: 2025-10-11 04:31:31.974129651 +0000 UTC m=+0.035396328 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:31:32 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:31:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c98f411c11b43ecaef3b6f3f32aa5133fe8a207159ea2a61a6e33f6ae21ead0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:31:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c98f411c11b43ecaef3b6f3f32aa5133fe8a207159ea2a61a6e33f6ae21ead0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:31:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c98f411c11b43ecaef3b6f3f32aa5133fe8a207159ea2a61a6e33f6ae21ead0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:31:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c98f411c11b43ecaef3b6f3f32aa5133fe8a207159ea2a61a6e33f6ae21ead0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:31:32 compute-0 podman[116459]: 2025-10-11 04:31:32.092648262 +0000 UTC m=+0.153914929 container init 957b86625155e14e61cdca9eacc00af3302f5d513566853e779a2d85f9a0f620 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_shamir, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 11 04:31:32 compute-0 podman[116459]: 2025-10-11 04:31:32.107629947 +0000 UTC m=+0.168896624 container start 957b86625155e14e61cdca9eacc00af3302f5d513566853e779a2d85f9a0f620 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_shamir, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 11 04:31:32 compute-0 podman[116459]: 2025-10-11 04:31:32.111281779 +0000 UTC m=+0.172548486 container attach 957b86625155e14e61cdca9eacc00af3302f5d513566853e779a2d85f9a0f620 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_shamir, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:31:32 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v273: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:31:32 compute-0 sudo[116429]: pam_unix(sudo:session): session closed for user root
Oct 11 04:31:32 compute-0 ceph-mon[74243]: 4.9 scrub starts
Oct 11 04:31:32 compute-0 ceph-mon[74243]: 4.9 scrub ok
Oct 11 04:31:32 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Oct 11 04:31:32 compute-0 sudo[116559]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxegapufuaymndbmzfhhithegdcaonbq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157091.2237308-40-120978671887121/AnsiballZ_dnf.py'
Oct 11 04:31:32 compute-0 sudo[116559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:31:32 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Oct 11 04:31:32 compute-0 python3.9[116561]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 11 04:31:33 compute-0 pedantic_shamir[116479]: {
Oct 11 04:31:33 compute-0 pedantic_shamir[116479]:     "235849fc-4683-43e5-9b6a-a0d6f8d1cee8": {
Oct 11 04:31:33 compute-0 pedantic_shamir[116479]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:31:33 compute-0 pedantic_shamir[116479]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:31:33 compute-0 pedantic_shamir[116479]:         "osd_id": 1,
Oct 11 04:31:33 compute-0 pedantic_shamir[116479]:         "osd_uuid": "235849fc-4683-43e5-9b6a-a0d6f8d1cee8",
Oct 11 04:31:33 compute-0 pedantic_shamir[116479]:         "type": "bluestore"
Oct 11 04:31:33 compute-0 pedantic_shamir[116479]:     },
Oct 11 04:31:33 compute-0 pedantic_shamir[116479]:     "29ed28f5-c2da-4c6f-bb64-dc7391248f4a": {
Oct 11 04:31:33 compute-0 pedantic_shamir[116479]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:31:33 compute-0 pedantic_shamir[116479]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:31:33 compute-0 pedantic_shamir[116479]:         "osd_id": 0,
Oct 11 04:31:33 compute-0 pedantic_shamir[116479]:         "osd_uuid": "29ed28f5-c2da-4c6f-bb64-dc7391248f4a",
Oct 11 04:31:33 compute-0 pedantic_shamir[116479]:         "type": "bluestore"
Oct 11 04:31:33 compute-0 pedantic_shamir[116479]:     },
Oct 11 04:31:33 compute-0 pedantic_shamir[116479]:     "30486846-2cc7-4787-8e36-0fb42ef328c5": {
Oct 11 04:31:33 compute-0 pedantic_shamir[116479]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:31:33 compute-0 pedantic_shamir[116479]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:31:33 compute-0 pedantic_shamir[116479]:         "osd_id": 2,
Oct 11 04:31:33 compute-0 pedantic_shamir[116479]:         "osd_uuid": "30486846-2cc7-4787-8e36-0fb42ef328c5",
Oct 11 04:31:33 compute-0 pedantic_shamir[116479]:         "type": "bluestore"
Oct 11 04:31:33 compute-0 pedantic_shamir[116479]:     }
Oct 11 04:31:33 compute-0 pedantic_shamir[116479]: }
Oct 11 04:31:33 compute-0 systemd[1]: libpod-957b86625155e14e61cdca9eacc00af3302f5d513566853e779a2d85f9a0f620.scope: Deactivated successfully.
Oct 11 04:31:33 compute-0 systemd[1]: libpod-957b86625155e14e61cdca9eacc00af3302f5d513566853e779a2d85f9a0f620.scope: Consumed 1.128s CPU time.
Oct 11 04:31:33 compute-0 podman[116459]: 2025-10-11 04:31:33.230131281 +0000 UTC m=+1.291397978 container died 957b86625155e14e61cdca9eacc00af3302f5d513566853e779a2d85f9a0f620 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_shamir, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 11 04:31:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-7c98f411c11b43ecaef3b6f3f32aa5133fe8a207159ea2a61a6e33f6ae21ead0-merged.mount: Deactivated successfully.
Oct 11 04:31:33 compute-0 podman[116459]: 2025-10-11 04:31:33.325773738 +0000 UTC m=+1.387040395 container remove 957b86625155e14e61cdca9eacc00af3302f5d513566853e779a2d85f9a0f620 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_shamir, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 11 04:31:33 compute-0 systemd[1]: libpod-conmon-957b86625155e14e61cdca9eacc00af3302f5d513566853e779a2d85f9a0f620.scope: Deactivated successfully.
Oct 11 04:31:33 compute-0 ceph-mon[74243]: 9.e deep-scrub starts
Oct 11 04:31:33 compute-0 ceph-mon[74243]: 9.e deep-scrub ok
Oct 11 04:31:33 compute-0 ceph-mon[74243]: pgmap v273: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:31:33 compute-0 ceph-mon[74243]: 4.5 scrub starts
Oct 11 04:31:33 compute-0 ceph-mon[74243]: 4.5 scrub ok
Oct 11 04:31:33 compute-0 sudo[116214]: pam_unix(sudo:session): session closed for user root
Oct 11 04:31:33 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:31:33 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:31:33 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:31:33 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:31:33 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev a10a312e-6c81-4578-8d11-75d82dca6cc8 does not exist
Oct 11 04:31:33 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev e853e2ce-d3f1-4f22-8386-21655107c407 does not exist
Oct 11 04:31:33 compute-0 sudo[116605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:31:33 compute-0 sudo[116605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:31:33 compute-0 sudo[116605]: pam_unix(sudo:session): session closed for user root
Oct 11 04:31:33 compute-0 sudo[116630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 04:31:33 compute-0 sudo[116630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:31:33 compute-0 sudo[116630]: pam_unix(sudo:session): session closed for user root
Oct 11 04:31:33 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Oct 11 04:31:33 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Oct 11 04:31:33 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 8.1a scrub starts
Oct 11 04:31:33 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Oct 11 04:31:33 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 8.1a scrub ok
Oct 11 04:31:33 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Oct 11 04:31:34 compute-0 sudo[116559]: pam_unix(sudo:session): session closed for user root
Oct 11 04:31:34 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v274: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:31:34 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:31:34 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:31:34 compute-0 ceph-mon[74243]: 4.7 scrub starts
Oct 11 04:31:34 compute-0 ceph-mon[74243]: 4.7 scrub ok
Oct 11 04:31:34 compute-0 ceph-mon[74243]: 8.1a scrub starts
Oct 11 04:31:34 compute-0 ceph-mon[74243]: 9.6 scrub starts
Oct 11 04:31:34 compute-0 ceph-mon[74243]: 8.1a scrub ok
Oct 11 04:31:34 compute-0 ceph-mon[74243]: 9.6 scrub ok
Oct 11 04:31:34 compute-0 sudo[116805]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rsefxvtbssfllcjlayqokmbtpmbvtylk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157094.2636435-52-136546418493209/AnsiballZ_setup.py'
Oct 11 04:31:34 compute-0 sudo[116805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:31:34 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:31:34 compute-0 python3.9[116807]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 11 04:31:35 compute-0 sudo[116805]: pam_unix(sudo:session): session closed for user root
Oct 11 04:31:35 compute-0 ceph-mon[74243]: pgmap v274: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:31:36 compute-0 sudo[117000]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmynewtiamqqpkuscffjoyutzzyvwyws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157095.4868464-63-123833710926397/AnsiballZ_file.py'
Oct 11 04:31:36 compute-0 sudo[117000]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:31:36 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v275: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:31:36 compute-0 python3.9[117002]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:31:36 compute-0 sudo[117000]: pam_unix(sudo:session): session closed for user root
Oct 11 04:31:36 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Oct 11 04:31:36 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Oct 11 04:31:36 compute-0 sudo[117152]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljtdalzknefdemeyyjhpyeliciatwlcn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157096.450551-71-232066979719349/AnsiballZ_command.py'
Oct 11 04:31:36 compute-0 sudo[117152]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:31:37 compute-0 python3.9[117154]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:31:37 compute-0 sudo[117152]: pam_unix(sudo:session): session closed for user root
Oct 11 04:31:37 compute-0 ceph-mon[74243]: pgmap v275: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:31:37 compute-0 ceph-mon[74243]: 3.12 scrub starts
Oct 11 04:31:37 compute-0 ceph-mon[74243]: 3.12 scrub ok
Oct 11 04:31:37 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Oct 11 04:31:37 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Oct 11 04:31:37 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 9.17 scrub starts
Oct 11 04:31:37 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 8.1f scrub starts
Oct 11 04:31:37 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 9.17 scrub ok
Oct 11 04:31:37 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 8.1f scrub ok
Oct 11 04:31:37 compute-0 sudo[117316]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjmmjbejsvlugijzfxoybrrisrlqxecs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157097.417312-79-234556017575073/AnsiballZ_stat.py'
Oct 11 04:31:37 compute-0 sudo[117316]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:31:38 compute-0 python3.9[117318]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:31:38 compute-0 sudo[117316]: pam_unix(sudo:session): session closed for user root
Oct 11 04:31:38 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v276: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:31:38 compute-0 sudo[117394]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akfyretersjnffvyindbsjsreqwcdnfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157097.417312-79-234556017575073/AnsiballZ_file.py'
Oct 11 04:31:38 compute-0 sudo[117394]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:31:38 compute-0 ceph-mon[74243]: 4.8 scrub starts
Oct 11 04:31:38 compute-0 ceph-mon[74243]: 4.8 scrub ok
Oct 11 04:31:38 compute-0 ceph-mon[74243]: 9.17 scrub starts
Oct 11 04:31:38 compute-0 ceph-mon[74243]: 8.1f scrub starts
Oct 11 04:31:38 compute-0 ceph-mon[74243]: 9.17 scrub ok
Oct 11 04:31:38 compute-0 ceph-mon[74243]: 8.1f scrub ok
Oct 11 04:31:38 compute-0 ceph-mon[74243]: pgmap v276: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:31:38 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 10.f scrub starts
Oct 11 04:31:38 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 10.f scrub ok
Oct 11 04:31:38 compute-0 python3.9[117396]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:31:38 compute-0 sudo[117394]: pam_unix(sudo:session): session closed for user root
Oct 11 04:31:39 compute-0 sudo[117546]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvywafwzeuhrieupjkoomwkjfcfrobmu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157098.767497-91-248455871635561/AnsiballZ_stat.py'
Oct 11 04:31:39 compute-0 sudo[117546]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:31:39 compute-0 python3.9[117548]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:31:39 compute-0 sudo[117546]: pam_unix(sudo:session): session closed for user root
Oct 11 04:31:39 compute-0 ceph-mon[74243]: 10.f scrub starts
Oct 11 04:31:39 compute-0 ceph-mon[74243]: 10.f scrub ok
Oct 11 04:31:39 compute-0 sudo[117624]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhuhryyvldggvhwealmvnuxrtvzfgzej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157098.767497-91-248455871635561/AnsiballZ_file.py'
Oct 11 04:31:39 compute-0 sudo[117624]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:31:39 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 9.f scrub starts
Oct 11 04:31:39 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Oct 11 04:31:39 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Oct 11 04:31:39 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 9.f scrub ok
Oct 11 04:31:39 compute-0 python3.9[117626]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:31:39 compute-0 sudo[117624]: pam_unix(sudo:session): session closed for user root
Oct 11 04:31:39 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:31:40 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v277: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:31:40 compute-0 sudo[117776]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfofignkcsjqatiakiuhhmtpyyufecjo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157100.0138822-104-38112730144290/AnsiballZ_ini_file.py'
Oct 11 04:31:40 compute-0 sudo[117776]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:31:40 compute-0 ceph-mon[74243]: 9.f scrub starts
Oct 11 04:31:40 compute-0 ceph-mon[74243]: 3.15 scrub starts
Oct 11 04:31:40 compute-0 ceph-mon[74243]: 3.15 scrub ok
Oct 11 04:31:40 compute-0 ceph-mon[74243]: 9.f scrub ok
Oct 11 04:31:40 compute-0 ceph-mon[74243]: pgmap v277: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:31:40 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Oct 11 04:31:40 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Oct 11 04:31:40 compute-0 python3.9[117778]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:31:40 compute-0 sudo[117776]: pam_unix(sudo:session): session closed for user root
Oct 11 04:31:40 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 11.19 scrub starts
Oct 11 04:31:40 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 11.19 scrub ok
Oct 11 04:31:41 compute-0 sudo[117928]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aighhwsaddpskkcyhypepxkillzfydhn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157100.8572738-104-183120201958512/AnsiballZ_ini_file.py'
Oct 11 04:31:41 compute-0 sudo[117928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:31:41 compute-0 python3.9[117930]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:31:41 compute-0 sudo[117928]: pam_unix(sudo:session): session closed for user root
Oct 11 04:31:41 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Oct 11 04:31:41 compute-0 ceph-mon[74243]: 2.7 scrub starts
Oct 11 04:31:41 compute-0 ceph-mon[74243]: 2.7 scrub ok
Oct 11 04:31:41 compute-0 ceph-mon[74243]: 11.19 scrub starts
Oct 11 04:31:41 compute-0 ceph-mon[74243]: 11.19 scrub ok
Oct 11 04:31:41 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Oct 11 04:31:41 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 3.9 scrub starts
Oct 11 04:31:41 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 3.9 scrub ok
Oct 11 04:31:41 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 9.7 deep-scrub starts
Oct 11 04:31:41 compute-0 sudo[118080]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evpnbowbzfduvwtepigyofgrpclgnrag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157101.477907-104-61339300150867/AnsiballZ_ini_file.py'
Oct 11 04:31:41 compute-0 sudo[118080]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:31:41 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 9.7 deep-scrub ok
Oct 11 04:31:42 compute-0 python3.9[118082]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:31:42 compute-0 sudo[118080]: pam_unix(sudo:session): session closed for user root
Oct 11 04:31:42 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v278: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:31:42 compute-0 sudo[118232]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rocsmldbzfxiqnecxpfmnrircnikrflu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157102.1796248-104-223194138389899/AnsiballZ_ini_file.py'
Oct 11 04:31:42 compute-0 sudo[118232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:31:42 compute-0 ceph-mon[74243]: 2.6 scrub starts
Oct 11 04:31:42 compute-0 ceph-mon[74243]: 2.6 scrub ok
Oct 11 04:31:42 compute-0 ceph-mon[74243]: 3.9 scrub starts
Oct 11 04:31:42 compute-0 ceph-mon[74243]: 3.9 scrub ok
Oct 11 04:31:42 compute-0 ceph-mon[74243]: 9.7 deep-scrub starts
Oct 11 04:31:42 compute-0 ceph-mon[74243]: 9.7 deep-scrub ok
Oct 11 04:31:42 compute-0 ceph-mon[74243]: pgmap v278: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:31:42 compute-0 python3.9[118234]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:31:42 compute-0 sudo[118232]: pam_unix(sudo:session): session closed for user root
Oct 11 04:31:43 compute-0 sudo[118384]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zruyolnsmdchslkadrtodcslumajokkp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157102.916886-135-257304458445277/AnsiballZ_dnf.py'
Oct 11 04:31:43 compute-0 sudo[118384]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:31:43 compute-0 python3.9[118386]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 11 04:31:44 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v279: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:31:44 compute-0 sudo[118384]: pam_unix(sudo:session): session closed for user root
Oct 11 04:31:44 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Oct 11 04:31:44 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Oct 11 04:31:44 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 9.18 scrub starts
Oct 11 04:31:44 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:31:44 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 9.18 scrub ok
Oct 11 04:31:45 compute-0 ceph-mon[74243]: pgmap v279: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:31:45 compute-0 ceph-mon[74243]: 7.13 scrub starts
Oct 11 04:31:45 compute-0 ceph-mon[74243]: 7.13 scrub ok
Oct 11 04:31:45 compute-0 sudo[118537]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfdwzuxfrxtsyreelzqkemmwiuothjkd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157105.1000626-146-5501952289917/AnsiballZ_setup.py'
Oct 11 04:31:45 compute-0 sudo[118537]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:31:45 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Oct 11 04:31:45 compute-0 python3.9[118539]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 04:31:45 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Oct 11 04:31:45 compute-0 sudo[118537]: pam_unix(sudo:session): session closed for user root
Oct 11 04:31:45 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 9.8 scrub starts
Oct 11 04:31:45 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 9.8 scrub ok
Oct 11 04:31:46 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v280: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:31:46 compute-0 ceph-mon[74243]: 9.18 scrub starts
Oct 11 04:31:46 compute-0 ceph-mon[74243]: 9.18 scrub ok
Oct 11 04:31:46 compute-0 ceph-mon[74243]: 3.17 scrub starts
Oct 11 04:31:46 compute-0 ceph-mon[74243]: 3.17 scrub ok
Oct 11 04:31:46 compute-0 sudo[118691]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aolygcwawwkegilwrdaatkyjosqwhvqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157105.968674-154-97758270696944/AnsiballZ_stat.py'
Oct 11 04:31:46 compute-0 sudo[118691]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:31:46 compute-0 python3.9[118693]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:31:46 compute-0 sudo[118691]: pam_unix(sudo:session): session closed for user root
Oct 11 04:31:47 compute-0 sudo[118843]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kioevwqxquwhaqfkyztadulkpqzteheo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157106.7047572-163-139645785962437/AnsiballZ_stat.py'
Oct 11 04:31:47 compute-0 sudo[118843]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:31:47 compute-0 ceph-mon[74243]: 9.8 scrub starts
Oct 11 04:31:47 compute-0 ceph-mon[74243]: 9.8 scrub ok
Oct 11 04:31:47 compute-0 ceph-mon[74243]: pgmap v280: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:31:47 compute-0 python3.9[118845]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:31:47 compute-0 sudo[118843]: pam_unix(sudo:session): session closed for user root
Oct 11 04:31:47 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Oct 11 04:31:47 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Oct 11 04:31:47 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 9.c scrub starts
Oct 11 04:31:47 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 9.c scrub ok
Oct 11 04:31:48 compute-0 sudo[118995]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmhhxbxqutlljjqomtmhxnvkmjndheqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157107.529479-173-271853032072438/AnsiballZ_service_facts.py'
Oct 11 04:31:48 compute-0 sudo[118995]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:31:48 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v281: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:31:48 compute-0 ceph-mon[74243]: 5.1 scrub starts
Oct 11 04:31:48 compute-0 ceph-mon[74243]: 5.1 scrub ok
Oct 11 04:31:48 compute-0 python3.9[118997]: ansible-service_facts Invoked
Oct 11 04:31:48 compute-0 network[119014]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 11 04:31:48 compute-0 network[119015]: 'network-scripts' will be removed from distribution in near future.
Oct 11 04:31:48 compute-0 network[119016]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 11 04:31:48 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 8.18 scrub starts
Oct 11 04:31:48 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 8.18 scrub ok
Oct 11 04:31:49 compute-0 ceph-mon[74243]: 9.c scrub starts
Oct 11 04:31:49 compute-0 ceph-mon[74243]: 9.c scrub ok
Oct 11 04:31:49 compute-0 ceph-mon[74243]: pgmap v281: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:31:49 compute-0 ceph-mon[74243]: 8.18 scrub starts
Oct 11 04:31:49 compute-0 ceph-mon[74243]: 8.18 scrub ok
Oct 11 04:31:49 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 10.11 scrub starts
Oct 11 04:31:49 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 10.11 scrub ok
Oct 11 04:31:49 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:31:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 6.f scrub starts
Oct 11 04:31:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 6.f scrub ok
Oct 11 04:31:50 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v282: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:31:50 compute-0 ceph-mon[74243]: 10.11 scrub starts
Oct 11 04:31:50 compute-0 ceph-mon[74243]: 10.11 scrub ok
Oct 11 04:31:50 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 10.10 scrub starts
Oct 11 04:31:50 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 10.10 scrub ok
Oct 11 04:31:50 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 8.1d scrub starts
Oct 11 04:31:50 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 8.1d scrub ok
Oct 11 04:31:51 compute-0 ceph-mon[74243]: 6.f scrub starts
Oct 11 04:31:51 compute-0 ceph-mon[74243]: 6.f scrub ok
Oct 11 04:31:51 compute-0 ceph-mon[74243]: pgmap v282: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:31:51 compute-0 ceph-mon[74243]: 10.10 scrub starts
Oct 11 04:31:51 compute-0 ceph-mon[74243]: 10.10 scrub ok
Oct 11 04:31:51 compute-0 ceph-mon[74243]: 8.1d scrub starts
Oct 11 04:31:51 compute-0 ceph-mon[74243]: 8.1d scrub ok
Oct 11 04:31:51 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Oct 11 04:31:51 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Oct 11 04:31:51 compute-0 sudo[118995]: pam_unix(sudo:session): session closed for user root
Oct 11 04:31:52 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v283: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:31:52 compute-0 ceph-mon[74243]: 2.1b scrub starts
Oct 11 04:31:52 compute-0 ceph-mon[74243]: 2.1b scrub ok
Oct 11 04:31:52 compute-0 sudo[119302]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqgagmnzpqpavogntvrspuhnhygelvcx ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1760157112.4117723-186-241060080537066/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1760157112.4117723-186-241060080537066/args'
Oct 11 04:31:52 compute-0 sudo[119302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:31:52 compute-0 sudo[119302]: pam_unix(sudo:session): session closed for user root
Oct 11 04:31:53 compute-0 ceph-mon[74243]: pgmap v283: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:31:53 compute-0 sudo[119469]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hfelnncsxjpzniwztrhjrlnduhdbqwzw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157113.2709506-197-31297096231859/AnsiballZ_dnf.py'
Oct 11 04:31:53 compute-0 sudo[119469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:31:53 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 9.1b scrub starts
Oct 11 04:31:53 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 9.1b scrub ok
Oct 11 04:31:53 compute-0 python3.9[119471]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 11 04:31:54 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v284: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:31:54 compute-0 ceph-mon[74243]: 9.1b scrub starts
Oct 11 04:31:54 compute-0 ceph-mon[74243]: 9.1b scrub ok
Oct 11 04:31:54 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:31:55 compute-0 sudo[119469]: pam_unix(sudo:session): session closed for user root
Oct 11 04:31:55 compute-0 ceph-mon[74243]: pgmap v284: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:31:55 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 9.13 scrub starts
Oct 11 04:31:56 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 9.13 scrub ok
Oct 11 04:31:56 compute-0 ceph-mgr[74542]: [balancer INFO root] Optimize plan auto_2025-10-11_04:31:56
Oct 11 04:31:56 compute-0 ceph-mgr[74542]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:31:56 compute-0 ceph-mgr[74542]: [balancer INFO root] do_upmap
Oct 11 04:31:56 compute-0 ceph-mgr[74542]: [balancer INFO root] pools ['volumes', '.rgw.root', 'vms', 'default.rgw.control', 'backups', 'default.rgw.log', 'images', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.meta', 'cephfs.cephfs.data']
Oct 11 04:31:56 compute-0 ceph-mgr[74542]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:31:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:31:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:31:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:31:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:31:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:31:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:31:56 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v285: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:31:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:31:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:31:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:31:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:31:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:31:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:31:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:31:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:31:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:31:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:31:56 compute-0 sudo[119622]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekcgflnqowweddtnwpeufebckqvahsvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157115.4926357-210-196657692909273/AnsiballZ_package_facts.py'
Oct 11 04:31:56 compute-0 sudo[119622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:31:56 compute-0 python3.9[119624]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Oct 11 04:31:56 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 9.3 scrub starts
Oct 11 04:31:56 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 9.3 scrub ok
Oct 11 04:31:56 compute-0 sudo[119622]: pam_unix(sudo:session): session closed for user root
Oct 11 04:31:57 compute-0 ceph-mon[74243]: 9.13 scrub starts
Oct 11 04:31:57 compute-0 ceph-mon[74243]: 9.13 scrub ok
Oct 11 04:31:57 compute-0 ceph-mon[74243]: pgmap v285: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:31:57 compute-0 ceph-mon[74243]: 9.3 scrub starts
Oct 11 04:31:57 compute-0 ceph-mon[74243]: 9.3 scrub ok
Oct 11 04:31:57 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 10.12 deep-scrub starts
Oct 11 04:31:57 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 10.12 deep-scrub ok
Oct 11 04:31:57 compute-0 sudo[119774]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jpdrmokyzbajdezrohctsfweoufqcphx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157117.145045-220-166431609357860/AnsiballZ_stat.py'
Oct 11 04:31:57 compute-0 sudo[119774]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:31:57 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 9.d scrub starts
Oct 11 04:31:57 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 9.d scrub ok
Oct 11 04:31:57 compute-0 python3.9[119776]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:31:57 compute-0 sudo[119774]: pam_unix(sudo:session): session closed for user root
Oct 11 04:31:58 compute-0 sudo[119852]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfiddrvpiuvhhteesirhcpzvcpotoxtz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157117.145045-220-166431609357860/AnsiballZ_file.py'
Oct 11 04:31:58 compute-0 sudo[119852]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:31:58 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v286: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:31:58 compute-0 ceph-mon[74243]: 10.12 deep-scrub starts
Oct 11 04:31:58 compute-0 ceph-mon[74243]: 10.12 deep-scrub ok
Oct 11 04:31:58 compute-0 ceph-mon[74243]: 9.d scrub starts
Oct 11 04:31:58 compute-0 ceph-mon[74243]: 9.d scrub ok
Oct 11 04:31:58 compute-0 python3.9[119854]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:31:58 compute-0 sudo[119852]: pam_unix(sudo:session): session closed for user root
Oct 11 04:31:58 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Oct 11 04:31:58 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Oct 11 04:31:58 compute-0 sudo[120004]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjsrgjtjybidubjmifwpxgvtlfgtxgse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157118.5542188-232-89456237461721/AnsiballZ_stat.py'
Oct 11 04:31:58 compute-0 sudo[120004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:31:59 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 9.19 scrub starts
Oct 11 04:31:59 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 9.19 scrub ok
Oct 11 04:31:59 compute-0 python3.9[120006]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:31:59 compute-0 sudo[120004]: pam_unix(sudo:session): session closed for user root
Oct 11 04:31:59 compute-0 ceph-mon[74243]: pgmap v286: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:31:59 compute-0 ceph-mon[74243]: 5.1d scrub starts
Oct 11 04:31:59 compute-0 ceph-mon[74243]: 5.1d scrub ok
Oct 11 04:31:59 compute-0 sudo[120082]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewigrkeqgbnmstutzhwocohvzwflqcus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157118.5542188-232-89456237461721/AnsiballZ_file.py'
Oct 11 04:31:59 compute-0 sudo[120082]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:31:59 compute-0 python3.9[120084]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:31:59 compute-0 sudo[120082]: pam_unix(sudo:session): session closed for user root
Oct 11 04:31:59 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:32:00 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v287: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:32:00 compute-0 ceph-mon[74243]: 9.19 scrub starts
Oct 11 04:32:00 compute-0 ceph-mon[74243]: 9.19 scrub ok
Oct 11 04:32:00 compute-0 sudo[120234]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqdabglwkwyxtoibzpknglqdxgnidgsi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157120.1328526-250-203841081186347/AnsiballZ_lineinfile.py'
Oct 11 04:32:00 compute-0 sudo[120234]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:32:00 compute-0 python3.9[120236]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:32:00 compute-0 sudo[120234]: pam_unix(sudo:session): session closed for user root
Oct 11 04:32:01 compute-0 ceph-mon[74243]: pgmap v287: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:32:01 compute-0 sudo[120386]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fynpxcearawplgodcpmfwoqlspnjowwh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157121.3691435-265-136651158386279/AnsiballZ_setup.py'
Oct 11 04:32:01 compute-0 sudo[120386]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:32:01 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 9.5 scrub starts
Oct 11 04:32:01 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 9.5 scrub ok
Oct 11 04:32:01 compute-0 python3.9[120388]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 11 04:32:02 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v288: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:32:02 compute-0 sudo[120386]: pam_unix(sudo:session): session closed for user root
Oct 11 04:32:02 compute-0 ceph-mon[74243]: 9.5 scrub starts
Oct 11 04:32:02 compute-0 ceph-mon[74243]: 9.5 scrub ok
Oct 11 04:32:02 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Oct 11 04:32:02 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Oct 11 04:32:02 compute-0 sudo[120470]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qoskotecmbjkkqqhxpprbepufbjregjn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157121.3691435-265-136651158386279/AnsiballZ_systemd.py'
Oct 11 04:32:02 compute-0 sudo[120470]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:32:03 compute-0 python3.9[120472]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:32:03 compute-0 sudo[120470]: pam_unix(sudo:session): session closed for user root
Oct 11 04:32:03 compute-0 ceph-mon[74243]: pgmap v288: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:32:03 compute-0 ceph-mon[74243]: 5.1a scrub starts
Oct 11 04:32:03 compute-0 ceph-mon[74243]: 5.1a scrub ok
Oct 11 04:32:03 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 10.13 scrub starts
Oct 11 04:32:03 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 10.13 scrub ok
Oct 11 04:32:03 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 9.b scrub starts
Oct 11 04:32:03 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 9.b scrub ok
Oct 11 04:32:03 compute-0 sshd-session[115598]: Connection closed by 192.168.122.30 port 33318
Oct 11 04:32:03 compute-0 sshd-session[115582]: pam_unix(sshd:session): session closed for user zuul
Oct 11 04:32:03 compute-0 systemd[1]: session-39.scope: Deactivated successfully.
Oct 11 04:32:03 compute-0 systemd[1]: session-39.scope: Consumed 25.677s CPU time.
Oct 11 04:32:03 compute-0 systemd-logind[801]: Session 39 logged out. Waiting for processes to exit.
Oct 11 04:32:03 compute-0 systemd-logind[801]: Removed session 39.
Oct 11 04:32:04 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v289: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:32:04 compute-0 ceph-mon[74243]: 10.13 scrub starts
Oct 11 04:32:04 compute-0 ceph-mon[74243]: 10.13 scrub ok
Oct 11 04:32:04 compute-0 ceph-mon[74243]: 9.b scrub starts
Oct 11 04:32:04 compute-0 ceph-mon[74243]: 9.b scrub ok
Oct 11 04:32:04 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 10.14 scrub starts
Oct 11 04:32:04 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 10.14 scrub ok
Oct 11 04:32:04 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:32:05 compute-0 ceph-mon[74243]: pgmap v289: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:32:05 compute-0 ceph-mon[74243]: 10.14 scrub starts
Oct 11 04:32:05 compute-0 ceph-mon[74243]: 10.14 scrub ok
Oct 11 04:32:05 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 5.18 deep-scrub starts
Oct 11 04:32:05 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 5.18 deep-scrub ok
Oct 11 04:32:05 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 9.11 scrub starts
Oct 11 04:32:05 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 9.11 scrub ok
Oct 11 04:32:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:32:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:32:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:32:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:32:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:32:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:32:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:32:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:32:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:32:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:32:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:32:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:32:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 11 04:32:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:32:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:32:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:32:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:32:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:32:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:32:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:32:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:32:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:32:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 04:32:06 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v290: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:32:06 compute-0 ceph-mon[74243]: 5.18 deep-scrub starts
Oct 11 04:32:06 compute-0 ceph-mon[74243]: 5.18 deep-scrub ok
Oct 11 04:32:06 compute-0 ceph-mon[74243]: 9.11 scrub starts
Oct 11 04:32:06 compute-0 ceph-mon[74243]: 9.11 scrub ok
Oct 11 04:32:07 compute-0 ceph-mon[74243]: pgmap v290: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:32:07 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Oct 11 04:32:07 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Oct 11 04:32:08 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v291: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:32:08 compute-0 ceph-mon[74243]: 5.19 scrub starts
Oct 11 04:32:08 compute-0 ceph-mon[74243]: 5.19 scrub ok
Oct 11 04:32:08 compute-0 ceph-mon[74243]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Oct 11 04:32:08 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:32:08.423062) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 04:32:08 compute-0 ceph-mon[74243]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Oct 11 04:32:08 compute-0 ceph-mon[74243]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760157128423205, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7216, "num_deletes": 251, "total_data_size": 9274632, "memory_usage": 9545464, "flush_reason": "Manual Compaction"}
Oct 11 04:32:08 compute-0 ceph-mon[74243]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Oct 11 04:32:08 compute-0 ceph-mon[74243]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760157128470869, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 7438269, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 132, "largest_seqno": 7345, "table_properties": {"data_size": 7411664, "index_size": 17274, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8261, "raw_key_size": 76505, "raw_average_key_size": 23, "raw_value_size": 7348696, "raw_average_value_size": 2240, "num_data_blocks": 759, "num_entries": 3280, "num_filter_entries": 3280, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760156709, "oldest_key_time": 1760156709, "file_creation_time": 1760157128, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7a520806-b9ee-4391-a2e1-17ca2b78e946", "db_session_id": "LQX577T3ABHDCRGGY4EM", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:32:08 compute-0 ceph-mon[74243]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 47886 microseconds, and 25770 cpu microseconds.
Oct 11 04:32:08 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:32:08.470957) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 7438269 bytes OK
Oct 11 04:32:08 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:32:08.470984) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Oct 11 04:32:08 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:32:08.472776) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Oct 11 04:32:08 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:32:08.472803) EVENT_LOG_v1 {"time_micros": 1760157128472794, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Oct 11 04:32:08 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:32:08.472833) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Oct 11 04:32:08 compute-0 ceph-mon[74243]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 9243153, prev total WAL file size 9243153, number of live WAL files 2.
Oct 11 04:32:08 compute-0 ceph-mon[74243]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:32:08 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:32:08.476531) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Oct 11 04:32:08 compute-0 ceph-mon[74243]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Oct 11 04:32:08 compute-0 ceph-mon[74243]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(7263KB) 13(50KB) 8(1944B)]
Oct 11 04:32:08 compute-0 ceph-mon[74243]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760157128476666, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 7492003, "oldest_snapshot_seqno": -1}
Oct 11 04:32:08 compute-0 ceph-mon[74243]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 3092 keys, 7449101 bytes, temperature: kUnknown
Oct 11 04:32:08 compute-0 ceph-mon[74243]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760157128517050, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 7449101, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7422978, "index_size": 17309, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7749, "raw_key_size": 74482, "raw_average_key_size": 24, "raw_value_size": 7361677, "raw_average_value_size": 2380, "num_data_blocks": 762, "num_entries": 3092, "num_filter_entries": 3092, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760156706, "oldest_key_time": 0, "file_creation_time": 1760157128, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7a520806-b9ee-4391-a2e1-17ca2b78e946", "db_session_id": "LQX577T3ABHDCRGGY4EM", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:32:08 compute-0 ceph-mon[74243]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 04:32:08 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:32:08.517319) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 7449101 bytes
Oct 11 04:32:08 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:32:08.520991) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 185.2 rd, 184.1 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(7.1, 0.0 +0.0 blob) out(7.1 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3381, records dropped: 289 output_compression: NoCompression
Oct 11 04:32:08 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:32:08.521022) EVENT_LOG_v1 {"time_micros": 1760157128521007, "job": 4, "event": "compaction_finished", "compaction_time_micros": 40459, "compaction_time_cpu_micros": 17725, "output_level": 6, "num_output_files": 1, "total_output_size": 7449101, "num_input_records": 3381, "num_output_records": 3092, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 04:32:08 compute-0 ceph-mon[74243]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:32:08 compute-0 ceph-mon[74243]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760157128523720, "job": 4, "event": "table_file_deletion", "file_number": 19}
Oct 11 04:32:08 compute-0 ceph-mon[74243]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:32:08 compute-0 ceph-mon[74243]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760157128523834, "job": 4, "event": "table_file_deletion", "file_number": 13}
Oct 11 04:32:08 compute-0 ceph-mon[74243]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:32:08 compute-0 ceph-mon[74243]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760157128523889, "job": 4, "event": "table_file_deletion", "file_number": 8}
Oct 11 04:32:08 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:32:08.476379) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:32:09 compute-0 sshd-session[120500]: Accepted publickey for zuul from 192.168.122.30 port 47916 ssh2: ECDSA SHA256:fsXfQ2jtOYwhi5zevC2AykH2PVPp1BPcbFOVNPoZIaQ
Oct 11 04:32:09 compute-0 systemd-logind[801]: New session 40 of user zuul.
Oct 11 04:32:09 compute-0 systemd[1]: Started Session 40 of User zuul.
Oct 11 04:32:09 compute-0 sshd-session[120500]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 11 04:32:09 compute-0 ceph-mon[74243]: pgmap v291: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:32:09 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Oct 11 04:32:09 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Oct 11 04:32:09 compute-0 sudo[120653]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcraimarhligqrubsccqqplvplkmvnzf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157129.2432103-22-100360057783038/AnsiballZ_file.py'
Oct 11 04:32:09 compute-0 sudo[120653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:32:09 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:32:09 compute-0 python3.9[120655]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:32:10 compute-0 sudo[120653]: pam_unix(sudo:session): session closed for user root
Oct 11 04:32:10 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v292: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:32:10 compute-0 ceph-mon[74243]: 6.3 scrub starts
Oct 11 04:32:10 compute-0 ceph-mon[74243]: 6.3 scrub ok
Oct 11 04:32:10 compute-0 sudo[120805]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvuvxbpprclgvgncbxlqrkbjutitdnrx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157130.1776807-34-206354773652147/AnsiballZ_stat.py'
Oct 11 04:32:10 compute-0 sudo[120805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:32:10 compute-0 python3.9[120807]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:32:10 compute-0 sudo[120805]: pam_unix(sudo:session): session closed for user root
Oct 11 04:32:11 compute-0 sudo[120883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqbemczgrorsjtpufkytjpmhtgshhicd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157130.1776807-34-206354773652147/AnsiballZ_file.py'
Oct 11 04:32:11 compute-0 sudo[120883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:32:11 compute-0 python3.9[120885]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:32:11 compute-0 sudo[120883]: pam_unix(sudo:session): session closed for user root
Oct 11 04:32:11 compute-0 ceph-mon[74243]: pgmap v292: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:32:11 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 9.9 scrub starts
Oct 11 04:32:11 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 9.9 scrub ok
Oct 11 04:32:11 compute-0 sshd-session[120503]: Connection closed by 192.168.122.30 port 47916
Oct 11 04:32:11 compute-0 sshd-session[120500]: pam_unix(sshd:session): session closed for user zuul
Oct 11 04:32:11 compute-0 systemd[1]: session-40.scope: Deactivated successfully.
Oct 11 04:32:11 compute-0 systemd[1]: session-40.scope: Consumed 1.551s CPU time.
Oct 11 04:32:11 compute-0 systemd-logind[801]: Session 40 logged out. Waiting for processes to exit.
Oct 11 04:32:11 compute-0 systemd-logind[801]: Removed session 40.
Oct 11 04:32:12 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v293: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:32:12 compute-0 ceph-mon[74243]: 9.9 scrub starts
Oct 11 04:32:12 compute-0 ceph-mon[74243]: 9.9 scrub ok
Oct 11 04:32:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Oct 11 04:32:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Oct 11 04:32:12 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Oct 11 04:32:12 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Oct 11 04:32:13 compute-0 ceph-mon[74243]: pgmap v293: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:32:13 compute-0 ceph-mon[74243]: 6.2 scrub starts
Oct 11 04:32:13 compute-0 ceph-mon[74243]: 6.2 scrub ok
Oct 11 04:32:13 compute-0 ceph-mon[74243]: 6.7 scrub starts
Oct 11 04:32:13 compute-0 ceph-mon[74243]: 6.7 scrub ok
Oct 11 04:32:13 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 9.1d scrub starts
Oct 11 04:32:13 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 9.1d scrub ok
Oct 11 04:32:14 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v294: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:32:14 compute-0 ceph-mon[74243]: 9.1d scrub starts
Oct 11 04:32:14 compute-0 ceph-mon[74243]: 9.1d scrub ok
Oct 11 04:32:14 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:32:15 compute-0 ceph-mon[74243]: pgmap v294: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:32:16 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v295: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:32:16 compute-0 ceph-mon[74243]: pgmap v295: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:32:17 compute-0 sshd-session[120910]: Accepted publickey for zuul from 192.168.122.30 port 58194 ssh2: ECDSA SHA256:fsXfQ2jtOYwhi5zevC2AykH2PVPp1BPcbFOVNPoZIaQ
Oct 11 04:32:17 compute-0 systemd-logind[801]: New session 41 of user zuul.
Oct 11 04:32:17 compute-0 systemd[1]: Started Session 41 of User zuul.
Oct 11 04:32:17 compute-0 sshd-session[120910]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 11 04:32:17 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Oct 11 04:32:17 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Oct 11 04:32:18 compute-0 ceph-mon[74243]: 6.6 scrub starts
Oct 11 04:32:18 compute-0 ceph-mon[74243]: 6.6 scrub ok
Oct 11 04:32:18 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v296: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:32:18 compute-0 python3.9[121063]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 04:32:19 compute-0 ceph-mon[74243]: pgmap v296: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:32:19 compute-0 sudo[121217]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjfoogzscbjuynciijguizshrhxotinf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157138.7696059-33-76065270931451/AnsiballZ_file.py'
Oct 11 04:32:19 compute-0 sudo[121217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:32:19 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 6.e scrub starts
Oct 11 04:32:19 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 6.e scrub ok
Oct 11 04:32:19 compute-0 python3.9[121219]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:32:19 compute-0 sudo[121217]: pam_unix(sudo:session): session closed for user root
Oct 11 04:32:19 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:32:20 compute-0 ceph-mon[74243]: 6.e scrub starts
Oct 11 04:32:20 compute-0 ceph-mon[74243]: 6.e scrub ok
Oct 11 04:32:20 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v297: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:32:20 compute-0 sudo[121392]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmrvxndawztbrhhjwwfmgggfdfreghhd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157139.8139188-41-277895204373791/AnsiballZ_stat.py'
Oct 11 04:32:20 compute-0 sudo[121392]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:32:20 compute-0 python3.9[121394]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:32:20 compute-0 sudo[121392]: pam_unix(sudo:session): session closed for user root
Oct 11 04:32:20 compute-0 sudo[121470]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nljnliokjryvdzwwqbvnkftjobgsuzzf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157139.8139188-41-277895204373791/AnsiballZ_file.py'
Oct 11 04:32:20 compute-0 sudo[121470]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:32:21 compute-0 ceph-mon[74243]: pgmap v297: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:32:21 compute-0 python3.9[121472]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.f6za51ti recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:32:21 compute-0 sudo[121470]: pam_unix(sudo:session): session closed for user root
Oct 11 04:32:21 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 6.c scrub starts
Oct 11 04:32:21 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 6.c scrub ok
Oct 11 04:32:21 compute-0 sudo[121622]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chbpstowtbojvoajwcgjhxosnvattbfp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157141.485592-61-271334655147076/AnsiballZ_stat.py'
Oct 11 04:32:21 compute-0 sudo[121622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:32:22 compute-0 python3.9[121624]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:32:22 compute-0 ceph-mon[74243]: 6.c scrub starts
Oct 11 04:32:22 compute-0 ceph-mon[74243]: 6.c scrub ok
Oct 11 04:32:22 compute-0 sudo[121622]: pam_unix(sudo:session): session closed for user root
Oct 11 04:32:22 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v298: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:32:22 compute-0 sudo[121700]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwumpcsrlfmwdeihiptjmgwtpnvlfwqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157141.485592-61-271334655147076/AnsiballZ_file.py'
Oct 11 04:32:22 compute-0 sudo[121700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:32:22 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Oct 11 04:32:22 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Oct 11 04:32:22 compute-0 python3.9[121702]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.7hmpjiag recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:32:22 compute-0 sudo[121700]: pam_unix(sudo:session): session closed for user root
Oct 11 04:32:23 compute-0 ceph-mon[74243]: pgmap v298: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:32:23 compute-0 ceph-mon[74243]: 6.4 scrub starts
Oct 11 04:32:23 compute-0 ceph-mon[74243]: 6.4 scrub ok
Oct 11 04:32:23 compute-0 sudo[121852]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqelihebtnlxzxfogqwjpkyddldvlntl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157142.8744597-74-56209246471995/AnsiballZ_file.py'
Oct 11 04:32:23 compute-0 sudo[121852]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:32:23 compute-0 python3.9[121854]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:32:23 compute-0 sudo[121852]: pam_unix(sudo:session): session closed for user root
Oct 11 04:32:24 compute-0 sudo[122004]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kiicsopcjxdtvmcdgmppreesihsfoobw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157143.6965733-82-179498057724638/AnsiballZ_stat.py'
Oct 11 04:32:24 compute-0 sudo[122004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:32:24 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v299: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:32:24 compute-0 python3.9[122006]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:32:24 compute-0 sudo[122004]: pam_unix(sudo:session): session closed for user root
Oct 11 04:32:24 compute-0 sudo[122082]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jpalsjygxbcdkhbnwovrxtcbfxelmvep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157143.6965733-82-179498057724638/AnsiballZ_file.py'
Oct 11 04:32:24 compute-0 sudo[122082]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:32:24 compute-0 python3.9[122084]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:32:24 compute-0 sudo[122082]: pam_unix(sudo:session): session closed for user root
Oct 11 04:32:24 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:32:25 compute-0 ceph-mon[74243]: pgmap v299: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:32:25 compute-0 sudo[122234]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mazqqewxdgzsmwnciahcyoghsuaywbjj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157144.9852033-82-189221611463842/AnsiballZ_stat.py'
Oct 11 04:32:25 compute-0 sudo[122234]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:32:25 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 6.b scrub starts
Oct 11 04:32:25 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 6.b scrub ok
Oct 11 04:32:25 compute-0 python3.9[122236]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:32:25 compute-0 sudo[122234]: pam_unix(sudo:session): session closed for user root
Oct 11 04:32:25 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 9.1 scrub starts
Oct 11 04:32:25 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 9.1 scrub ok
Oct 11 04:32:25 compute-0 sudo[122312]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oksyakohhoplqeahmdmzkbgtpgydbbbm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157144.9852033-82-189221611463842/AnsiballZ_file.py'
Oct 11 04:32:25 compute-0 sudo[122312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:32:26 compute-0 python3.9[122314]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:32:26 compute-0 sudo[122312]: pam_unix(sudo:session): session closed for user root
Oct 11 04:32:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:32:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:32:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:32:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:32:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:32:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:32:26 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v300: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:32:26 compute-0 ceph-mon[74243]: 6.b scrub starts
Oct 11 04:32:26 compute-0 ceph-mon[74243]: 6.b scrub ok
Oct 11 04:32:26 compute-0 ceph-mon[74243]: 9.1 scrub starts
Oct 11 04:32:26 compute-0 ceph-mon[74243]: 9.1 scrub ok
Oct 11 04:32:26 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 6.d scrub starts
Oct 11 04:32:26 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 6.d scrub ok
Oct 11 04:32:26 compute-0 sudo[122464]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmmkkjjjandqhsghcjlekgconoqwkotg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157146.2688982-105-197602245714431/AnsiballZ_file.py'
Oct 11 04:32:26 compute-0 sudo[122464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:32:26 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Oct 11 04:32:26 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Oct 11 04:32:26 compute-0 python3.9[122466]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:32:26 compute-0 sudo[122464]: pam_unix(sudo:session): session closed for user root
Oct 11 04:32:27 compute-0 ceph-mon[74243]: pgmap v300: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:32:27 compute-0 ceph-mon[74243]: 6.d scrub starts
Oct 11 04:32:27 compute-0 ceph-mon[74243]: 6.d scrub ok
Oct 11 04:32:27 compute-0 ceph-mon[74243]: 6.5 scrub starts
Oct 11 04:32:27 compute-0 ceph-mon[74243]: 6.5 scrub ok
Oct 11 04:32:27 compute-0 sudo[122616]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzlyaeavvtulauuxaugnwggedlgvfgcy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157146.9495857-113-236837803631500/AnsiballZ_stat.py'
Oct 11 04:32:27 compute-0 sudo[122616]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:32:27 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Oct 11 04:32:27 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Oct 11 04:32:27 compute-0 python3.9[122618]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:32:27 compute-0 sudo[122616]: pam_unix(sudo:session): session closed for user root
Oct 11 04:32:27 compute-0 sudo[122694]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sedkhshgutsxviykhmkmcqyvyvdqlwkh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157146.9495857-113-236837803631500/AnsiballZ_file.py'
Oct 11 04:32:27 compute-0 sudo[122694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:32:28 compute-0 python3.9[122696]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:32:28 compute-0 sudo[122694]: pam_unix(sudo:session): session closed for user root
Oct 11 04:32:28 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v301: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:32:28 compute-0 ceph-mon[74243]: 9.15 scrub starts
Oct 11 04:32:28 compute-0 ceph-mon[74243]: 9.15 scrub ok
Oct 11 04:32:28 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 9.1f scrub starts
Oct 11 04:32:28 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 9.1f scrub ok
Oct 11 04:32:28 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Oct 11 04:32:28 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Oct 11 04:32:28 compute-0 sudo[122846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmuyrnhvujnxsaeactunubwrywhgllmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157148.2230325-125-58889300148889/AnsiballZ_stat.py'
Oct 11 04:32:28 compute-0 sudo[122846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:32:28 compute-0 python3.9[122848]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:32:29 compute-0 sudo[122846]: pam_unix(sudo:session): session closed for user root
Oct 11 04:32:29 compute-0 sudo[122924]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqkkfurzdrcydkvetrilygqxdwxvzhcz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157148.2230325-125-58889300148889/AnsiballZ_file.py'
Oct 11 04:32:29 compute-0 sudo[122924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:32:29 compute-0 ceph-mon[74243]: pgmap v301: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:32:29 compute-0 ceph-mon[74243]: 9.1f scrub starts
Oct 11 04:32:29 compute-0 ceph-mon[74243]: 9.1f scrub ok
Oct 11 04:32:29 compute-0 ceph-mon[74243]: 6.9 scrub starts
Oct 11 04:32:29 compute-0 ceph-mon[74243]: 6.9 scrub ok
Oct 11 04:32:29 compute-0 python3.9[122926]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:32:29 compute-0 sudo[122924]: pam_unix(sudo:session): session closed for user root
Oct 11 04:32:29 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:32:30 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v302: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:32:30 compute-0 sudo[123076]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yaxjyozzrkwnhxcnseztccwhmrtelwwz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157149.6620252-137-85322749875546/AnsiballZ_systemd.py'
Oct 11 04:32:30 compute-0 sudo[123076]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:32:30 compute-0 python3.9[123078]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:32:30 compute-0 systemd[1]: Reloading.
Oct 11 04:32:30 compute-0 systemd-sysv-generator[123109]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:32:30 compute-0 systemd-rc-local-generator[123105]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:32:31 compute-0 sudo[123076]: pam_unix(sudo:session): session closed for user root
Oct 11 04:32:31 compute-0 ceph-mon[74243]: pgmap v302: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:32:31 compute-0 sudo[123265]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dugqapggrulvnzyvziitxurkxmtcqyoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157151.2469041-145-160001178145334/AnsiballZ_stat.py'
Oct 11 04:32:31 compute-0 sudo[123265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:32:31 compute-0 python3.9[123267]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:32:31 compute-0 sudo[123265]: pam_unix(sudo:session): session closed for user root
Oct 11 04:32:32 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v303: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:32:32 compute-0 sudo[123343]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfpejymwxeuvdztuwivybahgvqkokwxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157151.2469041-145-160001178145334/AnsiballZ_file.py'
Oct 11 04:32:32 compute-0 sudo[123343]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:32:32 compute-0 python3.9[123345]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:32:32 compute-0 sudo[123343]: pam_unix(sudo:session): session closed for user root
Oct 11 04:32:33 compute-0 sudo[123495]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swmjpchmuzvtvuwjoohhwfpldvksurbl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157152.6113591-157-35067303536616/AnsiballZ_stat.py'
Oct 11 04:32:33 compute-0 sudo[123495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:32:33 compute-0 ceph-mon[74243]: pgmap v303: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:32:33 compute-0 python3.9[123497]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:32:33 compute-0 sudo[123495]: pam_unix(sudo:session): session closed for user root
Oct 11 04:32:33 compute-0 sudo[123560]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:32:33 compute-0 sudo[123587]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqtkwkgcrqdnvmeogdvombfcklqvlguc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157152.6113591-157-35067303536616/AnsiballZ_file.py'
Oct 11 04:32:33 compute-0 sudo[123560]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:32:33 compute-0 sudo[123587]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:32:33 compute-0 sudo[123560]: pam_unix(sudo:session): session closed for user root
Oct 11 04:32:33 compute-0 sudo[123601]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:32:33 compute-0 sudo[123601]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:32:33 compute-0 sudo[123601]: pam_unix(sudo:session): session closed for user root
Oct 11 04:32:33 compute-0 sudo[123626]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:32:33 compute-0 sudo[123626]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:32:33 compute-0 sudo[123626]: pam_unix(sudo:session): session closed for user root
Oct 11 04:32:33 compute-0 python3.9[123600]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:32:33 compute-0 sudo[123651]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 04:32:33 compute-0 sudo[123651]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:32:33 compute-0 sudo[123587]: pam_unix(sudo:session): session closed for user root
Oct 11 04:32:34 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v304: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:32:34 compute-0 sudo[123651]: pam_unix(sudo:session): session closed for user root
Oct 11 04:32:34 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:32:34 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:32:34 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:32:34 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:32:34 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:32:34 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:32:34 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev b3d7c4ed-e079-4ba0-8486-4e25f28ee42f does not exist
Oct 11 04:32:34 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev 27d34d7b-cab5-4120-b41e-acb779e907c4 does not exist
Oct 11 04:32:34 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev 0772c065-e083-4608-b52f-60430d9a9a4d does not exist
Oct 11 04:32:34 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:32:34 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:32:34 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:32:34 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:32:34 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:32:34 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:32:34 compute-0 sudo[123857]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnvsirghzbdtdgsrcfbtwcdwicsvasea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157154.0878382-169-184305536479186/AnsiballZ_systemd.py'
Oct 11 04:32:34 compute-0 sudo[123857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:32:34 compute-0 sudo[123856]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:32:34 compute-0 sudo[123856]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:32:34 compute-0 sudo[123856]: pam_unix(sudo:session): session closed for user root
Oct 11 04:32:34 compute-0 sudo[123884]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:32:34 compute-0 sudo[123884]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:32:34 compute-0 sudo[123884]: pam_unix(sudo:session): session closed for user root
Oct 11 04:32:34 compute-0 sudo[123909]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:32:34 compute-0 sudo[123909]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:32:34 compute-0 sudo[123909]: pam_unix(sudo:session): session closed for user root
Oct 11 04:32:34 compute-0 sudo[123934]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 04:32:34 compute-0 sudo[123934]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:32:34 compute-0 python3.9[123872]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:32:34 compute-0 systemd[1]: Reloading.
Oct 11 04:32:34 compute-0 systemd-sysv-generator[124011]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:32:34 compute-0 systemd-rc-local-generator[124005]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:32:34 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:32:34 compute-0 podman[124030]: 2025-10-11 04:32:34.884529683 +0000 UTC m=+0.041777107 container create 7ec16e624624fad4b9c9e2b7a39c932942b5fff00f587a8cba27e5bb513ecad8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_raman, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 11 04:32:34 compute-0 systemd[1]: Started libpod-conmon-7ec16e624624fad4b9c9e2b7a39c932942b5fff00f587a8cba27e5bb513ecad8.scope.
Oct 11 04:32:34 compute-0 podman[124030]: 2025-10-11 04:32:34.869008105 +0000 UTC m=+0.026255529 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:32:34 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:32:34 compute-0 podman[124030]: 2025-10-11 04:32:34.994701152 +0000 UTC m=+0.151948576 container init 7ec16e624624fad4b9c9e2b7a39c932942b5fff00f587a8cba27e5bb513ecad8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_raman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Oct 11 04:32:35 compute-0 podman[124030]: 2025-10-11 04:32:35.008966187 +0000 UTC m=+0.166213591 container start 7ec16e624624fad4b9c9e2b7a39c932942b5fff00f587a8cba27e5bb513ecad8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_raman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:32:35 compute-0 podman[124030]: 2025-10-11 04:32:35.013469068 +0000 UTC m=+0.170716522 container attach 7ec16e624624fad4b9c9e2b7a39c932942b5fff00f587a8cba27e5bb513ecad8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_raman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 11 04:32:35 compute-0 silly_raman[124048]: 167 167
Oct 11 04:32:35 compute-0 conmon[124048]: conmon 7ec16e624624fad4b9c9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7ec16e624624fad4b9c9e2b7a39c932942b5fff00f587a8cba27e5bb513ecad8.scope/container/memory.events
Oct 11 04:32:35 compute-0 systemd[1]: Starting Create netns directory...
Oct 11 04:32:35 compute-0 systemd[1]: libpod-7ec16e624624fad4b9c9e2b7a39c932942b5fff00f587a8cba27e5bb513ecad8.scope: Deactivated successfully.
Oct 11 04:32:35 compute-0 podman[124030]: 2025-10-11 04:32:35.015198565 +0000 UTC m=+0.172445969 container died 7ec16e624624fad4b9c9e2b7a39c932942b5fff00f587a8cba27e5bb513ecad8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_raman, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Oct 11 04:32:35 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 11 04:32:35 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 11 04:32:35 compute-0 systemd[1]: Finished Create netns directory.
Oct 11 04:32:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-243846690e39bdb1a4b121717cdb49998242c706c23f21a30c973f4a425bcf25-merged.mount: Deactivated successfully.
Oct 11 04:32:35 compute-0 podman[124030]: 2025-10-11 04:32:35.057648449 +0000 UTC m=+0.214895853 container remove 7ec16e624624fad4b9c9e2b7a39c932942b5fff00f587a8cba27e5bb513ecad8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_raman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 11 04:32:35 compute-0 sudo[123857]: pam_unix(sudo:session): session closed for user root
Oct 11 04:32:35 compute-0 systemd[1]: libpod-conmon-7ec16e624624fad4b9c9e2b7a39c932942b5fff00f587a8cba27e5bb513ecad8.scope: Deactivated successfully.
Oct 11 04:32:35 compute-0 podman[124099]: 2025-10-11 04:32:35.234777045 +0000 UTC m=+0.052174578 container create dae331bd2587f9439ee9ef06e258f853e661a935a95d53717897f4674d51b7e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_matsumoto, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:32:35 compute-0 systemd[1]: Started libpod-conmon-dae331bd2587f9439ee9ef06e258f853e661a935a95d53717897f4674d51b7e8.scope.
Oct 11 04:32:35 compute-0 podman[124099]: 2025-10-11 04:32:35.208701912 +0000 UTC m=+0.026099525 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:32:35 compute-0 ceph-mon[74243]: pgmap v304: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:32:35 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:32:35 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:32:35 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:32:35 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:32:35 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:32:35 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:32:35 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:32:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02ac811bc10618b2c9b9abb75fc962bf0516632c72a82a813a6e5ebdc231a4cf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:32:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02ac811bc10618b2c9b9abb75fc962bf0516632c72a82a813a6e5ebdc231a4cf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:32:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02ac811bc10618b2c9b9abb75fc962bf0516632c72a82a813a6e5ebdc231a4cf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:32:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02ac811bc10618b2c9b9abb75fc962bf0516632c72a82a813a6e5ebdc231a4cf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:32:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02ac811bc10618b2c9b9abb75fc962bf0516632c72a82a813a6e5ebdc231a4cf/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:32:35 compute-0 podman[124099]: 2025-10-11 04:32:35.364369638 +0000 UTC m=+0.181767201 container init dae331bd2587f9439ee9ef06e258f853e661a935a95d53717897f4674d51b7e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_matsumoto, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:32:35 compute-0 podman[124099]: 2025-10-11 04:32:35.370917215 +0000 UTC m=+0.188314748 container start dae331bd2587f9439ee9ef06e258f853e661a935a95d53717897f4674d51b7e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_matsumoto, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 11 04:32:35 compute-0 podman[124099]: 2025-10-11 04:32:35.374627775 +0000 UTC m=+0.192025328 container attach dae331bd2587f9439ee9ef06e258f853e661a935a95d53717897f4674d51b7e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_matsumoto, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 11 04:32:35 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 6.a deep-scrub starts
Oct 11 04:32:35 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 6.a deep-scrub ok
Oct 11 04:32:35 compute-0 python3.9[124246]: ansible-ansible.builtin.service_facts Invoked
Oct 11 04:32:35 compute-0 network[124263]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 11 04:32:35 compute-0 network[124264]: 'network-scripts' will be removed from distribution in near future.
Oct 11 04:32:35 compute-0 network[124265]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 11 04:32:36 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v305: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:32:36 compute-0 ceph-mon[74243]: 6.a deep-scrub starts
Oct 11 04:32:36 compute-0 ceph-mon[74243]: 6.a deep-scrub ok
Oct 11 04:32:36 compute-0 epic_matsumoto[124168]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:32:36 compute-0 epic_matsumoto[124168]: --> relative data size: 1.0
Oct 11 04:32:36 compute-0 epic_matsumoto[124168]: --> All data devices are unavailable
Oct 11 04:32:36 compute-0 podman[124099]: 2025-10-11 04:32:36.455900666 +0000 UTC m=+1.273298229 container died dae331bd2587f9439ee9ef06e258f853e661a935a95d53717897f4674d51b7e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_matsumoto, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:32:36 compute-0 systemd[1]: libpod-dae331bd2587f9439ee9ef06e258f853e661a935a95d53717897f4674d51b7e8.scope: Deactivated successfully.
Oct 11 04:32:36 compute-0 systemd[1]: libpod-dae331bd2587f9439ee9ef06e258f853e661a935a95d53717897f4674d51b7e8.scope: Consumed 1.041s CPU time.
Oct 11 04:32:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-02ac811bc10618b2c9b9abb75fc962bf0516632c72a82a813a6e5ebdc231a4cf-merged.mount: Deactivated successfully.
Oct 11 04:32:36 compute-0 podman[124099]: 2025-10-11 04:32:36.754711982 +0000 UTC m=+1.572109505 container remove dae331bd2587f9439ee9ef06e258f853e661a935a95d53717897f4674d51b7e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_matsumoto, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 11 04:32:36 compute-0 systemd[1]: libpod-conmon-dae331bd2587f9439ee9ef06e258f853e661a935a95d53717897f4674d51b7e8.scope: Deactivated successfully.
Oct 11 04:32:36 compute-0 sudo[123934]: pam_unix(sudo:session): session closed for user root
Oct 11 04:32:36 compute-0 sudo[124314]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:32:36 compute-0 sudo[124314]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:32:36 compute-0 sudo[124314]: pam_unix(sudo:session): session closed for user root
Oct 11 04:32:36 compute-0 sudo[124342]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:32:36 compute-0 sudo[124342]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:32:36 compute-0 sudo[124342]: pam_unix(sudo:session): session closed for user root
Oct 11 04:32:37 compute-0 sudo[124369]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:32:37 compute-0 sudo[124369]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:32:37 compute-0 sudo[124369]: pam_unix(sudo:session): session closed for user root
Oct 11 04:32:37 compute-0 sudo[124397]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -- lvm list --format json
Oct 11 04:32:37 compute-0 sudo[124397]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:32:37 compute-0 ceph-mon[74243]: pgmap v305: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:32:37 compute-0 podman[124475]: 2025-10-11 04:32:37.478752662 +0000 UTC m=+0.040671507 container create ab0485274309a85ee72ab438b348419672e4e3ceded855d451a769d54c0a32a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_elgamal, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 11 04:32:37 compute-0 systemd[1]: Started libpod-conmon-ab0485274309a85ee72ab438b348419672e4e3ceded855d451a769d54c0a32a6.scope.
Oct 11 04:32:37 compute-0 podman[124475]: 2025-10-11 04:32:37.462034422 +0000 UTC m=+0.023953267 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:32:37 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:32:37 compute-0 podman[124475]: 2025-10-11 04:32:37.573854016 +0000 UTC m=+0.135772861 container init ab0485274309a85ee72ab438b348419672e4e3ceded855d451a769d54c0a32a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_elgamal, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:32:37 compute-0 podman[124475]: 2025-10-11 04:32:37.58365386 +0000 UTC m=+0.145572685 container start ab0485274309a85ee72ab438b348419672e4e3ceded855d451a769d54c0a32a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_elgamal, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:32:37 compute-0 podman[124475]: 2025-10-11 04:32:37.587110804 +0000 UTC m=+0.149029659 container attach ab0485274309a85ee72ab438b348419672e4e3ceded855d451a769d54c0a32a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_elgamal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:32:37 compute-0 musing_elgamal[124495]: 167 167
Oct 11 04:32:37 compute-0 podman[124475]: 2025-10-11 04:32:37.589094547 +0000 UTC m=+0.151013372 container died ab0485274309a85ee72ab438b348419672e4e3ceded855d451a769d54c0a32a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_elgamal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 11 04:32:37 compute-0 systemd[1]: libpod-ab0485274309a85ee72ab438b348419672e4e3ceded855d451a769d54c0a32a6.scope: Deactivated successfully.
Oct 11 04:32:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-1f896714e056b4ec5be1eebf451d5429369723a5f59818a4cf269510d69a6ce2-merged.mount: Deactivated successfully.
Oct 11 04:32:37 compute-0 podman[124475]: 2025-10-11 04:32:37.628247013 +0000 UTC m=+0.190165848 container remove ab0485274309a85ee72ab438b348419672e4e3ceded855d451a769d54c0a32a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 11 04:32:37 compute-0 systemd[1]: libpod-conmon-ab0485274309a85ee72ab438b348419672e4e3ceded855d451a769d54c0a32a6.scope: Deactivated successfully.
Oct 11 04:32:37 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 9.16 scrub starts
Oct 11 04:32:37 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 9.16 scrub ok
Oct 11 04:32:37 compute-0 podman[124531]: 2025-10-11 04:32:37.801413631 +0000 UTC m=+0.033879574 container create a28ae4d58cc3fb9738c5fc7ab2a781dd57f756473445952363eaf6077cc4dc82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 11 04:32:37 compute-0 systemd[1]: Started libpod-conmon-a28ae4d58cc3fb9738c5fc7ab2a781dd57f756473445952363eaf6077cc4dc82.scope.
Oct 11 04:32:37 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:32:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9342d90bf8f6ce9fe49a15bdc5aecb458c1fb4366ef7f34a0bf5b9891f43865/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:32:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9342d90bf8f6ce9fe49a15bdc5aecb458c1fb4366ef7f34a0bf5b9891f43865/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:32:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9342d90bf8f6ce9fe49a15bdc5aecb458c1fb4366ef7f34a0bf5b9891f43865/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:32:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9342d90bf8f6ce9fe49a15bdc5aecb458c1fb4366ef7f34a0bf5b9891f43865/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:32:37 compute-0 podman[124531]: 2025-10-11 04:32:37.864535323 +0000 UTC m=+0.097001276 container init a28ae4d58cc3fb9738c5fc7ab2a781dd57f756473445952363eaf6077cc4dc82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_chebyshev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 11 04:32:37 compute-0 podman[124531]: 2025-10-11 04:32:37.872097177 +0000 UTC m=+0.104563120 container start a28ae4d58cc3fb9738c5fc7ab2a781dd57f756473445952363eaf6077cc4dc82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_chebyshev, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:32:37 compute-0 podman[124531]: 2025-10-11 04:32:37.87517147 +0000 UTC m=+0.107637413 container attach a28ae4d58cc3fb9738c5fc7ab2a781dd57f756473445952363eaf6077cc4dc82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_chebyshev, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:32:37 compute-0 podman[124531]: 2025-10-11 04:32:37.788501603 +0000 UTC m=+0.020967566 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:32:38 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v306: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:32:38 compute-0 ceph-mon[74243]: 9.16 scrub starts
Oct 11 04:32:38 compute-0 ceph-mon[74243]: 9.16 scrub ok
Oct 11 04:32:38 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Oct 11 04:32:38 compute-0 elegant_chebyshev[124551]: {
Oct 11 04:32:38 compute-0 elegant_chebyshev[124551]:     "0": [
Oct 11 04:32:38 compute-0 elegant_chebyshev[124551]:         {
Oct 11 04:32:38 compute-0 elegant_chebyshev[124551]:             "devices": [
Oct 11 04:32:38 compute-0 elegant_chebyshev[124551]:                 "/dev/loop3"
Oct 11 04:32:38 compute-0 elegant_chebyshev[124551]:             ],
Oct 11 04:32:38 compute-0 elegant_chebyshev[124551]:             "lv_name": "ceph_lv0",
Oct 11 04:32:38 compute-0 elegant_chebyshev[124551]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:32:38 compute-0 elegant_chebyshev[124551]:             "lv_size": "21470642176",
Oct 11 04:32:38 compute-0 elegant_chebyshev[124551]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29ed28f5-c2da-4c6f-bb64-dc7391248f4a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:32:38 compute-0 elegant_chebyshev[124551]:             "lv_uuid": "SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf",
Oct 11 04:32:38 compute-0 elegant_chebyshev[124551]:             "name": "ceph_lv0",
Oct 11 04:32:38 compute-0 elegant_chebyshev[124551]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:32:38 compute-0 elegant_chebyshev[124551]:             "tags": {
Oct 11 04:32:38 compute-0 elegant_chebyshev[124551]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:32:38 compute-0 elegant_chebyshev[124551]:                 "ceph.block_uuid": "SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf",
Oct 11 04:32:38 compute-0 elegant_chebyshev[124551]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:32:38 compute-0 elegant_chebyshev[124551]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:32:38 compute-0 elegant_chebyshev[124551]:                 "ceph.cluster_name": "ceph",
Oct 11 04:32:38 compute-0 elegant_chebyshev[124551]:                 "ceph.crush_device_class": "",
Oct 11 04:32:38 compute-0 elegant_chebyshev[124551]:                 "ceph.encrypted": "0",
Oct 11 04:32:38 compute-0 elegant_chebyshev[124551]:                 "ceph.osd_fsid": "29ed28f5-c2da-4c6f-bb64-dc7391248f4a",
Oct 11 04:32:38 compute-0 elegant_chebyshev[124551]:                 "ceph.osd_id": "0",
Oct 11 04:32:38 compute-0 elegant_chebyshev[124551]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:32:38 compute-0 elegant_chebyshev[124551]:                 "ceph.type": "block",
Oct 11 04:32:38 compute-0 elegant_chebyshev[124551]:                 "ceph.vdo": "0"
Oct 11 04:32:38 compute-0 elegant_chebyshev[124551]:             },
Oct 11 04:32:38 compute-0 elegant_chebyshev[124551]:             "type": "block",
Oct 11 04:32:38 compute-0 elegant_chebyshev[124551]:             "vg_name": "ceph_vg0"
Oct 11 04:32:38 compute-0 elegant_chebyshev[124551]:         }
Oct 11 04:32:38 compute-0 elegant_chebyshev[124551]:     ],
Oct 11 04:32:38 compute-0 elegant_chebyshev[124551]:     "1": [
Oct 11 04:32:38 compute-0 elegant_chebyshev[124551]:         {
Oct 11 04:32:38 compute-0 elegant_chebyshev[124551]:             "devices": [
Oct 11 04:32:38 compute-0 elegant_chebyshev[124551]:                 "/dev/loop4"
Oct 11 04:32:38 compute-0 elegant_chebyshev[124551]:             ],
Oct 11 04:32:38 compute-0 elegant_chebyshev[124551]:             "lv_name": "ceph_lv1",
Oct 11 04:32:38 compute-0 elegant_chebyshev[124551]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:32:38 compute-0 elegant_chebyshev[124551]:             "lv_size": "21470642176",
Oct 11 04:32:38 compute-0 elegant_chebyshev[124551]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=235849fc-4683-43e5-9b6a-a0d6f8d1cee8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:32:38 compute-0 elegant_chebyshev[124551]:             "lv_uuid": "N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol",
Oct 11 04:32:38 compute-0 elegant_chebyshev[124551]:             "name": "ceph_lv1",
Oct 11 04:32:38 compute-0 elegant_chebyshev[124551]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:32:38 compute-0 elegant_chebyshev[124551]:             "tags": {
Oct 11 04:32:38 compute-0 elegant_chebyshev[124551]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:32:38 compute-0 elegant_chebyshev[124551]:                 "ceph.block_uuid": "N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol",
Oct 11 04:32:38 compute-0 elegant_chebyshev[124551]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:32:38 compute-0 elegant_chebyshev[124551]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:32:38 compute-0 elegant_chebyshev[124551]:                 "ceph.cluster_name": "ceph",
Oct 11 04:32:38 compute-0 elegant_chebyshev[124551]:                 "ceph.crush_device_class": "",
Oct 11 04:32:38 compute-0 elegant_chebyshev[124551]:                 "ceph.encrypted": "0",
Oct 11 04:32:38 compute-0 elegant_chebyshev[124551]:                 "ceph.osd_fsid": "235849fc-4683-43e5-9b6a-a0d6f8d1cee8",
Oct 11 04:32:38 compute-0 elegant_chebyshev[124551]:                 "ceph.osd_id": "1",
Oct 11 04:32:38 compute-0 elegant_chebyshev[124551]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:32:38 compute-0 elegant_chebyshev[124551]:                 "ceph.type": "block",
Oct 11 04:32:38 compute-0 elegant_chebyshev[124551]:                 "ceph.vdo": "0"
Oct 11 04:32:38 compute-0 elegant_chebyshev[124551]:             },
Oct 11 04:32:38 compute-0 elegant_chebyshev[124551]:             "type": "block",
Oct 11 04:32:38 compute-0 elegant_chebyshev[124551]:             "vg_name": "ceph_vg1"
Oct 11 04:32:38 compute-0 elegant_chebyshev[124551]:         }
Oct 11 04:32:38 compute-0 elegant_chebyshev[124551]:     ],
Oct 11 04:32:38 compute-0 elegant_chebyshev[124551]:     "2": [
Oct 11 04:32:38 compute-0 elegant_chebyshev[124551]:         {
Oct 11 04:32:38 compute-0 elegant_chebyshev[124551]:             "devices": [
Oct 11 04:32:38 compute-0 elegant_chebyshev[124551]:                 "/dev/loop5"
Oct 11 04:32:38 compute-0 elegant_chebyshev[124551]:             ],
Oct 11 04:32:38 compute-0 elegant_chebyshev[124551]:             "lv_name": "ceph_lv2",
Oct 11 04:32:38 compute-0 elegant_chebyshev[124551]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:32:38 compute-0 elegant_chebyshev[124551]:             "lv_size": "21470642176",
Oct 11 04:32:38 compute-0 elegant_chebyshev[124551]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=30486846-2cc7-4787-8e36-0fb42ef328c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:32:38 compute-0 elegant_chebyshev[124551]:             "lv_uuid": "HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a",
Oct 11 04:32:38 compute-0 elegant_chebyshev[124551]:             "name": "ceph_lv2",
Oct 11 04:32:38 compute-0 elegant_chebyshev[124551]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:32:38 compute-0 elegant_chebyshev[124551]:             "tags": {
Oct 11 04:32:38 compute-0 elegant_chebyshev[124551]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:32:38 compute-0 elegant_chebyshev[124551]:                 "ceph.block_uuid": "HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a",
Oct 11 04:32:38 compute-0 elegant_chebyshev[124551]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:32:38 compute-0 elegant_chebyshev[124551]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:32:38 compute-0 elegant_chebyshev[124551]:                 "ceph.cluster_name": "ceph",
Oct 11 04:32:38 compute-0 elegant_chebyshev[124551]:                 "ceph.crush_device_class": "",
Oct 11 04:32:38 compute-0 elegant_chebyshev[124551]:                 "ceph.encrypted": "0",
Oct 11 04:32:38 compute-0 elegant_chebyshev[124551]:                 "ceph.osd_fsid": "30486846-2cc7-4787-8e36-0fb42ef328c5",
Oct 11 04:32:38 compute-0 elegant_chebyshev[124551]:                 "ceph.osd_id": "2",
Oct 11 04:32:38 compute-0 elegant_chebyshev[124551]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:32:38 compute-0 elegant_chebyshev[124551]:                 "ceph.type": "block",
Oct 11 04:32:38 compute-0 elegant_chebyshev[124551]:                 "ceph.vdo": "0"
Oct 11 04:32:38 compute-0 elegant_chebyshev[124551]:             },
Oct 11 04:32:38 compute-0 elegant_chebyshev[124551]:             "type": "block",
Oct 11 04:32:38 compute-0 elegant_chebyshev[124551]:             "vg_name": "ceph_vg2"
Oct 11 04:32:38 compute-0 elegant_chebyshev[124551]:         }
Oct 11 04:32:38 compute-0 elegant_chebyshev[124551]:     ]
Oct 11 04:32:38 compute-0 elegant_chebyshev[124551]: }
Oct 11 04:32:38 compute-0 systemd[1]: libpod-a28ae4d58cc3fb9738c5fc7ab2a781dd57f756473445952363eaf6077cc4dc82.scope: Deactivated successfully.
Oct 11 04:32:38 compute-0 podman[124531]: 2025-10-11 04:32:38.688799874 +0000 UTC m=+0.921265877 container died a28ae4d58cc3fb9738c5fc7ab2a781dd57f756473445952363eaf6077cc4dc82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_chebyshev, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 11 04:32:38 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Oct 11 04:32:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-b9342d90bf8f6ce9fe49a15bdc5aecb458c1fb4366ef7f34a0bf5b9891f43865-merged.mount: Deactivated successfully.
Oct 11 04:32:38 compute-0 podman[124531]: 2025-10-11 04:32:38.741494745 +0000 UTC m=+0.973960688 container remove a28ae4d58cc3fb9738c5fc7ab2a781dd57f756473445952363eaf6077cc4dc82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_chebyshev, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:32:38 compute-0 systemd[1]: libpod-conmon-a28ae4d58cc3fb9738c5fc7ab2a781dd57f756473445952363eaf6077cc4dc82.scope: Deactivated successfully.
Oct 11 04:32:38 compute-0 sudo[124397]: pam_unix(sudo:session): session closed for user root
Oct 11 04:32:38 compute-0 sudo[124622]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:32:38 compute-0 sudo[124622]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:32:38 compute-0 sudo[124622]: pam_unix(sudo:session): session closed for user root
Oct 11 04:32:38 compute-0 sudo[124650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:32:38 compute-0 sudo[124650]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:32:38 compute-0 sudo[124650]: pam_unix(sudo:session): session closed for user root
Oct 11 04:32:38 compute-0 sudo[124679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:32:38 compute-0 sudo[124679]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:32:38 compute-0 sudo[124679]: pam_unix(sudo:session): session closed for user root
Oct 11 04:32:38 compute-0 sudo[124705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -- raw list --format json
Oct 11 04:32:38 compute-0 sudo[124705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:32:39 compute-0 podman[124794]: 2025-10-11 04:32:39.277387302 +0000 UTC m=+0.036875685 container create 0af12ffbaad1f61adfb837971b4cb7847010a0fcdfaeef7e1ced1d7f4593cc2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_lamport, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 11 04:32:39 compute-0 systemd[1]: Started libpod-conmon-0af12ffbaad1f61adfb837971b4cb7847010a0fcdfaeef7e1ced1d7f4593cc2f.scope.
Oct 11 04:32:39 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:32:39 compute-0 ceph-mon[74243]: pgmap v306: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:32:39 compute-0 ceph-mon[74243]: 9.1c scrub starts
Oct 11 04:32:39 compute-0 ceph-mon[74243]: 9.1c scrub ok
Oct 11 04:32:39 compute-0 podman[124794]: 2025-10-11 04:32:39.354243975 +0000 UTC m=+0.113732388 container init 0af12ffbaad1f61adfb837971b4cb7847010a0fcdfaeef7e1ced1d7f4593cc2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_lamport, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 11 04:32:39 compute-0 podman[124794]: 2025-10-11 04:32:39.26059415 +0000 UTC m=+0.020082563 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:32:39 compute-0 podman[124794]: 2025-10-11 04:32:39.362123527 +0000 UTC m=+0.121611910 container start 0af12ffbaad1f61adfb837971b4cb7847010a0fcdfaeef7e1ced1d7f4593cc2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_lamport, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef)
Oct 11 04:32:39 compute-0 podman[124794]: 2025-10-11 04:32:39.365485068 +0000 UTC m=+0.124973531 container attach 0af12ffbaad1f61adfb837971b4cb7847010a0fcdfaeef7e1ced1d7f4593cc2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_lamport, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:32:39 compute-0 youthful_lamport[124810]: 167 167
Oct 11 04:32:39 compute-0 systemd[1]: libpod-0af12ffbaad1f61adfb837971b4cb7847010a0fcdfaeef7e1ced1d7f4593cc2f.scope: Deactivated successfully.
Oct 11 04:32:39 compute-0 podman[124794]: 2025-10-11 04:32:39.367541443 +0000 UTC m=+0.127029826 container died 0af12ffbaad1f61adfb837971b4cb7847010a0fcdfaeef7e1ced1d7f4593cc2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 11 04:32:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-2724e2fbd3392c1f0e1a7229574229faa578fff02d5df6bce0c4176a01b01154-merged.mount: Deactivated successfully.
Oct 11 04:32:39 compute-0 podman[124794]: 2025-10-11 04:32:39.398723494 +0000 UTC m=+0.158211877 container remove 0af12ffbaad1f61adfb837971b4cb7847010a0fcdfaeef7e1ced1d7f4593cc2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_lamport, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:32:39 compute-0 systemd[1]: libpod-conmon-0af12ffbaad1f61adfb837971b4cb7847010a0fcdfaeef7e1ced1d7f4593cc2f.scope: Deactivated successfully.
Oct 11 04:32:39 compute-0 podman[124841]: 2025-10-11 04:32:39.545816629 +0000 UTC m=+0.045374874 container create d5467235a542870664f534144f0e2cda99d96e148e57a86f7e213be199ed62bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_wright, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True)
Oct 11 04:32:39 compute-0 systemd[1]: Started libpod-conmon-d5467235a542870664f534144f0e2cda99d96e148e57a86f7e213be199ed62bf.scope.
Oct 11 04:32:39 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:32:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1172f9ad9153b3dff0606d9bd9d93cd28d1423c680af2d2f86b2656f5fe36e2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:32:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1172f9ad9153b3dff0606d9bd9d93cd28d1423c680af2d2f86b2656f5fe36e2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:32:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1172f9ad9153b3dff0606d9bd9d93cd28d1423c680af2d2f86b2656f5fe36e2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:32:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1172f9ad9153b3dff0606d9bd9d93cd28d1423c680af2d2f86b2656f5fe36e2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:32:39 compute-0 podman[124841]: 2025-10-11 04:32:39.528874223 +0000 UTC m=+0.028432448 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:32:39 compute-0 podman[124841]: 2025-10-11 04:32:39.621598882 +0000 UTC m=+0.121157117 container init d5467235a542870664f534144f0e2cda99d96e148e57a86f7e213be199ed62bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_wright, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:32:39 compute-0 podman[124841]: 2025-10-11 04:32:39.629451094 +0000 UTC m=+0.129009299 container start d5467235a542870664f534144f0e2cda99d96e148e57a86f7e213be199ed62bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_wright, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 11 04:32:39 compute-0 podman[124841]: 2025-10-11 04:32:39.63263954 +0000 UTC m=+0.132197785 container attach d5467235a542870664f534144f0e2cda99d96e148e57a86f7e213be199ed62bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_wright, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 11 04:32:39 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 9.1e scrub starts
Oct 11 04:32:39 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 9.1e scrub ok
Oct 11 04:32:39 compute-0 sudo[124981]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnxqxiipdpxovltdvzyxdkhhihjkxium ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157159.5141573-195-248376470519861/AnsiballZ_stat.py'
Oct 11 04:32:39 compute-0 sudo[124981]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:32:39 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:32:39 compute-0 python3.9[124983]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:32:40 compute-0 sudo[124981]: pam_unix(sudo:session): session closed for user root
Oct 11 04:32:40 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v307: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:32:40 compute-0 sudo[125059]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xsadxtvdhhbajpjffekbvjcfdzajwdea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157159.5141573-195-248376470519861/AnsiballZ_file.py'
Oct 11 04:32:40 compute-0 sudo[125059]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:32:40 compute-0 ceph-mon[74243]: 9.1e scrub starts
Oct 11 04:32:40 compute-0 ceph-mon[74243]: 9.1e scrub ok
Oct 11 04:32:40 compute-0 python3.9[125063]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:32:40 compute-0 sudo[125059]: pam_unix(sudo:session): session closed for user root
Oct 11 04:32:40 compute-0 amazing_wright[124903]: {
Oct 11 04:32:40 compute-0 amazing_wright[124903]:     "235849fc-4683-43e5-9b6a-a0d6f8d1cee8": {
Oct 11 04:32:40 compute-0 amazing_wright[124903]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:32:40 compute-0 amazing_wright[124903]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:32:40 compute-0 amazing_wright[124903]:         "osd_id": 1,
Oct 11 04:32:40 compute-0 amazing_wright[124903]:         "osd_uuid": "235849fc-4683-43e5-9b6a-a0d6f8d1cee8",
Oct 11 04:32:40 compute-0 amazing_wright[124903]:         "type": "bluestore"
Oct 11 04:32:40 compute-0 amazing_wright[124903]:     },
Oct 11 04:32:40 compute-0 amazing_wright[124903]:     "29ed28f5-c2da-4c6f-bb64-dc7391248f4a": {
Oct 11 04:32:40 compute-0 amazing_wright[124903]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:32:40 compute-0 amazing_wright[124903]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:32:40 compute-0 amazing_wright[124903]:         "osd_id": 0,
Oct 11 04:32:40 compute-0 amazing_wright[124903]:         "osd_uuid": "29ed28f5-c2da-4c6f-bb64-dc7391248f4a",
Oct 11 04:32:40 compute-0 amazing_wright[124903]:         "type": "bluestore"
Oct 11 04:32:40 compute-0 amazing_wright[124903]:     },
Oct 11 04:32:40 compute-0 amazing_wright[124903]:     "30486846-2cc7-4787-8e36-0fb42ef328c5": {
Oct 11 04:32:40 compute-0 amazing_wright[124903]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:32:40 compute-0 amazing_wright[124903]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:32:40 compute-0 amazing_wright[124903]:         "osd_id": 2,
Oct 11 04:32:40 compute-0 amazing_wright[124903]:         "osd_uuid": "30486846-2cc7-4787-8e36-0fb42ef328c5",
Oct 11 04:32:40 compute-0 amazing_wright[124903]:         "type": "bluestore"
Oct 11 04:32:40 compute-0 amazing_wright[124903]:     }
Oct 11 04:32:40 compute-0 amazing_wright[124903]: }
Oct 11 04:32:40 compute-0 systemd[1]: libpod-d5467235a542870664f534144f0e2cda99d96e148e57a86f7e213be199ed62bf.scope: Deactivated successfully.
Oct 11 04:32:40 compute-0 podman[124841]: 2025-10-11 04:32:40.538222135 +0000 UTC m=+1.037780360 container died d5467235a542870664f534144f0e2cda99d96e148e57a86f7e213be199ed62bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_wright, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:32:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-c1172f9ad9153b3dff0606d9bd9d93cd28d1423c680af2d2f86b2656f5fe36e2-merged.mount: Deactivated successfully.
Oct 11 04:32:40 compute-0 podman[124841]: 2025-10-11 04:32:40.599881547 +0000 UTC m=+1.099439762 container remove d5467235a542870664f534144f0e2cda99d96e148e57a86f7e213be199ed62bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_wright, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:32:40 compute-0 systemd[1]: libpod-conmon-d5467235a542870664f534144f0e2cda99d96e148e57a86f7e213be199ed62bf.scope: Deactivated successfully.
Oct 11 04:32:40 compute-0 sudo[124705]: pam_unix(sudo:session): session closed for user root
Oct 11 04:32:40 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:32:40 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:32:40 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:32:40 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:32:40 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev b601fc73-3084-4974-852c-6eae9cc5596e does not exist
Oct 11 04:32:40 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev d11cbfbc-f9e4-4a95-bf6b-cd5b9222a129 does not exist
Oct 11 04:32:40 compute-0 sudo[125149]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:32:40 compute-0 sudo[125149]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:32:40 compute-0 sudo[125149]: pam_unix(sudo:session): session closed for user root
Oct 11 04:32:40 compute-0 sudo[125191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 04:32:40 compute-0 sudo[125191]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:32:40 compute-0 sudo[125191]: pam_unix(sudo:session): session closed for user root
Oct 11 04:32:40 compute-0 sudo[125301]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkpbfxfvoopgkypyqwcwrtyiuxlwgufi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157160.6552358-208-135783488946402/AnsiballZ_file.py'
Oct 11 04:32:40 compute-0 sudo[125301]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:32:41 compute-0 python3.9[125303]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:32:41 compute-0 sudo[125301]: pam_unix(sudo:session): session closed for user root
Oct 11 04:32:41 compute-0 ceph-mon[74243]: pgmap v307: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:32:41 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:32:41 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:32:41 compute-0 sudo[125453]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-erssugqgkxdjirupwshgsfrtgrptcrcd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157161.4204738-216-133020012843188/AnsiballZ_stat.py'
Oct 11 04:32:41 compute-0 sudo[125453]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:32:41 compute-0 python3.9[125455]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:32:42 compute-0 sudo[125453]: pam_unix(sudo:session): session closed for user root
Oct 11 04:32:42 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v308: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:32:42 compute-0 sudo[125531]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gswdghyobmsnwnvngepmegjbwmcrhfhj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157161.4204738-216-133020012843188/AnsiballZ_file.py'
Oct 11 04:32:42 compute-0 sudo[125531]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:32:42 compute-0 python3.9[125533]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:32:42 compute-0 sudo[125531]: pam_unix(sudo:session): session closed for user root
Oct 11 04:32:43 compute-0 sudo[125683]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twqynmpktuncbisetxepuzxshfkusquy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157162.812035-231-51625057825212/AnsiballZ_timezone.py'
Oct 11 04:32:43 compute-0 sudo[125683]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:32:43 compute-0 ceph-mon[74243]: pgmap v308: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:32:43 compute-0 python3.9[125685]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Oct 11 04:32:43 compute-0 systemd[1]: Starting Time & Date Service...
Oct 11 04:32:43 compute-0 systemd[1]: Started Time & Date Service.
Oct 11 04:32:43 compute-0 sudo[125683]: pam_unix(sudo:session): session closed for user root
Oct 11 04:32:44 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v309: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:32:44 compute-0 sudo[125839]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rawihsdudewphjjghsrbrdtnrydqytaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157163.8954585-240-206742717179884/AnsiballZ_file.py'
Oct 11 04:32:44 compute-0 sudo[125839]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:32:44 compute-0 python3.9[125841]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:32:44 compute-0 sudo[125839]: pam_unix(sudo:session): session closed for user root
Oct 11 04:32:44 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:32:44 compute-0 sudo[125991]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avnvrztwwvfotrefgjoboprmbqftguiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157164.5602303-248-172448750995181/AnsiballZ_stat.py'
Oct 11 04:32:44 compute-0 sudo[125991]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:32:45 compute-0 python3.9[125993]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:32:45 compute-0 sudo[125991]: pam_unix(sudo:session): session closed for user root
Oct 11 04:32:45 compute-0 ceph-mon[74243]: pgmap v309: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:32:45 compute-0 sudo[126069]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jlqyuuzwzylnxyeixhizwjttgqxlzxgs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157164.5602303-248-172448750995181/AnsiballZ_file.py'
Oct 11 04:32:45 compute-0 sudo[126069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:32:45 compute-0 python3.9[126071]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:32:45 compute-0 sudo[126069]: pam_unix(sudo:session): session closed for user root
Oct 11 04:32:46 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v310: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:32:46 compute-0 sudo[126221]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-caryfhmwpjusowbmsbxwyjglpxtaumtm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157165.9064481-260-263008316584471/AnsiballZ_stat.py'
Oct 11 04:32:46 compute-0 sudo[126221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:32:46 compute-0 python3.9[126223]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:32:46 compute-0 sudo[126221]: pam_unix(sudo:session): session closed for user root
Oct 11 04:32:46 compute-0 sudo[126299]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfagflgukbdlwdnxbbqyxuwgiwiyevuq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157165.9064481-260-263008316584471/AnsiballZ_file.py'
Oct 11 04:32:46 compute-0 sudo[126299]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:32:46 compute-0 python3.9[126301]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=._vt_63gu recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:32:47 compute-0 sudo[126299]: pam_unix(sudo:session): session closed for user root
Oct 11 04:32:47 compute-0 ceph-mon[74243]: pgmap v310: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:32:47 compute-0 sudo[126451]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxoroxbnnawiammxvgushgtmswzduvgr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157167.1958983-272-258168794180092/AnsiballZ_stat.py'
Oct 11 04:32:47 compute-0 sudo[126451]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:32:47 compute-0 python3.9[126453]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:32:47 compute-0 sudo[126451]: pam_unix(sudo:session): session closed for user root
Oct 11 04:32:48 compute-0 sudo[126529]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnvxsbmtgbwmoyyseelcpaawsawfxigm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157167.1958983-272-258168794180092/AnsiballZ_file.py'
Oct 11 04:32:48 compute-0 sudo[126529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:32:48 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v311: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:32:48 compute-0 python3.9[126531]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:32:48 compute-0 sudo[126529]: pam_unix(sudo:session): session closed for user root
Oct 11 04:32:48 compute-0 sudo[126681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykglkamrpruyisnjgikfrmatbwamafio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157168.5541573-285-29915548912057/AnsiballZ_command.py'
Oct 11 04:32:48 compute-0 sudo[126681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:32:49 compute-0 python3.9[126683]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:32:49 compute-0 sudo[126681]: pam_unix(sudo:session): session closed for user root
Oct 11 04:32:49 compute-0 ceph-mon[74243]: pgmap v311: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:32:49 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:32:49 compute-0 sudo[126834]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-muauzpcitkaehsdczrrfhphxcxixdqol ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1760157169.409492-293-221108427592287/AnsiballZ_edpm_nftables_from_files.py'
Oct 11 04:32:49 compute-0 sudo[126834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:32:50 compute-0 python3[126836]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct 11 04:32:50 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v312: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:32:50 compute-0 sudo[126834]: pam_unix(sudo:session): session closed for user root
Oct 11 04:32:50 compute-0 sudo[126986]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iogbmyyhrzxxzfsvoaobkbrmgodzefnc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157170.4119651-301-187178063966117/AnsiballZ_stat.py'
Oct 11 04:32:50 compute-0 sudo[126986]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:32:51 compute-0 python3.9[126988]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:32:51 compute-0 sudo[126986]: pam_unix(sudo:session): session closed for user root
Oct 11 04:32:51 compute-0 sudo[127064]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-erfornpmyuxlzgnuiinibsvvgbefuujd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157170.4119651-301-187178063966117/AnsiballZ_file.py'
Oct 11 04:32:51 compute-0 sudo[127064]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:32:51 compute-0 ceph-mon[74243]: pgmap v312: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:32:51 compute-0 python3.9[127066]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:32:51 compute-0 sudo[127064]: pam_unix(sudo:session): session closed for user root
Oct 11 04:32:52 compute-0 sudo[127216]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxbizfaowtydgswfhfwtdfuuupksmuay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157171.7722254-313-8329516292738/AnsiballZ_stat.py'
Oct 11 04:32:52 compute-0 sudo[127216]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:32:52 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v313: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:32:52 compute-0 python3.9[127218]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:32:52 compute-0 sudo[127216]: pam_unix(sudo:session): session closed for user root
Oct 11 04:32:52 compute-0 sudo[127294]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjwdpzngeinbpwgwlredexlsggdebcdx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157171.7722254-313-8329516292738/AnsiballZ_file.py'
Oct 11 04:32:52 compute-0 sudo[127294]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:32:52 compute-0 python3.9[127296]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:32:52 compute-0 sudo[127294]: pam_unix(sudo:session): session closed for user root
Oct 11 04:32:53 compute-0 ceph-mon[74243]: pgmap v313: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:32:53 compute-0 sudo[127446]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrotjceicdzezxwowczchkjaflykbjid ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157173.088419-325-93470307856338/AnsiballZ_stat.py'
Oct 11 04:32:53 compute-0 sudo[127446]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:32:53 compute-0 python3.9[127448]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:32:53 compute-0 sudo[127446]: pam_unix(sudo:session): session closed for user root
Oct 11 04:32:54 compute-0 sudo[127524]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oflryxkpikmpityaremlwxsozkqwgync ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157173.088419-325-93470307856338/AnsiballZ_file.py'
Oct 11 04:32:54 compute-0 sudo[127524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:32:54 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v314: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:32:54 compute-0 python3.9[127526]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:32:54 compute-0 sudo[127524]: pam_unix(sudo:session): session closed for user root
Oct 11 04:32:54 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:32:54 compute-0 sudo[127676]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdkogwzyrzlgnxdrldgdegklmltkttdn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157174.4698763-337-182257034745108/AnsiballZ_stat.py'
Oct 11 04:32:54 compute-0 sudo[127676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:32:55 compute-0 python3.9[127678]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:32:55 compute-0 sudo[127676]: pam_unix(sudo:session): session closed for user root
Oct 11 04:32:55 compute-0 sudo[127754]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpeqklhdgwcutadkbivsbnyfjxdbqkno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157174.4698763-337-182257034745108/AnsiballZ_file.py'
Oct 11 04:32:55 compute-0 sudo[127754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:32:55 compute-0 ceph-mon[74243]: pgmap v314: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:32:55 compute-0 python3.9[127756]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:32:55 compute-0 sudo[127754]: pam_unix(sudo:session): session closed for user root
Oct 11 04:32:56 compute-0 ceph-mgr[74542]: [balancer INFO root] Optimize plan auto_2025-10-11_04:32:56
Oct 11 04:32:56 compute-0 ceph-mgr[74542]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:32:56 compute-0 ceph-mgr[74542]: [balancer INFO root] do_upmap
Oct 11 04:32:56 compute-0 ceph-mgr[74542]: [balancer INFO root] pools ['default.rgw.meta', 'volumes', 'default.rgw.log', '.mgr', 'cephfs.cephfs.meta', '.rgw.root', 'images', 'default.rgw.control', 'vms', 'backups', 'cephfs.cephfs.data']
Oct 11 04:32:56 compute-0 ceph-mgr[74542]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:32:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:32:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:32:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:32:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:32:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:32:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:32:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:32:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:32:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:32:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:32:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:32:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:32:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:32:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:32:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:32:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:32:56 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v315: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:32:56 compute-0 sudo[127906]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whpprdgbgfsqfjvrnwhusncigvwmbuwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157175.8227673-349-211604819604856/AnsiballZ_stat.py'
Oct 11 04:32:56 compute-0 sudo[127906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:32:56 compute-0 python3.9[127908]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:32:56 compute-0 sudo[127906]: pam_unix(sudo:session): session closed for user root
Oct 11 04:32:56 compute-0 sudo[127984]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uofpglodxbtvbkgxuefdxolvpvkpapts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157175.8227673-349-211604819604856/AnsiballZ_file.py'
Oct 11 04:32:56 compute-0 sudo[127984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:32:56 compute-0 python3.9[127986]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:32:57 compute-0 sudo[127984]: pam_unix(sudo:session): session closed for user root
Oct 11 04:32:57 compute-0 ceph-mon[74243]: pgmap v315: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:32:57 compute-0 sudo[128136]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ostlmghcppwewdjeplepkuqcfpgcmywh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157177.2403786-362-95969638292637/AnsiballZ_command.py'
Oct 11 04:32:57 compute-0 sudo[128136]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:32:57 compute-0 python3.9[128138]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:32:57 compute-0 sudo[128136]: pam_unix(sudo:session): session closed for user root
Oct 11 04:32:58 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v316: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:32:58 compute-0 sudo[128291]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihxcctapqdufacfiweuvojfnziazexvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157178.0694854-370-183060754357697/AnsiballZ_blockinfile.py'
Oct 11 04:32:58 compute-0 sudo[128291]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:32:58 compute-0 ceph-mon[74243]: pgmap v316: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:32:58 compute-0 python3.9[128293]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:32:58 compute-0 sudo[128291]: pam_unix(sudo:session): session closed for user root
Oct 11 04:32:59 compute-0 sudo[128443]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lizwcjpvobtnevcxtswbhpeojvveibuw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157178.991964-379-50667168868049/AnsiballZ_file.py'
Oct 11 04:32:59 compute-0 sudo[128443]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:32:59 compute-0 python3.9[128445]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:32:59 compute-0 sudo[128443]: pam_unix(sudo:session): session closed for user root
Oct 11 04:32:59 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:32:59 compute-0 sudo[128595]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwnalflhqkawlojubwlxlqmiodyjtqoy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157179.625578-379-185772146789694/AnsiballZ_file.py'
Oct 11 04:32:59 compute-0 sudo[128595]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:33:00 compute-0 python3.9[128597]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:33:00 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v317: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:33:00 compute-0 sudo[128595]: pam_unix(sudo:session): session closed for user root
Oct 11 04:33:00 compute-0 sudo[128747]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkwdbixliytfypmoaipsngxmpnryezst ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157180.425754-394-207764608994138/AnsiballZ_mount.py'
Oct 11 04:33:00 compute-0 sudo[128747]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:33:01 compute-0 ceph-mon[74243]: pgmap v317: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:33:01 compute-0 python3.9[128749]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct 11 04:33:01 compute-0 sudo[128747]: pam_unix(sudo:session): session closed for user root
Oct 11 04:33:01 compute-0 sudo[128899]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbapmzgcwidwghpkejvubgikvvtbqouv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157181.4382548-394-12249598577074/AnsiballZ_mount.py'
Oct 11 04:33:01 compute-0 sudo[128899]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:33:02 compute-0 python3.9[128901]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct 11 04:33:02 compute-0 sudo[128899]: pam_unix(sudo:session): session closed for user root
Oct 11 04:33:02 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v318: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:33:02 compute-0 sshd-session[120913]: Connection closed by 192.168.122.30 port 58194
Oct 11 04:33:02 compute-0 sshd-session[120910]: pam_unix(sshd:session): session closed for user zuul
Oct 11 04:33:02 compute-0 systemd-logind[801]: Session 41 logged out. Waiting for processes to exit.
Oct 11 04:33:02 compute-0 systemd[1]: session-41.scope: Deactivated successfully.
Oct 11 04:33:02 compute-0 systemd[1]: session-41.scope: Consumed 33.929s CPU time.
Oct 11 04:33:02 compute-0 systemd-logind[801]: Removed session 41.
Oct 11 04:33:03 compute-0 ceph-mon[74243]: pgmap v318: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:33:04 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v319: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:33:04 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:33:05 compute-0 ceph-mon[74243]: pgmap v319: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:33:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:33:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:33:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:33:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:33:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:33:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:33:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:33:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:33:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:33:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:33:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:33:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:33:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 11 04:33:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:33:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:33:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:33:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:33:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:33:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:33:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:33:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:33:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:33:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 04:33:06 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v320: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:33:07 compute-0 ceph-mon[74243]: pgmap v320: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:33:07 compute-0 sshd-session[128926]: Accepted publickey for zuul from 192.168.122.30 port 47782 ssh2: ECDSA SHA256:fsXfQ2jtOYwhi5zevC2AykH2PVPp1BPcbFOVNPoZIaQ
Oct 11 04:33:07 compute-0 systemd-logind[801]: New session 42 of user zuul.
Oct 11 04:33:07 compute-0 systemd[1]: Started Session 42 of User zuul.
Oct 11 04:33:07 compute-0 sshd-session[128926]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 11 04:33:08 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v321: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:33:08 compute-0 sudo[129079]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzacaxwadhfcqlxtkxttdhwjrkhideua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157187.8373845-16-125945937489536/AnsiballZ_tempfile.py'
Oct 11 04:33:08 compute-0 sudo[129079]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:33:08 compute-0 python3.9[129081]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Oct 11 04:33:08 compute-0 sudo[129079]: pam_unix(sudo:session): session closed for user root
Oct 11 04:33:09 compute-0 ceph-mon[74243]: pgmap v321: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:33:09 compute-0 sudo[129231]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwwrablyghzqmbybjrspoyfcpwtinnzy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157188.9167478-28-130151578307503/AnsiballZ_stat.py'
Oct 11 04:33:09 compute-0 sudo[129231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:33:09 compute-0 python3.9[129233]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:33:09 compute-0 sudo[129231]: pam_unix(sudo:session): session closed for user root
Oct 11 04:33:09 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:33:10 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v322: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:33:10 compute-0 sudo[129385]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvryvuxeavsycvzqefpfpoorxxusbyyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157189.9500115-36-150660686814778/AnsiballZ_slurp.py'
Oct 11 04:33:10 compute-0 sudo[129385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:33:10 compute-0 python3.9[129387]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Oct 11 04:33:10 compute-0 sudo[129385]: pam_unix(sudo:session): session closed for user root
Oct 11 04:33:11 compute-0 ceph-mon[74243]: pgmap v322: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:33:11 compute-0 sudo[129537]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnsjmtledowasldqdrriemmvdeislrht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157190.9560072-44-54274130770841/AnsiballZ_stat.py'
Oct 11 04:33:11 compute-0 sudo[129537]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:33:11 compute-0 python3.9[129539]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.knjb992u follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:33:11 compute-0 sudo[129537]: pam_unix(sudo:session): session closed for user root
Oct 11 04:33:12 compute-0 sudo[129662]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmxjwstbotvtferoitrtdbcxbklyiows ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157190.9560072-44-54274130770841/AnsiballZ_copy.py'
Oct 11 04:33:12 compute-0 sudo[129662]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:33:12 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v323: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:33:12 compute-0 python3.9[129664]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.knjb992u mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760157190.9560072-44-54274130770841/.source.knjb992u _original_basename=.tp1bt461 follow=False checksum=6fb05d5da858ebe01d912990210ef5e5493d540e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:33:12 compute-0 sudo[129662]: pam_unix(sudo:session): session closed for user root
Oct 11 04:33:13 compute-0 sudo[129814]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpiezjckoozbnmjzxoxjszingfymgjlk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157192.5439248-59-27436182147386/AnsiballZ_setup.py'
Oct 11 04:33:13 compute-0 sudo[129814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:33:13 compute-0 ceph-mon[74243]: pgmap v323: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:33:13 compute-0 python3.9[129816]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 04:33:13 compute-0 sudo[129814]: pam_unix(sudo:session): session closed for user root
Oct 11 04:33:13 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct 11 04:33:14 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v324: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:33:14 compute-0 sudo[129968]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylsempropbsorellfkhjpanxfopqzskg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157193.700334-68-166654532205708/AnsiballZ_blockinfile.py'
Oct 11 04:33:14 compute-0 sudo[129968]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:33:14 compute-0 python3.9[129970]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC98s6nG+nIFTH4d2pwEixa+DJEWOJ4blLMWa84WWX9ZzJX1vuA0e9J5uq6cH97m3O+InK1clAUjiVvvB7bYmuvrBCOXbOk7Hcq1CqYZe09e/jvGf8rsnAKJ50XqoIs9MsM9wzJnMPYBXRSTZEwgna4bcfIEyGg6C51MV4UkYlkXQtLQM4FkjcLwHgW1Gyr6vbc6yeKAl4kAxhgFKYlMGk5sWvV8yJ/SkMQyfjcTg9BqHEE5zDDU6893EPNAs+SK0NAR6OxpLhYHOLZJNPwtJh9awGVyIevc6TaXcoKDAi6bo6gQdBNNyGqgHOixvhHJRc6DVHHGLLEDHFdIbK2DpzrhuwAGuRjr1ab2VGI0eGz0ZAaOOsdG/N1nj08Gu2Ns7NelYH4PzBs+AA3e71Fo9z6GqEibMwJh/rVE73Qk0ihF0oltKiNLvdBnxdcTbHVc1bCjoW7qpqv/+8YmedxgmaXL0No8qXpTNTV/JC0S307AE6yIUpSl8jAzhFWPYUefLM=
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIN/DSXtcK2e0dGGdt91oDzWKSAIegFjTFcuab+G+SEv4
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBM9xmo/d3xsMMtMrdBqm2I5UMewH3ZXERUGx/kC0Q3DIPzbn2sLVYLCJiUqzQvRwQaqqa+IS4GYn44enOiRErgI=
                                              create=True mode=0644 path=/tmp/ansible.knjb992u state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:33:14 compute-0 sudo[129968]: pam_unix(sudo:session): session closed for user root
Oct 11 04:33:14 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:33:15 compute-0 sudo[130120]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjytthfrvwnzyyfeadgrdycyopbkxpur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157194.7426064-76-207403466895876/AnsiballZ_command.py'
Oct 11 04:33:15 compute-0 sudo[130120]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:33:15 compute-0 ceph-mon[74243]: pgmap v324: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:33:15 compute-0 python3.9[130122]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.knjb992u' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:33:15 compute-0 sudo[130120]: pam_unix(sudo:session): session closed for user root
Oct 11 04:33:16 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v325: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:33:16 compute-0 sudo[130274]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcipewjckadexyoixhxgjbepbpmcyrfk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157195.7097673-84-78098101142564/AnsiballZ_file.py'
Oct 11 04:33:16 compute-0 sudo[130274]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:33:16 compute-0 python3.9[130276]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.knjb992u state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:33:16 compute-0 sudo[130274]: pam_unix(sudo:session): session closed for user root
Oct 11 04:33:16 compute-0 sshd-session[128929]: Connection closed by 192.168.122.30 port 47782
Oct 11 04:33:16 compute-0 sshd-session[128926]: pam_unix(sshd:session): session closed for user zuul
Oct 11 04:33:16 compute-0 systemd[1]: session-42.scope: Deactivated successfully.
Oct 11 04:33:16 compute-0 systemd[1]: session-42.scope: Consumed 6.219s CPU time.
Oct 11 04:33:16 compute-0 systemd-logind[801]: Session 42 logged out. Waiting for processes to exit.
Oct 11 04:33:16 compute-0 systemd-logind[801]: Removed session 42.
Oct 11 04:33:17 compute-0 ceph-mon[74243]: pgmap v325: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:33:18 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v326: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:33:19 compute-0 ceph-mon[74243]: pgmap v326: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:33:19 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:33:20 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v327: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:33:21 compute-0 ceph-mon[74243]: pgmap v327: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:33:22 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v328: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:33:23 compute-0 sshd-session[130302]: Accepted publickey for zuul from 192.168.122.30 port 49064 ssh2: ECDSA SHA256:fsXfQ2jtOYwhi5zevC2AykH2PVPp1BPcbFOVNPoZIaQ
Oct 11 04:33:23 compute-0 systemd-logind[801]: New session 43 of user zuul.
Oct 11 04:33:23 compute-0 systemd[1]: Started Session 43 of User zuul.
Oct 11 04:33:23 compute-0 sshd-session[130302]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 11 04:33:23 compute-0 ceph-mon[74243]: pgmap v328: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:33:24 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v329: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:33:24 compute-0 python3.9[130455]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 04:33:24 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:33:25 compute-0 ceph-mon[74243]: pgmap v329: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:33:25 compute-0 sudo[130609]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmixxkkhxxinjccngjdrqcqghbbfnumu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157204.7722254-32-13408330803482/AnsiballZ_systemd.py'
Oct 11 04:33:25 compute-0 sudo[130609]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:33:25 compute-0 python3.9[130611]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct 11 04:33:25 compute-0 sudo[130609]: pam_unix(sudo:session): session closed for user root
Oct 11 04:33:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:33:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:33:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:33:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:33:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:33:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:33:26 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v330: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:33:26 compute-0 sudo[130763]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgwfkimsnlaewkihwwlujlhtzaxrodoe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157206.0416036-40-108503282083841/AnsiballZ_systemd.py'
Oct 11 04:33:26 compute-0 sudo[130763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:33:26 compute-0 python3.9[130765]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 11 04:33:26 compute-0 sudo[130763]: pam_unix(sudo:session): session closed for user root
Oct 11 04:33:27 compute-0 sudo[130916]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thgozkocabffgxwywfbnpmourbqvjgyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157206.903243-49-201017626769448/AnsiballZ_command.py'
Oct 11 04:33:27 compute-0 sudo[130916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:33:27 compute-0 ceph-mon[74243]: pgmap v330: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:33:27 compute-0 python3.9[130918]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:33:27 compute-0 sudo[130916]: pam_unix(sudo:session): session closed for user root
Oct 11 04:33:28 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v331: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:33:28 compute-0 sudo[131069]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mecaatuaodrgpkfzqyzfmsupqdgsnzow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157207.7746348-57-182413579794647/AnsiballZ_stat.py'
Oct 11 04:33:28 compute-0 sudo[131069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:33:28 compute-0 python3.9[131071]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:33:28 compute-0 sudo[131069]: pam_unix(sudo:session): session closed for user root
Oct 11 04:33:29 compute-0 sudo[131221]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-esmyyyintyynxbnoekrqrnediyvraxpd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157208.7513034-66-17586704717757/AnsiballZ_file.py'
Oct 11 04:33:29 compute-0 sudo[131221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:33:29 compute-0 ceph-mon[74243]: pgmap v331: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:33:29 compute-0 python3.9[131223]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:33:29 compute-0 sudo[131221]: pam_unix(sudo:session): session closed for user root
Oct 11 04:33:29 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:33:29 compute-0 sshd-session[130305]: Connection closed by 192.168.122.30 port 49064
Oct 11 04:33:29 compute-0 sshd-session[130302]: pam_unix(sshd:session): session closed for user zuul
Oct 11 04:33:29 compute-0 systemd[1]: session-43.scope: Deactivated successfully.
Oct 11 04:33:29 compute-0 systemd[1]: session-43.scope: Consumed 4.040s CPU time.
Oct 11 04:33:29 compute-0 systemd-logind[801]: Session 43 logged out. Waiting for processes to exit.
Oct 11 04:33:29 compute-0 systemd-logind[801]: Removed session 43.
Oct 11 04:33:30 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v332: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:33:31 compute-0 ceph-mon[74243]: pgmap v332: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:33:32 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v333: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:33:33 compute-0 ceph-mon[74243]: pgmap v333: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:33:34 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v334: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:33:34 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:33:35 compute-0 sshd-session[131248]: Accepted publickey for zuul from 192.168.122.30 port 49230 ssh2: ECDSA SHA256:fsXfQ2jtOYwhi5zevC2AykH2PVPp1BPcbFOVNPoZIaQ
Oct 11 04:33:35 compute-0 systemd-logind[801]: New session 44 of user zuul.
Oct 11 04:33:35 compute-0 systemd[1]: Started Session 44 of User zuul.
Oct 11 04:33:35 compute-0 sshd-session[131248]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 11 04:33:35 compute-0 ceph-mon[74243]: pgmap v334: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:33:36 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v335: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:33:36 compute-0 python3.9[131401]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 04:33:37 compute-0 sudo[131555]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ambwcosnkauhhmxcpmimkargqadduzre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157216.9981506-34-119822319801667/AnsiballZ_setup.py'
Oct 11 04:33:37 compute-0 sudo[131555]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:33:37 compute-0 ceph-mon[74243]: pgmap v335: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:33:37 compute-0 python3.9[131557]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 11 04:33:37 compute-0 sudo[131555]: pam_unix(sudo:session): session closed for user root
Oct 11 04:33:38 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v336: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:33:38 compute-0 sudo[131639]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkvrjwujeijwfwlosbtmsfmgqeweodea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157216.9981506-34-119822319801667/AnsiballZ_dnf.py'
Oct 11 04:33:38 compute-0 sudo[131639]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:33:38 compute-0 python3.9[131641]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct 11 04:33:39 compute-0 ceph-mon[74243]: pgmap v336: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:33:39 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:33:39 compute-0 sudo[131639]: pam_unix(sudo:session): session closed for user root
Oct 11 04:33:40 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v337: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:33:40 compute-0 sshd-session[70491]: Received disconnect from 38.102.83.192 port 59836:11: disconnected by user
Oct 11 04:33:40 compute-0 sshd-session[70491]: Disconnected from user zuul 38.102.83.192 port 59836
Oct 11 04:33:40 compute-0 sshd-session[70488]: pam_unix(sshd:session): session closed for user zuul
Oct 11 04:33:40 compute-0 systemd-logind[801]: Session 19 logged out. Waiting for processes to exit.
Oct 11 04:33:40 compute-0 systemd[1]: session-19.scope: Deactivated successfully.
Oct 11 04:33:40 compute-0 systemd[1]: session-19.scope: Consumed 1min 31.379s CPU time.
Oct 11 04:33:40 compute-0 systemd-logind[801]: Removed session 19.
Oct 11 04:33:40 compute-0 python3.9[131792]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:33:40 compute-0 sudo[131793]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:33:40 compute-0 sudo[131793]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:33:40 compute-0 sudo[131793]: pam_unix(sudo:session): session closed for user root
Oct 11 04:33:40 compute-0 sudo[131819]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:33:40 compute-0 sudo[131819]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:33:40 compute-0 sudo[131819]: pam_unix(sudo:session): session closed for user root
Oct 11 04:33:40 compute-0 sudo[131844]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:33:40 compute-0 sudo[131844]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:33:40 compute-0 sudo[131844]: pam_unix(sudo:session): session closed for user root
Oct 11 04:33:41 compute-0 sudo[131869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 04:33:41 compute-0 sudo[131869]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:33:41 compute-0 ceph-mon[74243]: pgmap v337: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:33:41 compute-0 sudo[131869]: pam_unix(sudo:session): session closed for user root
Oct 11 04:33:41 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:33:41 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:33:41 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:33:41 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:33:41 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:33:41 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:33:41 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev 39b40d53-43a6-40d3-bd10-9e8e8681070b does not exist
Oct 11 04:33:41 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev 12c9383c-1bc2-47ea-b011-e9591e5ee0e8 does not exist
Oct 11 04:33:41 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev 3b5dbcc2-1bf2-4812-9489-43a04d73cf64 does not exist
Oct 11 04:33:41 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:33:41 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:33:41 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:33:41 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:33:41 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:33:41 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:33:41 compute-0 sudo[131980]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:33:41 compute-0 sudo[131980]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:33:41 compute-0 sudo[131980]: pam_unix(sudo:session): session closed for user root
Oct 11 04:33:41 compute-0 sudo[132026]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:33:41 compute-0 sudo[132026]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:33:41 compute-0 sudo[132026]: pam_unix(sudo:session): session closed for user root
Oct 11 04:33:41 compute-0 sudo[132051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:33:41 compute-0 sudo[132051]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:33:41 compute-0 sudo[132051]: pam_unix(sudo:session): session closed for user root
Oct 11 04:33:41 compute-0 sudo[132076]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 04:33:41 compute-0 sudo[132076]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:33:42 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v338: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:33:42 compute-0 podman[132211]: 2025-10-11 04:33:42.244884479 +0000 UTC m=+0.040693883 container create d6fa68c6f2b262919e86a234d58a55164d5127e554becd2c64524503c73393f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_torvalds, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:33:42 compute-0 systemd[1]: Started libpod-conmon-d6fa68c6f2b262919e86a234d58a55164d5127e554becd2c64524503c73393f4.scope.
Oct 11 04:33:42 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:33:42 compute-0 podman[132211]: 2025-10-11 04:33:42.225916935 +0000 UTC m=+0.021726379 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:33:42 compute-0 podman[132211]: 2025-10-11 04:33:42.328643528 +0000 UTC m=+0.124452932 container init d6fa68c6f2b262919e86a234d58a55164d5127e554becd2c64524503c73393f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_torvalds, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 11 04:33:42 compute-0 python3.9[132204]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 11 04:33:42 compute-0 podman[132211]: 2025-10-11 04:33:42.339133774 +0000 UTC m=+0.134943168 container start d6fa68c6f2b262919e86a234d58a55164d5127e554becd2c64524503c73393f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:33:42 compute-0 podman[132211]: 2025-10-11 04:33:42.343058675 +0000 UTC m=+0.138868089 container attach d6fa68c6f2b262919e86a234d58a55164d5127e554becd2c64524503c73393f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_torvalds, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:33:42 compute-0 interesting_torvalds[132228]: 167 167
Oct 11 04:33:42 compute-0 systemd[1]: libpod-d6fa68c6f2b262919e86a234d58a55164d5127e554becd2c64524503c73393f4.scope: Deactivated successfully.
Oct 11 04:33:42 compute-0 podman[132211]: 2025-10-11 04:33:42.346025965 +0000 UTC m=+0.141835359 container died d6fa68c6f2b262919e86a234d58a55164d5127e554becd2c64524503c73393f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_torvalds, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 11 04:33:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-3719ee52cdee7ef496a5062f35bc2c1e775af4b4fa2d0f7da99ab4cca87f9b7b-merged.mount: Deactivated successfully.
Oct 11 04:33:42 compute-0 podman[132211]: 2025-10-11 04:33:42.388784185 +0000 UTC m=+0.184593579 container remove d6fa68c6f2b262919e86a234d58a55164d5127e554becd2c64524503c73393f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_torvalds, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:33:42 compute-0 systemd[1]: libpod-conmon-d6fa68c6f2b262919e86a234d58a55164d5127e554becd2c64524503c73393f4.scope: Deactivated successfully.
Oct 11 04:33:42 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:33:42 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:33:42 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:33:42 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:33:42 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:33:42 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:33:42 compute-0 podman[132279]: 2025-10-11 04:33:42.550991219 +0000 UTC m=+0.063425374 container create f65aeccf751779edd7bbdfe12a96843d8543494a8bb85bf9aff822f23fc2bf4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_chatelet, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:33:42 compute-0 systemd[1]: Started libpod-conmon-f65aeccf751779edd7bbdfe12a96843d8543494a8bb85bf9aff822f23fc2bf4a.scope.
Oct 11 04:33:42 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:33:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c1ffa1f0b9cf9c2728697a66aaae9e18c327c85176ff6e8d12d6d0823a140b3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:33:42 compute-0 podman[132279]: 2025-10-11 04:33:42.525993314 +0000 UTC m=+0.038427559 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:33:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c1ffa1f0b9cf9c2728697a66aaae9e18c327c85176ff6e8d12d6d0823a140b3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:33:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c1ffa1f0b9cf9c2728697a66aaae9e18c327c85176ff6e8d12d6d0823a140b3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:33:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c1ffa1f0b9cf9c2728697a66aaae9e18c327c85176ff6e8d12d6d0823a140b3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:33:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c1ffa1f0b9cf9c2728697a66aaae9e18c327c85176ff6e8d12d6d0823a140b3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:33:42 compute-0 podman[132279]: 2025-10-11 04:33:42.635569837 +0000 UTC m=+0.148003992 container init f65aeccf751779edd7bbdfe12a96843d8543494a8bb85bf9aff822f23fc2bf4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_chatelet, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 11 04:33:42 compute-0 podman[132279]: 2025-10-11 04:33:42.644313892 +0000 UTC m=+0.156748047 container start f65aeccf751779edd7bbdfe12a96843d8543494a8bb85bf9aff822f23fc2bf4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_chatelet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:33:42 compute-0 podman[132279]: 2025-10-11 04:33:42.647190579 +0000 UTC m=+0.159624764 container attach f65aeccf751779edd7bbdfe12a96843d8543494a8bb85bf9aff822f23fc2bf4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_chatelet, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 11 04:33:43 compute-0 python3.9[132421]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:33:43 compute-0 ceph-mon[74243]: pgmap v338: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:33:43 compute-0 mystifying_chatelet[132343]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:33:43 compute-0 mystifying_chatelet[132343]: --> relative data size: 1.0
Oct 11 04:33:43 compute-0 mystifying_chatelet[132343]: --> All data devices are unavailable
Oct 11 04:33:43 compute-0 systemd[1]: libpod-f65aeccf751779edd7bbdfe12a96843d8543494a8bb85bf9aff822f23fc2bf4a.scope: Deactivated successfully.
Oct 11 04:33:43 compute-0 systemd[1]: libpod-f65aeccf751779edd7bbdfe12a96843d8543494a8bb85bf9aff822f23fc2bf4a.scope: Consumed 1.036s CPU time.
Oct 11 04:33:43 compute-0 podman[132279]: 2025-10-11 04:33:43.803830653 +0000 UTC m=+1.316264818 container died f65aeccf751779edd7bbdfe12a96843d8543494a8bb85bf9aff822f23fc2bf4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:33:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-3c1ffa1f0b9cf9c2728697a66aaae9e18c327c85176ff6e8d12d6d0823a140b3-merged.mount: Deactivated successfully.
Oct 11 04:33:43 compute-0 podman[132279]: 2025-10-11 04:33:43.88452159 +0000 UTC m=+1.396955775 container remove f65aeccf751779edd7bbdfe12a96843d8543494a8bb85bf9aff822f23fc2bf4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_chatelet, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:33:43 compute-0 systemd[1]: libpod-conmon-f65aeccf751779edd7bbdfe12a96843d8543494a8bb85bf9aff822f23fc2bf4a.scope: Deactivated successfully.
Oct 11 04:33:43 compute-0 sudo[132076]: pam_unix(sudo:session): session closed for user root
Oct 11 04:33:43 compute-0 python3.9[132593]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:33:44 compute-0 sudo[132606]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:33:44 compute-0 sudo[132606]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:33:44 compute-0 sudo[132606]: pam_unix(sudo:session): session closed for user root
Oct 11 04:33:44 compute-0 sudo[132652]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:33:44 compute-0 sudo[132652]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:33:44 compute-0 sudo[132652]: pam_unix(sudo:session): session closed for user root
Oct 11 04:33:44 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v339: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:33:44 compute-0 sudo[132680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:33:44 compute-0 sudo[132680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:33:44 compute-0 sudo[132680]: pam_unix(sudo:session): session closed for user root
Oct 11 04:33:44 compute-0 sudo[132705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -- lvm list --format json
Oct 11 04:33:44 compute-0 sudo[132705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:33:44 compute-0 sshd-session[131251]: Connection closed by 192.168.122.30 port 49230
Oct 11 04:33:44 compute-0 sshd-session[131248]: pam_unix(sshd:session): session closed for user zuul
Oct 11 04:33:44 compute-0 systemd[1]: session-44.scope: Deactivated successfully.
Oct 11 04:33:44 compute-0 systemd[1]: session-44.scope: Consumed 6.200s CPU time.
Oct 11 04:33:44 compute-0 systemd-logind[801]: Session 44 logged out. Waiting for processes to exit.
Oct 11 04:33:44 compute-0 systemd-logind[801]: Removed session 44.
Oct 11 04:33:44 compute-0 ceph-mon[74243]: pgmap v339: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:33:44 compute-0 podman[132770]: 2025-10-11 04:33:44.757807647 +0000 UTC m=+0.070121681 container create eb91f79549d46ffc5eb55185622991a8b2746fc3366a4b94a78a5d2869f5bae9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_lovelace, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:33:44 compute-0 systemd[1]: Started libpod-conmon-eb91f79549d46ffc5eb55185622991a8b2746fc3366a4b94a78a5d2869f5bae9.scope.
Oct 11 04:33:44 compute-0 podman[132770]: 2025-10-11 04:33:44.726615807 +0000 UTC m=+0.038929901 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:33:44 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:33:44 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:33:44 compute-0 podman[132770]: 2025-10-11 04:33:44.865130877 +0000 UTC m=+0.177444981 container init eb91f79549d46ffc5eb55185622991a8b2746fc3366a4b94a78a5d2869f5bae9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_lovelace, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 11 04:33:44 compute-0 podman[132770]: 2025-10-11 04:33:44.878022619 +0000 UTC m=+0.190336653 container start eb91f79549d46ffc5eb55185622991a8b2746fc3366a4b94a78a5d2869f5bae9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_lovelace, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 11 04:33:44 compute-0 podman[132770]: 2025-10-11 04:33:44.882711829 +0000 UTC m=+0.195025923 container attach eb91f79549d46ffc5eb55185622991a8b2746fc3366a4b94a78a5d2869f5bae9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:33:44 compute-0 admiring_lovelace[132787]: 167 167
Oct 11 04:33:44 compute-0 systemd[1]: libpod-eb91f79549d46ffc5eb55185622991a8b2746fc3366a4b94a78a5d2869f5bae9.scope: Deactivated successfully.
Oct 11 04:33:44 compute-0 podman[132770]: 2025-10-11 04:33:44.886522208 +0000 UTC m=+0.198836222 container died eb91f79549d46ffc5eb55185622991a8b2746fc3366a4b94a78a5d2869f5bae9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 11 04:33:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-b575b5675dd65682762dd4adec3481ea33d472ff04efd35232921ed862b7b33f-merged.mount: Deactivated successfully.
Oct 11 04:33:44 compute-0 podman[132770]: 2025-10-11 04:33:44.947423932 +0000 UTC m=+0.259737976 container remove eb91f79549d46ffc5eb55185622991a8b2746fc3366a4b94a78a5d2869f5bae9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_lovelace, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 04:33:44 compute-0 systemd[1]: libpod-conmon-eb91f79549d46ffc5eb55185622991a8b2746fc3366a4b94a78a5d2869f5bae9.scope: Deactivated successfully.
Oct 11 04:33:45 compute-0 podman[132812]: 2025-10-11 04:33:45.178542118 +0000 UTC m=+0.056294827 container create 8012347a47494e1ef92c9e372284c784b1d8b0485bc0856ec1b9a7c82e079b99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_nobel, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:33:45 compute-0 systemd[1]: Started libpod-conmon-8012347a47494e1ef92c9e372284c784b1d8b0485bc0856ec1b9a7c82e079b99.scope.
Oct 11 04:33:45 compute-0 podman[132812]: 2025-10-11 04:33:45.158980551 +0000 UTC m=+0.036733300 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:33:45 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:33:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/425f47d6799718408ee85d1f6516826255a60cfe643178031ab830d355608f34/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:33:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/425f47d6799718408ee85d1f6516826255a60cfe643178031ab830d355608f34/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:33:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/425f47d6799718408ee85d1f6516826255a60cfe643178031ab830d355608f34/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:33:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/425f47d6799718408ee85d1f6516826255a60cfe643178031ab830d355608f34/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:33:45 compute-0 podman[132812]: 2025-10-11 04:33:45.27268076 +0000 UTC m=+0.150433479 container init 8012347a47494e1ef92c9e372284c784b1d8b0485bc0856ec1b9a7c82e079b99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_nobel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:33:45 compute-0 podman[132812]: 2025-10-11 04:33:45.285823738 +0000 UTC m=+0.163576437 container start 8012347a47494e1ef92c9e372284c784b1d8b0485bc0856ec1b9a7c82e079b99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_nobel, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 11 04:33:45 compute-0 podman[132812]: 2025-10-11 04:33:45.289624797 +0000 UTC m=+0.167377496 container attach 8012347a47494e1ef92c9e372284c784b1d8b0485bc0856ec1b9a7c82e079b99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_nobel, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 11 04:33:46 compute-0 thirsty_nobel[132829]: {
Oct 11 04:33:46 compute-0 thirsty_nobel[132829]:     "0": [
Oct 11 04:33:46 compute-0 thirsty_nobel[132829]:         {
Oct 11 04:33:46 compute-0 thirsty_nobel[132829]:             "devices": [
Oct 11 04:33:46 compute-0 thirsty_nobel[132829]:                 "/dev/loop3"
Oct 11 04:33:46 compute-0 thirsty_nobel[132829]:             ],
Oct 11 04:33:46 compute-0 thirsty_nobel[132829]:             "lv_name": "ceph_lv0",
Oct 11 04:33:46 compute-0 thirsty_nobel[132829]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:33:46 compute-0 thirsty_nobel[132829]:             "lv_size": "21470642176",
Oct 11 04:33:46 compute-0 thirsty_nobel[132829]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29ed28f5-c2da-4c6f-bb64-dc7391248f4a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:33:46 compute-0 thirsty_nobel[132829]:             "lv_uuid": "SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf",
Oct 11 04:33:46 compute-0 thirsty_nobel[132829]:             "name": "ceph_lv0",
Oct 11 04:33:46 compute-0 thirsty_nobel[132829]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:33:46 compute-0 thirsty_nobel[132829]:             "tags": {
Oct 11 04:33:46 compute-0 thirsty_nobel[132829]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:33:46 compute-0 thirsty_nobel[132829]:                 "ceph.block_uuid": "SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf",
Oct 11 04:33:46 compute-0 thirsty_nobel[132829]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:33:46 compute-0 thirsty_nobel[132829]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:33:46 compute-0 thirsty_nobel[132829]:                 "ceph.cluster_name": "ceph",
Oct 11 04:33:46 compute-0 thirsty_nobel[132829]:                 "ceph.crush_device_class": "",
Oct 11 04:33:46 compute-0 thirsty_nobel[132829]:                 "ceph.encrypted": "0",
Oct 11 04:33:46 compute-0 thirsty_nobel[132829]:                 "ceph.osd_fsid": "29ed28f5-c2da-4c6f-bb64-dc7391248f4a",
Oct 11 04:33:46 compute-0 thirsty_nobel[132829]:                 "ceph.osd_id": "0",
Oct 11 04:33:46 compute-0 thirsty_nobel[132829]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:33:46 compute-0 thirsty_nobel[132829]:                 "ceph.type": "block",
Oct 11 04:33:46 compute-0 thirsty_nobel[132829]:                 "ceph.vdo": "0"
Oct 11 04:33:46 compute-0 thirsty_nobel[132829]:             },
Oct 11 04:33:46 compute-0 thirsty_nobel[132829]:             "type": "block",
Oct 11 04:33:46 compute-0 thirsty_nobel[132829]:             "vg_name": "ceph_vg0"
Oct 11 04:33:46 compute-0 thirsty_nobel[132829]:         }
Oct 11 04:33:46 compute-0 thirsty_nobel[132829]:     ],
Oct 11 04:33:46 compute-0 thirsty_nobel[132829]:     "1": [
Oct 11 04:33:46 compute-0 thirsty_nobel[132829]:         {
Oct 11 04:33:46 compute-0 thirsty_nobel[132829]:             "devices": [
Oct 11 04:33:46 compute-0 thirsty_nobel[132829]:                 "/dev/loop4"
Oct 11 04:33:46 compute-0 thirsty_nobel[132829]:             ],
Oct 11 04:33:46 compute-0 thirsty_nobel[132829]:             "lv_name": "ceph_lv1",
Oct 11 04:33:46 compute-0 thirsty_nobel[132829]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:33:46 compute-0 thirsty_nobel[132829]:             "lv_size": "21470642176",
Oct 11 04:33:46 compute-0 thirsty_nobel[132829]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=235849fc-4683-43e5-9b6a-a0d6f8d1cee8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:33:46 compute-0 thirsty_nobel[132829]:             "lv_uuid": "N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol",
Oct 11 04:33:46 compute-0 thirsty_nobel[132829]:             "name": "ceph_lv1",
Oct 11 04:33:46 compute-0 thirsty_nobel[132829]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:33:46 compute-0 thirsty_nobel[132829]:             "tags": {
Oct 11 04:33:46 compute-0 thirsty_nobel[132829]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:33:46 compute-0 thirsty_nobel[132829]:                 "ceph.block_uuid": "N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol",
Oct 11 04:33:46 compute-0 thirsty_nobel[132829]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:33:46 compute-0 thirsty_nobel[132829]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:33:46 compute-0 thirsty_nobel[132829]:                 "ceph.cluster_name": "ceph",
Oct 11 04:33:46 compute-0 thirsty_nobel[132829]:                 "ceph.crush_device_class": "",
Oct 11 04:33:46 compute-0 thirsty_nobel[132829]:                 "ceph.encrypted": "0",
Oct 11 04:33:46 compute-0 thirsty_nobel[132829]:                 "ceph.osd_fsid": "235849fc-4683-43e5-9b6a-a0d6f8d1cee8",
Oct 11 04:33:46 compute-0 thirsty_nobel[132829]:                 "ceph.osd_id": "1",
Oct 11 04:33:46 compute-0 thirsty_nobel[132829]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:33:46 compute-0 thirsty_nobel[132829]:                 "ceph.type": "block",
Oct 11 04:33:46 compute-0 thirsty_nobel[132829]:                 "ceph.vdo": "0"
Oct 11 04:33:46 compute-0 thirsty_nobel[132829]:             },
Oct 11 04:33:46 compute-0 thirsty_nobel[132829]:             "type": "block",
Oct 11 04:33:46 compute-0 thirsty_nobel[132829]:             "vg_name": "ceph_vg1"
Oct 11 04:33:46 compute-0 thirsty_nobel[132829]:         }
Oct 11 04:33:46 compute-0 thirsty_nobel[132829]:     ],
Oct 11 04:33:46 compute-0 thirsty_nobel[132829]:     "2": [
Oct 11 04:33:46 compute-0 thirsty_nobel[132829]:         {
Oct 11 04:33:46 compute-0 thirsty_nobel[132829]:             "devices": [
Oct 11 04:33:46 compute-0 thirsty_nobel[132829]:                 "/dev/loop5"
Oct 11 04:33:46 compute-0 thirsty_nobel[132829]:             ],
Oct 11 04:33:46 compute-0 thirsty_nobel[132829]:             "lv_name": "ceph_lv2",
Oct 11 04:33:46 compute-0 thirsty_nobel[132829]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:33:46 compute-0 thirsty_nobel[132829]:             "lv_size": "21470642176",
Oct 11 04:33:46 compute-0 thirsty_nobel[132829]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=30486846-2cc7-4787-8e36-0fb42ef328c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:33:46 compute-0 thirsty_nobel[132829]:             "lv_uuid": "HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a",
Oct 11 04:33:46 compute-0 thirsty_nobel[132829]:             "name": "ceph_lv2",
Oct 11 04:33:46 compute-0 thirsty_nobel[132829]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:33:46 compute-0 thirsty_nobel[132829]:             "tags": {
Oct 11 04:33:46 compute-0 thirsty_nobel[132829]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:33:46 compute-0 thirsty_nobel[132829]:                 "ceph.block_uuid": "HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a",
Oct 11 04:33:46 compute-0 thirsty_nobel[132829]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:33:46 compute-0 thirsty_nobel[132829]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:33:46 compute-0 thirsty_nobel[132829]:                 "ceph.cluster_name": "ceph",
Oct 11 04:33:46 compute-0 thirsty_nobel[132829]:                 "ceph.crush_device_class": "",
Oct 11 04:33:46 compute-0 thirsty_nobel[132829]:                 "ceph.encrypted": "0",
Oct 11 04:33:46 compute-0 thirsty_nobel[132829]:                 "ceph.osd_fsid": "30486846-2cc7-4787-8e36-0fb42ef328c5",
Oct 11 04:33:46 compute-0 thirsty_nobel[132829]:                 "ceph.osd_id": "2",
Oct 11 04:33:46 compute-0 thirsty_nobel[132829]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:33:46 compute-0 thirsty_nobel[132829]:                 "ceph.type": "block",
Oct 11 04:33:46 compute-0 thirsty_nobel[132829]:                 "ceph.vdo": "0"
Oct 11 04:33:46 compute-0 thirsty_nobel[132829]:             },
Oct 11 04:33:46 compute-0 thirsty_nobel[132829]:             "type": "block",
Oct 11 04:33:46 compute-0 thirsty_nobel[132829]:             "vg_name": "ceph_vg2"
Oct 11 04:33:46 compute-0 thirsty_nobel[132829]:         }
Oct 11 04:33:46 compute-0 thirsty_nobel[132829]:     ]
Oct 11 04:33:46 compute-0 thirsty_nobel[132829]: }
Oct 11 04:33:46 compute-0 systemd[1]: libpod-8012347a47494e1ef92c9e372284c784b1d8b0485bc0856ec1b9a7c82e079b99.scope: Deactivated successfully.
Oct 11 04:33:46 compute-0 podman[132812]: 2025-10-11 04:33:46.068772542 +0000 UTC m=+0.946525271 container died 8012347a47494e1ef92c9e372284c784b1d8b0485bc0856ec1b9a7c82e079b99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_nobel, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:33:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-425f47d6799718408ee85d1f6516826255a60cfe643178031ab830d355608f34-merged.mount: Deactivated successfully.
Oct 11 04:33:46 compute-0 podman[132812]: 2025-10-11 04:33:46.153021292 +0000 UTC m=+1.030774021 container remove 8012347a47494e1ef92c9e372284c784b1d8b0485bc0856ec1b9a7c82e079b99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_nobel, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:33:46 compute-0 systemd[1]: libpod-conmon-8012347a47494e1ef92c9e372284c784b1d8b0485bc0856ec1b9a7c82e079b99.scope: Deactivated successfully.
Oct 11 04:33:46 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v340: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:33:46 compute-0 sudo[132705]: pam_unix(sudo:session): session closed for user root
Oct 11 04:33:46 compute-0 sudo[132848]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:33:46 compute-0 sudo[132848]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:33:46 compute-0 sudo[132848]: pam_unix(sudo:session): session closed for user root
Oct 11 04:33:46 compute-0 sudo[132873]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:33:46 compute-0 sudo[132873]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:33:46 compute-0 sudo[132873]: pam_unix(sudo:session): session closed for user root
Oct 11 04:33:46 compute-0 sudo[132898]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:33:46 compute-0 sudo[132898]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:33:46 compute-0 sudo[132898]: pam_unix(sudo:session): session closed for user root
Oct 11 04:33:46 compute-0 sudo[132923]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -- raw list --format json
Oct 11 04:33:46 compute-0 sudo[132923]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:33:47 compute-0 podman[132990]: 2025-10-11 04:33:47.02768292 +0000 UTC m=+0.059158445 container create a23612b95ce766ef65812e70f0f6f432ba4f1ad04e28248838a7e19a3b59935d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_khorana, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 11 04:33:47 compute-0 systemd[1]: Started libpod-conmon-a23612b95ce766ef65812e70f0f6f432ba4f1ad04e28248838a7e19a3b59935d.scope.
Oct 11 04:33:47 compute-0 podman[132990]: 2025-10-11 04:33:46.997020153 +0000 UTC m=+0.028495758 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:33:47 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:33:47 compute-0 podman[132990]: 2025-10-11 04:33:47.121230468 +0000 UTC m=+0.152706023 container init a23612b95ce766ef65812e70f0f6f432ba4f1ad04e28248838a7e19a3b59935d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_khorana, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:33:47 compute-0 podman[132990]: 2025-10-11 04:33:47.131371676 +0000 UTC m=+0.162847231 container start a23612b95ce766ef65812e70f0f6f432ba4f1ad04e28248838a7e19a3b59935d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_khorana, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 04:33:47 compute-0 jolly_khorana[133006]: 167 167
Oct 11 04:33:47 compute-0 podman[132990]: 2025-10-11 04:33:47.136413443 +0000 UTC m=+0.167888998 container attach a23612b95ce766ef65812e70f0f6f432ba4f1ad04e28248838a7e19a3b59935d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_khorana, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 11 04:33:47 compute-0 systemd[1]: libpod-a23612b95ce766ef65812e70f0f6f432ba4f1ad04e28248838a7e19a3b59935d.scope: Deactivated successfully.
Oct 11 04:33:47 compute-0 podman[132990]: 2025-10-11 04:33:47.13712859 +0000 UTC m=+0.168604115 container died a23612b95ce766ef65812e70f0f6f432ba4f1ad04e28248838a7e19a3b59935d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_khorana, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True)
Oct 11 04:33:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-bda03fb3aaafd5fc4d21543fe08d3b51117439faf7c8169c79892021097a6fbe-merged.mount: Deactivated successfully.
Oct 11 04:33:47 compute-0 podman[132990]: 2025-10-11 04:33:47.17345954 +0000 UTC m=+0.204935065 container remove a23612b95ce766ef65812e70f0f6f432ba4f1ad04e28248838a7e19a3b59935d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_khorana, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:33:47 compute-0 systemd[1]: libpod-conmon-a23612b95ce766ef65812e70f0f6f432ba4f1ad04e28248838a7e19a3b59935d.scope: Deactivated successfully.
Oct 11 04:33:47 compute-0 ceph-mon[74243]: pgmap v340: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:33:47 compute-0 podman[133029]: 2025-10-11 04:33:47.39105556 +0000 UTC m=+0.055471449 container create 286682ffad2d3158258651094856bdc1eea7affacdde3f438821fdbc7c4ba03a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_kepler, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:33:47 compute-0 systemd[1]: Started libpod-conmon-286682ffad2d3158258651094856bdc1eea7affacdde3f438821fdbc7c4ba03a.scope.
Oct 11 04:33:47 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:33:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07503665f9dc92d9ab2950981f523e3b61cdfe6231532cdb51e04baf2f07a36e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:33:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07503665f9dc92d9ab2950981f523e3b61cdfe6231532cdb51e04baf2f07a36e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:33:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07503665f9dc92d9ab2950981f523e3b61cdfe6231532cdb51e04baf2f07a36e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:33:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07503665f9dc92d9ab2950981f523e3b61cdfe6231532cdb51e04baf2f07a36e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:33:47 compute-0 podman[133029]: 2025-10-11 04:33:47.464068337 +0000 UTC m=+0.128484256 container init 286682ffad2d3158258651094856bdc1eea7affacdde3f438821fdbc7c4ba03a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:33:47 compute-0 podman[133029]: 2025-10-11 04:33:47.3718407 +0000 UTC m=+0.036256619 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:33:47 compute-0 podman[133029]: 2025-10-11 04:33:47.470240982 +0000 UTC m=+0.134656891 container start 286682ffad2d3158258651094856bdc1eea7affacdde3f438821fdbc7c4ba03a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_kepler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 11 04:33:47 compute-0 podman[133029]: 2025-10-11 04:33:47.474695216 +0000 UTC m=+0.139111185 container attach 286682ffad2d3158258651094856bdc1eea7affacdde3f438821fdbc7c4ba03a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_kepler, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:33:48 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v341: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:33:48 compute-0 nice_kepler[133047]: {
Oct 11 04:33:48 compute-0 nice_kepler[133047]:     "235849fc-4683-43e5-9b6a-a0d6f8d1cee8": {
Oct 11 04:33:48 compute-0 nice_kepler[133047]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:33:48 compute-0 nice_kepler[133047]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:33:48 compute-0 nice_kepler[133047]:         "osd_id": 1,
Oct 11 04:33:48 compute-0 nice_kepler[133047]:         "osd_uuid": "235849fc-4683-43e5-9b6a-a0d6f8d1cee8",
Oct 11 04:33:48 compute-0 nice_kepler[133047]:         "type": "bluestore"
Oct 11 04:33:48 compute-0 nice_kepler[133047]:     },
Oct 11 04:33:48 compute-0 nice_kepler[133047]:     "29ed28f5-c2da-4c6f-bb64-dc7391248f4a": {
Oct 11 04:33:48 compute-0 nice_kepler[133047]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:33:48 compute-0 nice_kepler[133047]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:33:48 compute-0 nice_kepler[133047]:         "osd_id": 0,
Oct 11 04:33:48 compute-0 nice_kepler[133047]:         "osd_uuid": "29ed28f5-c2da-4c6f-bb64-dc7391248f4a",
Oct 11 04:33:48 compute-0 nice_kepler[133047]:         "type": "bluestore"
Oct 11 04:33:48 compute-0 nice_kepler[133047]:     },
Oct 11 04:33:48 compute-0 nice_kepler[133047]:     "30486846-2cc7-4787-8e36-0fb42ef328c5": {
Oct 11 04:33:48 compute-0 nice_kepler[133047]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:33:48 compute-0 nice_kepler[133047]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:33:48 compute-0 nice_kepler[133047]:         "osd_id": 2,
Oct 11 04:33:48 compute-0 nice_kepler[133047]:         "osd_uuid": "30486846-2cc7-4787-8e36-0fb42ef328c5",
Oct 11 04:33:48 compute-0 nice_kepler[133047]:         "type": "bluestore"
Oct 11 04:33:48 compute-0 nice_kepler[133047]:     }
Oct 11 04:33:48 compute-0 nice_kepler[133047]: }
Oct 11 04:33:48 compute-0 systemd[1]: libpod-286682ffad2d3158258651094856bdc1eea7affacdde3f438821fdbc7c4ba03a.scope: Deactivated successfully.
Oct 11 04:33:48 compute-0 systemd[1]: libpod-286682ffad2d3158258651094856bdc1eea7affacdde3f438821fdbc7c4ba03a.scope: Consumed 1.023s CPU time.
Oct 11 04:33:48 compute-0 podman[133080]: 2025-10-11 04:33:48.545710818 +0000 UTC m=+0.036317971 container died 286682ffad2d3158258651094856bdc1eea7affacdde3f438821fdbc7c4ba03a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:33:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-07503665f9dc92d9ab2950981f523e3b61cdfe6231532cdb51e04baf2f07a36e-merged.mount: Deactivated successfully.
Oct 11 04:33:48 compute-0 podman[133080]: 2025-10-11 04:33:48.609964361 +0000 UTC m=+0.100571484 container remove 286682ffad2d3158258651094856bdc1eea7affacdde3f438821fdbc7c4ba03a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_kepler, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 04:33:48 compute-0 systemd[1]: libpod-conmon-286682ffad2d3158258651094856bdc1eea7affacdde3f438821fdbc7c4ba03a.scope: Deactivated successfully.
Oct 11 04:33:48 compute-0 sudo[132923]: pam_unix(sudo:session): session closed for user root
Oct 11 04:33:48 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:33:48 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:33:48 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:33:48 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:33:48 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev 9216002d-5e1a-4a2d-a5ba-e6ca2b999bf8 does not exist
Oct 11 04:33:48 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev c4045bab-1df5-46e8-ba05-86fa86e1ba08 does not exist
Oct 11 04:33:48 compute-0 sudo[133095]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:33:48 compute-0 sudo[133095]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:33:48 compute-0 sudo[133095]: pam_unix(sudo:session): session closed for user root
Oct 11 04:33:48 compute-0 sudo[133120]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 04:33:48 compute-0 sudo[133120]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:33:48 compute-0 sudo[133120]: pam_unix(sudo:session): session closed for user root
Oct 11 04:33:49 compute-0 ceph-mon[74243]: pgmap v341: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:33:49 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:33:49 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:33:49 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:33:50 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v342: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:33:50 compute-0 sshd-session[133145]: Accepted publickey for zuul from 192.168.122.30 port 55564 ssh2: ECDSA SHA256:fsXfQ2jtOYwhi5zevC2AykH2PVPp1BPcbFOVNPoZIaQ
Oct 11 04:33:50 compute-0 systemd-logind[801]: New session 45 of user zuul.
Oct 11 04:33:50 compute-0 systemd[1]: Started Session 45 of User zuul.
Oct 11 04:33:50 compute-0 sshd-session[133145]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 11 04:33:51 compute-0 ceph-mon[74243]: pgmap v342: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:33:51 compute-0 python3.9[133298]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 04:33:52 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v343: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:33:53 compute-0 sudo[133452]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnvdryxpmmvrcksfcptsmsaeidntpzqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157232.6660028-50-249818564953872/AnsiballZ_file.py'
Oct 11 04:33:53 compute-0 sudo[133452]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:33:53 compute-0 ceph-mon[74243]: pgmap v343: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:33:53 compute-0 python3.9[133454]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:33:53 compute-0 sudo[133452]: pam_unix(sudo:session): session closed for user root
Oct 11 04:33:54 compute-0 sudo[133604]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxgjzcwpxxkqohfmhuusleqbrabtfwze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157233.6782403-50-167843481858072/AnsiballZ_file.py'
Oct 11 04:33:54 compute-0 sudo[133604]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:33:54 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v344: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:33:54 compute-0 python3.9[133606]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:33:54 compute-0 sudo[133604]: pam_unix(sudo:session): session closed for user root
Oct 11 04:33:54 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:33:55 compute-0 sudo[133756]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egpbutzteciabnqkxtyzhrszrdhgydyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157234.5135753-65-67786515293414/AnsiballZ_stat.py'
Oct 11 04:33:55 compute-0 sudo[133756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:33:55 compute-0 ceph-mon[74243]: pgmap v344: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:33:55 compute-0 python3.9[133758]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:33:55 compute-0 sudo[133756]: pam_unix(sudo:session): session closed for user root
Oct 11 04:33:55 compute-0 sudo[133879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgdipuocdzeklkdoognxkfykhgbxeyfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157234.5135753-65-67786515293414/AnsiballZ_copy.py'
Oct 11 04:33:55 compute-0 sudo[133879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:33:56 compute-0 ceph-mgr[74542]: [balancer INFO root] Optimize plan auto_2025-10-11_04:33:56
Oct 11 04:33:56 compute-0 ceph-mgr[74542]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:33:56 compute-0 ceph-mgr[74542]: [balancer INFO root] do_upmap
Oct 11 04:33:56 compute-0 ceph-mgr[74542]: [balancer INFO root] pools ['images', 'default.rgw.meta', 'default.rgw.log', '.mgr', 'volumes', 'cephfs.cephfs.data', 'default.rgw.control', 'backups', 'vms', 'cephfs.cephfs.meta', '.rgw.root']
Oct 11 04:33:56 compute-0 ceph-mgr[74542]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:33:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:33:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:33:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:33:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:33:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:33:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:33:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:33:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:33:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:33:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:33:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:33:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:33:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:33:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:33:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:33:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:33:56 compute-0 python3.9[133881]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760157234.5135753-65-67786515293414/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=1c365e50c2e90ecc77a91005b19ebd60a672df7b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:33:56 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v345: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:33:56 compute-0 sudo[133879]: pam_unix(sudo:session): session closed for user root
Oct 11 04:33:56 compute-0 sudo[134031]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozciboojwrqpknipyuhfckqbvzteiutl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157236.382252-65-137570939595768/AnsiballZ_stat.py'
Oct 11 04:33:56 compute-0 sudo[134031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:33:56 compute-0 python3.9[134033]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:33:56 compute-0 sudo[134031]: pam_unix(sudo:session): session closed for user root
Oct 11 04:33:57 compute-0 ceph-mon[74243]: pgmap v345: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:33:57 compute-0 sudo[134154]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-inzaujzfvdndeccubianzpqjnxdvogqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157236.382252-65-137570939595768/AnsiballZ_copy.py'
Oct 11 04:33:57 compute-0 sudo[134154]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:33:57 compute-0 python3.9[134156]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760157236.382252-65-137570939595768/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=9c3a15f9a790bdfc1719edba376626a550a13bf1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:33:57 compute-0 sudo[134154]: pam_unix(sudo:session): session closed for user root
Oct 11 04:33:58 compute-0 sudo[134306]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qisacrrczkblmbmcyuqzxvpvonghusrt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157237.8514795-65-199607697902811/AnsiballZ_stat.py'
Oct 11 04:33:58 compute-0 sudo[134306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:33:58 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v346: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:33:58 compute-0 python3.9[134308]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:33:58 compute-0 sudo[134306]: pam_unix(sudo:session): session closed for user root
Oct 11 04:33:58 compute-0 sudo[134429]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqmeyplgladzdlzacqvvsbhpeqizvuhk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157237.8514795-65-199607697902811/AnsiballZ_copy.py'
Oct 11 04:33:58 compute-0 sudo[134429]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:33:59 compute-0 python3.9[134431]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760157237.8514795-65-199607697902811/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=0a64f083d8a6f7209aec79955bca9ceebd435fd0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:33:59 compute-0 sudo[134429]: pam_unix(sudo:session): session closed for user root
Oct 11 04:33:59 compute-0 ceph-mon[74243]: pgmap v346: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:33:59 compute-0 sudo[134581]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvnnqfemnsozqjozlhnfhspaqkrhvovd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157239.3405967-109-208358312616656/AnsiballZ_file.py'
Oct 11 04:33:59 compute-0 sudo[134581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:33:59 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:33:59 compute-0 python3.9[134583]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:33:59 compute-0 sudo[134581]: pam_unix(sudo:session): session closed for user root
Oct 11 04:34:00 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v347: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:34:00 compute-0 sudo[134733]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adbxqondylumtdxwbvdxkyjfbuizgqdb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157240.1556334-109-210347269746991/AnsiballZ_file.py'
Oct 11 04:34:00 compute-0 sudo[134733]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:34:00 compute-0 python3.9[134735]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:34:00 compute-0 sudo[134733]: pam_unix(sudo:session): session closed for user root
Oct 11 04:34:01 compute-0 ceph-mon[74243]: pgmap v347: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:34:01 compute-0 sudo[134885]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kimwolsaydkmijiyqfjoqvpjwpmubwmk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157241.0252624-124-13047779374055/AnsiballZ_stat.py'
Oct 11 04:34:01 compute-0 sudo[134885]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:34:01 compute-0 python3.9[134887]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:34:01 compute-0 sudo[134885]: pam_unix(sudo:session): session closed for user root
Oct 11 04:34:02 compute-0 sudo[135008]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-osjnwsnqdbxuomcdtiaswiltdmxiclcm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157241.0252624-124-13047779374055/AnsiballZ_copy.py'
Oct 11 04:34:02 compute-0 sudo[135008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:34:02 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v348: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:34:02 compute-0 python3.9[135010]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760157241.0252624-124-13047779374055/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=de3b35f7c346313f88efa3a6eb8e08164cc4e0e5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:34:02 compute-0 sudo[135008]: pam_unix(sudo:session): session closed for user root
Oct 11 04:34:02 compute-0 sudo[135160]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qsnlbbnepyblzivtycdlqhszpjbjseie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157242.4476612-124-38754560003829/AnsiballZ_stat.py'
Oct 11 04:34:02 compute-0 sudo[135160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:34:03 compute-0 python3.9[135162]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:34:03 compute-0 sudo[135160]: pam_unix(sudo:session): session closed for user root
Oct 11 04:34:03 compute-0 ceph-mon[74243]: pgmap v348: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:34:03 compute-0 sudo[135283]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzutzhljzlbnvhtvadqjdvnaswuvjzeo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157242.4476612-124-38754560003829/AnsiballZ_copy.py'
Oct 11 04:34:03 compute-0 sudo[135283]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:34:03 compute-0 python3.9[135285]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760157242.4476612-124-38754560003829/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=ff51573d0059d940dedffef40be61ba9f48c28c3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:34:03 compute-0 sudo[135283]: pam_unix(sudo:session): session closed for user root
Oct 11 04:34:04 compute-0 sudo[135435]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epthxlammnsgxxxsrmxdieretdyhlfox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157243.8265057-124-185659981176110/AnsiballZ_stat.py'
Oct 11 04:34:04 compute-0 sudo[135435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:34:04 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v349: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:34:04 compute-0 python3.9[135437]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:34:04 compute-0 sudo[135435]: pam_unix(sudo:session): session closed for user root
Oct 11 04:34:04 compute-0 sudo[135558]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blohsuyaamumqqxbrtjyywzlugallwmi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157243.8265057-124-185659981176110/AnsiballZ_copy.py'
Oct 11 04:34:04 compute-0 sudo[135558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:34:04 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:34:04 compute-0 python3.9[135560]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760157243.8265057-124-185659981176110/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=132456f631a7c85448743d2a36246c37c1ec5041 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:34:05 compute-0 sudo[135558]: pam_unix(sudo:session): session closed for user root
Oct 11 04:34:05 compute-0 ceph-mon[74243]: pgmap v349: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:34:05 compute-0 sudo[135710]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbvinujdioodienqmkrolfgthqfglxqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157245.2667513-168-179376664677278/AnsiballZ_file.py'
Oct 11 04:34:05 compute-0 sudo[135710]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:34:05 compute-0 python3.9[135712]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:34:05 compute-0 sudo[135710]: pam_unix(sudo:session): session closed for user root
Oct 11 04:34:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:34:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:34:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:34:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:34:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:34:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:34:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:34:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:34:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:34:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:34:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:34:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:34:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 11 04:34:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:34:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:34:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:34:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:34:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:34:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:34:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:34:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:34:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:34:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 04:34:06 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v350: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:34:06 compute-0 sudo[135862]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrkmycnrngdubjzmjahzniwdenrshzid ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157245.928605-168-121920919025112/AnsiballZ_file.py'
Oct 11 04:34:06 compute-0 sudo[135862]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:34:06 compute-0 python3.9[135864]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:34:06 compute-0 sudo[135862]: pam_unix(sudo:session): session closed for user root
Oct 11 04:34:06 compute-0 sudo[136014]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpckcphdauahtlxfikxhfyitxwgmicbm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157246.627304-183-19908862825274/AnsiballZ_stat.py'
Oct 11 04:34:06 compute-0 sudo[136014]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:34:07 compute-0 python3.9[136016]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:34:07 compute-0 sudo[136014]: pam_unix(sudo:session): session closed for user root
Oct 11 04:34:07 compute-0 ceph-mon[74243]: pgmap v350: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:34:07 compute-0 sudo[136137]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhjtujelbekeumcyeqnfygvoojvczujc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157246.627304-183-19908862825274/AnsiballZ_copy.py'
Oct 11 04:34:07 compute-0 sudo[136137]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:34:07 compute-0 python3.9[136139]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760157246.627304-183-19908862825274/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=b5acfa3845e08a1cce9b0ab061dcbe53839d9c43 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:34:07 compute-0 sudo[136137]: pam_unix(sudo:session): session closed for user root
Oct 11 04:34:08 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v351: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:34:08 compute-0 sudo[136289]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rpogvxdcwxxmzgoqpufylccetwfgxqmg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157248.0188122-183-56294449931150/AnsiballZ_stat.py'
Oct 11 04:34:08 compute-0 sudo[136289]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:34:08 compute-0 python3.9[136291]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:34:08 compute-0 sudo[136289]: pam_unix(sudo:session): session closed for user root
Oct 11 04:34:08 compute-0 sudo[136412]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjyskxsgtpknjxljnsabyszhdqirnqjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157248.0188122-183-56294449931150/AnsiballZ_copy.py'
Oct 11 04:34:08 compute-0 sudo[136412]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:34:09 compute-0 python3.9[136414]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760157248.0188122-183-56294449931150/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=ff51573d0059d940dedffef40be61ba9f48c28c3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:34:09 compute-0 sudo[136412]: pam_unix(sudo:session): session closed for user root
Oct 11 04:34:09 compute-0 ceph-mon[74243]: pgmap v351: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:34:09 compute-0 sudo[136564]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zexbolbyxwgfvkilkrccyatspgfoeubh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157249.3636372-183-255877827337339/AnsiballZ_stat.py'
Oct 11 04:34:09 compute-0 sudo[136564]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:34:09 compute-0 python3.9[136566]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:34:09 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:34:09 compute-0 sudo[136564]: pam_unix(sudo:session): session closed for user root
Oct 11 04:34:10 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v352: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:34:10 compute-0 sudo[136687]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmlkrgtehdmqnarcebdwslvfmnycqzzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157249.3636372-183-255877827337339/AnsiballZ_copy.py'
Oct 11 04:34:10 compute-0 sudo[136687]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:34:10 compute-0 python3.9[136689]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760157249.3636372-183-255877827337339/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=c3de52aa782075b8d171b21ab678eb5f56e21968 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:34:10 compute-0 sudo[136687]: pam_unix(sudo:session): session closed for user root
Oct 11 04:34:11 compute-0 ceph-mon[74243]: pgmap v352: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:34:11 compute-0 sudo[136839]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utfbnpmbzpovpmjrqeupwpvvwgctnhbe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157251.3203719-243-122167991221174/AnsiballZ_file.py'
Oct 11 04:34:11 compute-0 sudo[136839]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:34:11 compute-0 python3.9[136841]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:34:11 compute-0 sudo[136839]: pam_unix(sudo:session): session closed for user root
Oct 11 04:34:12 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v353: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:34:12 compute-0 sudo[136991]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htdlhojpwkpzoiikzodghvsaswgxxswe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157252.0825236-251-27209522074888/AnsiballZ_stat.py'
Oct 11 04:34:12 compute-0 sudo[136991]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:34:12 compute-0 python3.9[136993]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:34:12 compute-0 sudo[136991]: pam_unix(sudo:session): session closed for user root
Oct 11 04:34:13 compute-0 sudo[137114]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zydknpktmxyohbiweiudltwabmwefvnh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157252.0825236-251-27209522074888/AnsiballZ_copy.py'
Oct 11 04:34:13 compute-0 sudo[137114]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:34:13 compute-0 python3.9[137116]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760157252.0825236-251-27209522074888/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=520cbc50dc10709fc8e2e65b75dc4f8582e9dfa8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:34:13 compute-0 sudo[137114]: pam_unix(sudo:session): session closed for user root
Oct 11 04:34:13 compute-0 ceph-mon[74243]: pgmap v353: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:34:14 compute-0 sudo[137266]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slifjwcoxcoeahgfamxizlulyzvskwql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157253.5909553-267-196946995897130/AnsiballZ_file.py'
Oct 11 04:34:14 compute-0 sudo[137266]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:34:14 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v354: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:34:14 compute-0 python3.9[137268]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:34:14 compute-0 sudo[137266]: pam_unix(sudo:session): session closed for user root
Oct 11 04:34:14 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:34:14 compute-0 sudo[137418]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtnzmbtluhigzqgqbvyoturnyibubmoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157254.481966-275-28499725187599/AnsiballZ_stat.py'
Oct 11 04:34:14 compute-0 sudo[137418]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:34:15 compute-0 python3.9[137420]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:34:15 compute-0 sudo[137418]: pam_unix(sudo:session): session closed for user root
Oct 11 04:34:15 compute-0 ceph-mon[74243]: pgmap v354: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:34:15 compute-0 sudo[137541]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcazzhwrveyyxqnxdgotaelitqswgchq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157254.481966-275-28499725187599/AnsiballZ_copy.py'
Oct 11 04:34:15 compute-0 sudo[137541]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:34:16 compute-0 python3.9[137543]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760157254.481966-275-28499725187599/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=520cbc50dc10709fc8e2e65b75dc4f8582e9dfa8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:34:16 compute-0 sudo[137541]: pam_unix(sudo:session): session closed for user root
Oct 11 04:34:16 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v355: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:34:16 compute-0 sudo[137693]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymjicfoduxxjfhkultwjyxtddovxzyeg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157256.312173-291-255823006688313/AnsiballZ_file.py'
Oct 11 04:34:16 compute-0 sudo[137693]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:34:16 compute-0 python3.9[137695]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:34:16 compute-0 sudo[137693]: pam_unix(sudo:session): session closed for user root
Oct 11 04:34:17 compute-0 ceph-mon[74243]: pgmap v355: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:34:17 compute-0 sudo[137845]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtauyuvhrbschxpvjgmsbcfzghpavwac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157257.1513002-299-154140041759467/AnsiballZ_stat.py'
Oct 11 04:34:17 compute-0 sudo[137845]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:34:17 compute-0 python3.9[137847]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:34:17 compute-0 sudo[137845]: pam_unix(sudo:session): session closed for user root
Oct 11 04:34:18 compute-0 sudo[137968]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqmukpbtuqiukkjfpmctcoibaywjxxae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157257.1513002-299-154140041759467/AnsiballZ_copy.py'
Oct 11 04:34:18 compute-0 sudo[137968]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:34:18 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v356: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:34:18 compute-0 python3.9[137970]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760157257.1513002-299-154140041759467/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=520cbc50dc10709fc8e2e65b75dc4f8582e9dfa8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:34:18 compute-0 sudo[137968]: pam_unix(sudo:session): session closed for user root
Oct 11 04:34:18 compute-0 sudo[138120]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sajhvpewhhgtfpjprkcqcyffgwoldczk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157258.5865104-315-4632199971704/AnsiballZ_file.py'
Oct 11 04:34:18 compute-0 sudo[138120]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:34:19 compute-0 python3.9[138122]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:34:19 compute-0 sudo[138120]: pam_unix(sudo:session): session closed for user root
Oct 11 04:34:19 compute-0 ceph-mon[74243]: pgmap v356: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:34:19 compute-0 sudo[138272]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwjlxgolqmtepvymurepzxzazsowhbuq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157259.3241177-323-14217593898589/AnsiballZ_stat.py'
Oct 11 04:34:19 compute-0 sudo[138272]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:34:19 compute-0 python3.9[138274]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:34:19 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:34:19 compute-0 sudo[138272]: pam_unix(sudo:session): session closed for user root
Oct 11 04:34:20 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v357: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:34:20 compute-0 sudo[138395]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhcfugxbqegrofyqcwfubwncifwgjodx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157259.3241177-323-14217593898589/AnsiballZ_copy.py'
Oct 11 04:34:20 compute-0 sudo[138395]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:34:20 compute-0 python3.9[138397]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760157259.3241177-323-14217593898589/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=520cbc50dc10709fc8e2e65b75dc4f8582e9dfa8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:34:20 compute-0 sudo[138395]: pam_unix(sudo:session): session closed for user root
Oct 11 04:34:21 compute-0 sudo[138547]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nuuxldprxahpzwfpabehwnwplrdxoeak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157261.0006816-339-78951286529968/AnsiballZ_file.py'
Oct 11 04:34:21 compute-0 sudo[138547]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:34:21 compute-0 ceph-mon[74243]: pgmap v357: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:34:21 compute-0 python3.9[138549]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:34:21 compute-0 sudo[138547]: pam_unix(sudo:session): session closed for user root
Oct 11 04:34:22 compute-0 sudo[138699]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fccdhfjgltpgvovcyixgblxhfmlbinck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157261.7191882-347-51188823766105/AnsiballZ_stat.py'
Oct 11 04:34:22 compute-0 sudo[138699]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:34:22 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v358: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:34:22 compute-0 python3.9[138701]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:34:22 compute-0 sudo[138699]: pam_unix(sudo:session): session closed for user root
Oct 11 04:34:22 compute-0 sudo[138822]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynaiovfjczyvstheydzijhcvebslxwhy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157261.7191882-347-51188823766105/AnsiballZ_copy.py'
Oct 11 04:34:22 compute-0 sudo[138822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:34:23 compute-0 python3.9[138824]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760157261.7191882-347-51188823766105/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=520cbc50dc10709fc8e2e65b75dc4f8582e9dfa8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:34:23 compute-0 sudo[138822]: pam_unix(sudo:session): session closed for user root
Oct 11 04:34:23 compute-0 ceph-mon[74243]: pgmap v358: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:34:23 compute-0 sudo[138974]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtizidxhutxtlscmlpgocdjmlrpqtkyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157263.2872694-363-5812248783687/AnsiballZ_file.py'
Oct 11 04:34:23 compute-0 sudo[138974]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:34:23 compute-0 python3.9[138976]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:34:23 compute-0 sudo[138974]: pam_unix(sudo:session): session closed for user root
Oct 11 04:34:24 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v359: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:34:24 compute-0 sudo[139126]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yakfupsvnicfwxuttqbrfctmymvxdtus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157264.0974855-371-53122574405046/AnsiballZ_stat.py'
Oct 11 04:34:24 compute-0 sudo[139126]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:34:24 compute-0 python3.9[139128]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:34:24 compute-0 sudo[139126]: pam_unix(sudo:session): session closed for user root
Oct 11 04:34:24 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:34:25 compute-0 sudo[139249]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yiosjeagkdkkkiphrvjxuyokmphigtyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157264.0974855-371-53122574405046/AnsiballZ_copy.py'
Oct 11 04:34:25 compute-0 sudo[139249]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:34:25 compute-0 python3.9[139251]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760157264.0974855-371-53122574405046/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=520cbc50dc10709fc8e2e65b75dc4f8582e9dfa8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:34:25 compute-0 sudo[139249]: pam_unix(sudo:session): session closed for user root
Oct 11 04:34:25 compute-0 ceph-mon[74243]: pgmap v359: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:34:25 compute-0 sshd-session[133148]: Connection closed by 192.168.122.30 port 55564
Oct 11 04:34:25 compute-0 sshd-session[133145]: pam_unix(sshd:session): session closed for user zuul
Oct 11 04:34:25 compute-0 systemd-logind[801]: Session 45 logged out. Waiting for processes to exit.
Oct 11 04:34:25 compute-0 systemd[1]: session-45.scope: Deactivated successfully.
Oct 11 04:34:25 compute-0 systemd[1]: session-45.scope: Consumed 26.673s CPU time.
Oct 11 04:34:25 compute-0 systemd-logind[801]: Removed session 45.
Oct 11 04:34:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:34:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:34:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:34:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:34:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:34:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:34:26 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v360: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:34:27 compute-0 ceph-mon[74243]: pgmap v360: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:34:28 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v361: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:34:28 compute-0 ceph-mon[74243]: pgmap v361: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:34:29 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:34:30 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v362: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:34:31 compute-0 ceph-mon[74243]: pgmap v362: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:34:31 compute-0 sshd-session[139276]: Accepted publickey for zuul from 192.168.122.30 port 39418 ssh2: ECDSA SHA256:fsXfQ2jtOYwhi5zevC2AykH2PVPp1BPcbFOVNPoZIaQ
Oct 11 04:34:31 compute-0 systemd-logind[801]: New session 46 of user zuul.
Oct 11 04:34:31 compute-0 systemd[1]: Started Session 46 of User zuul.
Oct 11 04:34:31 compute-0 sshd-session[139276]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 11 04:34:32 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v363: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:34:32 compute-0 sudo[139429]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmharietutfbbugujsbzvyjotyvvfiil ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157272.086765-22-142407365546086/AnsiballZ_file.py'
Oct 11 04:34:32 compute-0 sudo[139429]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:34:32 compute-0 python3.9[139431]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:34:32 compute-0 sudo[139429]: pam_unix(sudo:session): session closed for user root
Oct 11 04:34:33 compute-0 ceph-mon[74243]: pgmap v363: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:34:33 compute-0 sudo[139581]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mihvdnbufidcthlqalkjxhdcxhwwqnly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157273.1637616-34-227441047523461/AnsiballZ_stat.py'
Oct 11 04:34:33 compute-0 sudo[139581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:34:33 compute-0 python3.9[139583]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:34:33 compute-0 sudo[139581]: pam_unix(sudo:session): session closed for user root
Oct 11 04:34:34 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v364: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:34:34 compute-0 sudo[139704]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eiaplbeiczuxwoqqurvgfmykwdyymhyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157273.1637616-34-227441047523461/AnsiballZ_copy.py'
Oct 11 04:34:34 compute-0 sudo[139704]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:34:34 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:34:34 compute-0 python3.9[139706]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760157273.1637616-34-227441047523461/.source.conf _original_basename=ceph.conf follow=False checksum=e7bba631a9071d3c6bb75ad73321a03e3a7607ae backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:34:34 compute-0 sudo[139704]: pam_unix(sudo:session): session closed for user root
Oct 11 04:34:35 compute-0 ceph-mon[74243]: pgmap v364: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:34:35 compute-0 sudo[139856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cadmemfkmmeddmypxlgnimjswaurblra ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157275.099871-34-29340565180354/AnsiballZ_stat.py'
Oct 11 04:34:35 compute-0 sudo[139856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:34:35 compute-0 python3.9[139858]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:34:35 compute-0 sudo[139856]: pam_unix(sudo:session): session closed for user root
Oct 11 04:34:36 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v365: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:34:36 compute-0 sudo[139979]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smwwffymeyfsvydmhrvcoxcpmowyzyhc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157275.099871-34-29340565180354/AnsiballZ_copy.py'
Oct 11 04:34:36 compute-0 sudo[139979]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:34:36 compute-0 python3.9[139981]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760157275.099871-34-29340565180354/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=7de10bc8d1f738e1402f9fea5caa06ca86e8a39c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:34:36 compute-0 sudo[139979]: pam_unix(sudo:session): session closed for user root
Oct 11 04:34:36 compute-0 sshd-session[139279]: Connection closed by 192.168.122.30 port 39418
Oct 11 04:34:36 compute-0 sshd-session[139276]: pam_unix(sshd:session): session closed for user zuul
Oct 11 04:34:36 compute-0 systemd[1]: session-46.scope: Deactivated successfully.
Oct 11 04:34:36 compute-0 systemd[1]: session-46.scope: Consumed 3.408s CPU time.
Oct 11 04:34:36 compute-0 systemd-logind[801]: Session 46 logged out. Waiting for processes to exit.
Oct 11 04:34:36 compute-0 systemd-logind[801]: Removed session 46.
Oct 11 04:34:37 compute-0 ceph-mon[74243]: pgmap v365: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:34:38 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v366: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:34:39 compute-0 ceph-mon[74243]: pgmap v366: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:34:39 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:34:40 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v367: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:34:41 compute-0 ceph-mon[74243]: pgmap v367: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:34:42 compute-0 sshd-session[140006]: Accepted publickey for zuul from 192.168.122.30 port 56950 ssh2: ECDSA SHA256:fsXfQ2jtOYwhi5zevC2AykH2PVPp1BPcbFOVNPoZIaQ
Oct 11 04:34:42 compute-0 systemd-logind[801]: New session 47 of user zuul.
Oct 11 04:34:42 compute-0 systemd[1]: Started Session 47 of User zuul.
Oct 11 04:34:42 compute-0 sshd-session[140006]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 11 04:34:42 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v368: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:34:43 compute-0 ceph-mon[74243]: pgmap v368: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:34:43 compute-0 python3.9[140159]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 04:34:44 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v369: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:34:44 compute-0 sudo[140313]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqayksdrkitfzjdqrxqignlunvnjczax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157283.8325756-34-152266685536491/AnsiballZ_file.py'
Oct 11 04:34:44 compute-0 sudo[140313]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:34:44 compute-0 python3.9[140315]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:34:44 compute-0 sudo[140313]: pam_unix(sudo:session): session closed for user root
Oct 11 04:34:44 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:34:44 compute-0 ceph-mon[74243]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Oct 11 04:34:44 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:34:44.873623) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 04:34:44 compute-0 ceph-mon[74243]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Oct 11 04:34:44 compute-0 ceph-mon[74243]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760157284873672, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 1521, "num_deletes": 250, "total_data_size": 2314087, "memory_usage": 2353640, "flush_reason": "Manual Compaction"}
Oct 11 04:34:44 compute-0 ceph-mon[74243]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Oct 11 04:34:44 compute-0 ceph-mon[74243]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760157284887403, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 1334776, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7346, "largest_seqno": 8866, "table_properties": {"data_size": 1329731, "index_size": 2249, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13544, "raw_average_key_size": 20, "raw_value_size": 1318246, "raw_average_value_size": 1952, "num_data_blocks": 107, "num_entries": 675, "num_filter_entries": 675, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760157129, "oldest_key_time": 1760157129, "file_creation_time": 1760157284, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7a520806-b9ee-4391-a2e1-17ca2b78e946", "db_session_id": "LQX577T3ABHDCRGGY4EM", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:34:44 compute-0 ceph-mon[74243]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 13836 microseconds, and 7070 cpu microseconds.
Oct 11 04:34:44 compute-0 ceph-mon[74243]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 04:34:44 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:34:44.887458) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 1334776 bytes OK
Oct 11 04:34:44 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:34:44.887486) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Oct 11 04:34:44 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:34:44.889277) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Oct 11 04:34:44 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:34:44.889299) EVENT_LOG_v1 {"time_micros": 1760157284889290, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 04:34:44 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:34:44.889319) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 04:34:44 compute-0 ceph-mon[74243]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 2307390, prev total WAL file size 2307390, number of live WAL files 2.
Oct 11 04:34:44 compute-0 ceph-mon[74243]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:34:44 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:34:44.890607) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323531' seq:0, type:0; will stop at (end)
Oct 11 04:34:44 compute-0 ceph-mon[74243]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 04:34:44 compute-0 ceph-mon[74243]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(1303KB)], [20(7274KB)]
Oct 11 04:34:44 compute-0 ceph-mon[74243]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760157284890676, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 8783877, "oldest_snapshot_seqno": -1}
Oct 11 04:34:44 compute-0 ceph-mon[74243]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 3327 keys, 6841847 bytes, temperature: kUnknown
Oct 11 04:34:44 compute-0 ceph-mon[74243]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760157284938424, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 6841847, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6816461, "index_size": 15997, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8325, "raw_key_size": 79717, "raw_average_key_size": 23, "raw_value_size": 6753168, "raw_average_value_size": 2029, "num_data_blocks": 710, "num_entries": 3327, "num_filter_entries": 3327, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760156706, "oldest_key_time": 0, "file_creation_time": 1760157284, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7a520806-b9ee-4391-a2e1-17ca2b78e946", "db_session_id": "LQX577T3ABHDCRGGY4EM", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:34:44 compute-0 ceph-mon[74243]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 04:34:44 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:34:44.938730) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 6841847 bytes
Oct 11 04:34:44 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:34:44.940249) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 183.5 rd, 142.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 7.1 +0.0 blob) out(6.5 +0.0 blob), read-write-amplify(11.7) write-amplify(5.1) OK, records in: 3767, records dropped: 440 output_compression: NoCompression
Oct 11 04:34:44 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:34:44.940271) EVENT_LOG_v1 {"time_micros": 1760157284940260, "job": 6, "event": "compaction_finished", "compaction_time_micros": 47873, "compaction_time_cpu_micros": 33237, "output_level": 6, "num_output_files": 1, "total_output_size": 6841847, "num_input_records": 3767, "num_output_records": 3327, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 04:34:44 compute-0 ceph-mon[74243]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:34:44 compute-0 ceph-mon[74243]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760157284940635, "job": 6, "event": "table_file_deletion", "file_number": 22}
Oct 11 04:34:44 compute-0 ceph-mon[74243]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:34:44 compute-0 ceph-mon[74243]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760157284942059, "job": 6, "event": "table_file_deletion", "file_number": 20}
Oct 11 04:34:44 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:34:44.890511) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:34:44 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:34:44.942404) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:34:44 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:34:44.942409) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:34:44 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:34:44.942411) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:34:44 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:34:44.942413) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:34:44 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:34:44.942415) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:34:45 compute-0 sudo[140465]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohxlvfabmfmjjykcjjecroewvgsdsqrt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157284.8593688-34-76770411121238/AnsiballZ_file.py'
Oct 11 04:34:45 compute-0 sudo[140465]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:34:45 compute-0 ceph-mon[74243]: pgmap v369: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:34:45 compute-0 python3.9[140467]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:34:45 compute-0 sudo[140465]: pam_unix(sudo:session): session closed for user root
Oct 11 04:34:46 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v370: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:34:46 compute-0 python3.9[140617]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 04:34:47 compute-0 sudo[140767]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eypibksstwpsjrycwwssyhwqbpvetcuf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157286.6707675-57-100619382228565/AnsiballZ_seboolean.py'
Oct 11 04:34:47 compute-0 sudo[140767]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:34:47 compute-0 ceph-mon[74243]: pgmap v370: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:34:47 compute-0 python3.9[140769]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Oct 11 04:34:48 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v371: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:34:48 compute-0 sudo[140767]: pam_unix(sudo:session): session closed for user root
Oct 11 04:34:48 compute-0 sudo[140801]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:34:48 compute-0 dbus-broker-launch[785]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Oct 11 04:34:48 compute-0 sudo[140801]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:34:48 compute-0 sudo[140801]: pam_unix(sudo:session): session closed for user root
Oct 11 04:34:49 compute-0 sudo[140849]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:34:49 compute-0 sudo[140849]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:34:49 compute-0 sudo[140849]: pam_unix(sudo:session): session closed for user root
Oct 11 04:34:49 compute-0 sudo[140903]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:34:49 compute-0 sudo[140903]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:34:49 compute-0 sudo[140903]: pam_unix(sudo:session): session closed for user root
Oct 11 04:34:49 compute-0 sudo[140951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 04:34:49 compute-0 sudo[140951]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:34:49 compute-0 sudo[141026]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwtjnuudjfoaqnsnlklwdihvvjthpewj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157288.980871-67-56431323021330/AnsiballZ_setup.py'
Oct 11 04:34:49 compute-0 sudo[141026]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:34:49 compute-0 ceph-mon[74243]: pgmap v371: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:34:49 compute-0 python3.9[141028]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 11 04:34:49 compute-0 sudo[140951]: pam_unix(sudo:session): session closed for user root
Oct 11 04:34:49 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:34:49 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:34:49 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:34:49 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:34:49 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:34:49 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:34:49 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev 80d38041-632c-44f0-aa6f-132b43e2e600 does not exist
Oct 11 04:34:49 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev a7f2d8ec-56d7-4bd9-ac09-3f8bd0a32964 does not exist
Oct 11 04:34:49 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev 6b8f8495-4291-4435-ac86-3477bc2bea26 does not exist
Oct 11 04:34:49 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:34:49 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:34:49 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:34:49 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:34:49 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:34:49 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:34:49 compute-0 sudo[141066]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:34:49 compute-0 sudo[141066]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:34:49 compute-0 sudo[141066]: pam_unix(sudo:session): session closed for user root
Oct 11 04:34:49 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:34:49 compute-0 sudo[141026]: pam_unix(sudo:session): session closed for user root
Oct 11 04:34:49 compute-0 sudo[141091]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:34:49 compute-0 sudo[141091]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:34:49 compute-0 sudo[141091]: pam_unix(sudo:session): session closed for user root
Oct 11 04:34:49 compute-0 sudo[141116]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:34:49 compute-0 sudo[141116]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:34:49 compute-0 sudo[141116]: pam_unix(sudo:session): session closed for user root
Oct 11 04:34:50 compute-0 sudo[141141]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 04:34:50 compute-0 sudo[141141]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:34:50 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v372: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:34:50 compute-0 sudo[141266]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bedezhmabemjgcfyexxqjmhyxnaymhiv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157288.980871-67-56431323021330/AnsiballZ_dnf.py'
Oct 11 04:34:50 compute-0 sudo[141266]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:34:50 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:34:50 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:34:50 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:34:50 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:34:50 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:34:50 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:34:50 compute-0 podman[141283]: 2025-10-11 04:34:50.510387612 +0000 UTC m=+0.060095926 container create f246ef7b4e689fa692a1266c0a3b46f0871aae1619c747206a58e775c821e1df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_nobel, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:34:50 compute-0 systemd[1]: Started libpod-conmon-f246ef7b4e689fa692a1266c0a3b46f0871aae1619c747206a58e775c821e1df.scope.
Oct 11 04:34:50 compute-0 podman[141283]: 2025-10-11 04:34:50.478492369 +0000 UTC m=+0.028200703 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:34:50 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:34:50 compute-0 podman[141283]: 2025-10-11 04:34:50.62202788 +0000 UTC m=+0.171736164 container init f246ef7b4e689fa692a1266c0a3b46f0871aae1619c747206a58e775c821e1df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_nobel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:34:50 compute-0 podman[141283]: 2025-10-11 04:34:50.63329004 +0000 UTC m=+0.182998314 container start f246ef7b4e689fa692a1266c0a3b46f0871aae1619c747206a58e775c821e1df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_nobel, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:34:50 compute-0 podman[141283]: 2025-10-11 04:34:50.636597872 +0000 UTC m=+0.186306146 container attach f246ef7b4e689fa692a1266c0a3b46f0871aae1619c747206a58e775c821e1df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_nobel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:34:50 compute-0 flamboyant_nobel[141299]: 167 167
Oct 11 04:34:50 compute-0 systemd[1]: libpod-f246ef7b4e689fa692a1266c0a3b46f0871aae1619c747206a58e775c821e1df.scope: Deactivated successfully.
Oct 11 04:34:50 compute-0 conmon[141299]: conmon f246ef7b4e689fa692a1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f246ef7b4e689fa692a1266c0a3b46f0871aae1619c747206a58e775c821e1df.scope/container/memory.events
Oct 11 04:34:50 compute-0 podman[141283]: 2025-10-11 04:34:50.643276378 +0000 UTC m=+0.192984662 container died f246ef7b4e689fa692a1266c0a3b46f0871aae1619c747206a58e775c821e1df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_nobel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 11 04:34:50 compute-0 python3.9[141275]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 11 04:34:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-48a8c922e98d6959f34e90a0f2fffb189b8a533223ce92222f8d56a59617dd55-merged.mount: Deactivated successfully.
Oct 11 04:34:50 compute-0 podman[141283]: 2025-10-11 04:34:50.715983997 +0000 UTC m=+0.265692301 container remove f246ef7b4e689fa692a1266c0a3b46f0871aae1619c747206a58e775c821e1df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_nobel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:34:50 compute-0 systemd[1]: libpod-conmon-f246ef7b4e689fa692a1266c0a3b46f0871aae1619c747206a58e775c821e1df.scope: Deactivated successfully.
Oct 11 04:34:50 compute-0 podman[141325]: 2025-10-11 04:34:50.955762262 +0000 UTC m=+0.071342235 container create 3dcc65db4e07ba13db2937692f627d53791d092d8ea951daddee84f37d2b3c0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_noether, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507)
Oct 11 04:34:51 compute-0 systemd[1]: Started libpod-conmon-3dcc65db4e07ba13db2937692f627d53791d092d8ea951daddee84f37d2b3c0e.scope.
Oct 11 04:34:51 compute-0 podman[141325]: 2025-10-11 04:34:50.924468353 +0000 UTC m=+0.040048376 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:34:51 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:34:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/996ec83001f2c348ac466e532fb561517fcaa630557372c9f4bc42af6a853b2d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:34:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/996ec83001f2c348ac466e532fb561517fcaa630557372c9f4bc42af6a853b2d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:34:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/996ec83001f2c348ac466e532fb561517fcaa630557372c9f4bc42af6a853b2d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:34:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/996ec83001f2c348ac466e532fb561517fcaa630557372c9f4bc42af6a853b2d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:34:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/996ec83001f2c348ac466e532fb561517fcaa630557372c9f4bc42af6a853b2d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:34:51 compute-0 podman[141325]: 2025-10-11 04:34:51.060875486 +0000 UTC m=+0.176455509 container init 3dcc65db4e07ba13db2937692f627d53791d092d8ea951daddee84f37d2b3c0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_noether, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS)
Oct 11 04:34:51 compute-0 podman[141325]: 2025-10-11 04:34:51.068448975 +0000 UTC m=+0.184028918 container start 3dcc65db4e07ba13db2937692f627d53791d092d8ea951daddee84f37d2b3c0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_noether, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 11 04:34:51 compute-0 podman[141325]: 2025-10-11 04:34:51.095513698 +0000 UTC m=+0.211093661 container attach 3dcc65db4e07ba13db2937692f627d53791d092d8ea951daddee84f37d2b3c0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_noether, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 11 04:34:51 compute-0 ceph-mon[74243]: pgmap v372: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:34:51 compute-0 sudo[141266]: pam_unix(sudo:session): session closed for user root
Oct 11 04:34:52 compute-0 funny_noether[141343]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:34:52 compute-0 funny_noether[141343]: --> relative data size: 1.0
Oct 11 04:34:52 compute-0 funny_noether[141343]: --> All data devices are unavailable
Oct 11 04:34:52 compute-0 systemd[1]: libpod-3dcc65db4e07ba13db2937692f627d53791d092d8ea951daddee84f37d2b3c0e.scope: Deactivated successfully.
Oct 11 04:34:52 compute-0 podman[141325]: 2025-10-11 04:34:52.189891724 +0000 UTC m=+1.305471657 container died 3dcc65db4e07ba13db2937692f627d53791d092d8ea951daddee84f37d2b3c0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_noether, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:34:52 compute-0 systemd[1]: libpod-3dcc65db4e07ba13db2937692f627d53791d092d8ea951daddee84f37d2b3c0e.scope: Consumed 1.059s CPU time.
Oct 11 04:34:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-996ec83001f2c348ac466e532fb561517fcaa630557372c9f4bc42af6a853b2d-merged.mount: Deactivated successfully.
Oct 11 04:34:52 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v373: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:34:52 compute-0 podman[141325]: 2025-10-11 04:34:52.255377273 +0000 UTC m=+1.370957216 container remove 3dcc65db4e07ba13db2937692f627d53791d092d8ea951daddee84f37d2b3c0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_noether, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 11 04:34:52 compute-0 systemd[1]: libpod-conmon-3dcc65db4e07ba13db2937692f627d53791d092d8ea951daddee84f37d2b3c0e.scope: Deactivated successfully.
Oct 11 04:34:52 compute-0 sudo[141141]: pam_unix(sudo:session): session closed for user root
Oct 11 04:34:52 compute-0 sudo[141459]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:34:52 compute-0 sudo[141459]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:34:52 compute-0 sudo[141459]: pam_unix(sudo:session): session closed for user root
Oct 11 04:34:52 compute-0 sudo[141484]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:34:52 compute-0 sudo[141484]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:34:52 compute-0 sudo[141484]: pam_unix(sudo:session): session closed for user root
Oct 11 04:34:52 compute-0 sudo[141509]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:34:52 compute-0 sudo[141509]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:34:52 compute-0 sudo[141509]: pam_unix(sudo:session): session closed for user root
Oct 11 04:34:52 compute-0 sudo[141534]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -- lvm list --format json
Oct 11 04:34:52 compute-0 sudo[141534]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:34:52 compute-0 sudo[141671]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gyrftxujncxxcaxsgwweyhecgwdmomon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157292.189257-79-141023682821321/AnsiballZ_systemd.py'
Oct 11 04:34:52 compute-0 sudo[141671]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:34:52 compute-0 podman[141672]: 2025-10-11 04:34:52.930835328 +0000 UTC m=+0.056816575 container create 18d28d0e9dfe31c1bb1ef34746fd796272bfd44ec454f939a26b5c80e8594c40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_nightingale, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 11 04:34:52 compute-0 systemd[1]: Started libpod-conmon-18d28d0e9dfe31c1bb1ef34746fd796272bfd44ec454f939a26b5c80e8594c40.scope.
Oct 11 04:34:52 compute-0 podman[141672]: 2025-10-11 04:34:52.903739683 +0000 UTC m=+0.029720980 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:34:53 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:34:53 compute-0 podman[141672]: 2025-10-11 04:34:53.032141718 +0000 UTC m=+0.158122975 container init 18d28d0e9dfe31c1bb1ef34746fd796272bfd44ec454f939a26b5c80e8594c40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_nightingale, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:34:53 compute-0 podman[141672]: 2025-10-11 04:34:53.040076235 +0000 UTC m=+0.166057472 container start 18d28d0e9dfe31c1bb1ef34746fd796272bfd44ec454f939a26b5c80e8594c40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_nightingale, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:34:53 compute-0 podman[141672]: 2025-10-11 04:34:53.043804958 +0000 UTC m=+0.169786205 container attach 18d28d0e9dfe31c1bb1ef34746fd796272bfd44ec454f939a26b5c80e8594c40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_nightingale, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 11 04:34:53 compute-0 clever_nightingale[141690]: 167 167
Oct 11 04:34:53 compute-0 systemd[1]: libpod-18d28d0e9dfe31c1bb1ef34746fd796272bfd44ec454f939a26b5c80e8594c40.scope: Deactivated successfully.
Oct 11 04:34:53 compute-0 podman[141672]: 2025-10-11 04:34:53.047544401 +0000 UTC m=+0.173525648 container died 18d28d0e9dfe31c1bb1ef34746fd796272bfd44ec454f939a26b5c80e8594c40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_nightingale, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:34:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-1b9862b5bfc4a1a92959e136806422e80b15fa50f6e6180d186c9243a20493fd-merged.mount: Deactivated successfully.
Oct 11 04:34:53 compute-0 podman[141672]: 2025-10-11 04:34:53.102011346 +0000 UTC m=+0.227992593 container remove 18d28d0e9dfe31c1bb1ef34746fd796272bfd44ec454f939a26b5c80e8594c40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_nightingale, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:34:53 compute-0 systemd[1]: libpod-conmon-18d28d0e9dfe31c1bb1ef34746fd796272bfd44ec454f939a26b5c80e8594c40.scope: Deactivated successfully.
Oct 11 04:34:53 compute-0 python3.9[141680]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 11 04:34:53 compute-0 podman[141716]: 2025-10-11 04:34:53.279923902 +0000 UTC m=+0.043216526 container create b5ceaf74f68494644a16d8feac3c101df1196de7caff5db0162d1d59265a14aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 11 04:34:53 compute-0 systemd[1]: Started libpod-conmon-b5ceaf74f68494644a16d8feac3c101df1196de7caff5db0162d1d59265a14aa.scope.
Oct 11 04:34:53 compute-0 podman[141716]: 2025-10-11 04:34:53.255519125 +0000 UTC m=+0.018811759 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:34:53 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:34:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7d9e3129179f604818723a17a154b7df48f97ec355dcd9a319b4eaeddeadd16/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:34:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7d9e3129179f604818723a17a154b7df48f97ec355dcd9a319b4eaeddeadd16/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:34:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7d9e3129179f604818723a17a154b7df48f97ec355dcd9a319b4eaeddeadd16/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:34:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7d9e3129179f604818723a17a154b7df48f97ec355dcd9a319b4eaeddeadd16/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:34:53 compute-0 podman[141716]: 2025-10-11 04:34:53.367511721 +0000 UTC m=+0.130804365 container init b5ceaf74f68494644a16d8feac3c101df1196de7caff5db0162d1d59265a14aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_carson, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 11 04:34:53 compute-0 podman[141716]: 2025-10-11 04:34:53.379142981 +0000 UTC m=+0.142435595 container start b5ceaf74f68494644a16d8feac3c101df1196de7caff5db0162d1d59265a14aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_carson, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 11 04:34:53 compute-0 podman[141716]: 2025-10-11 04:34:53.382911164 +0000 UTC m=+0.146203788 container attach b5ceaf74f68494644a16d8feac3c101df1196de7caff5db0162d1d59265a14aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True)
Oct 11 04:34:53 compute-0 ceph-mon[74243]: pgmap v373: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:34:54 compute-0 festive_carson[141734]: {
Oct 11 04:34:54 compute-0 festive_carson[141734]:     "0": [
Oct 11 04:34:54 compute-0 festive_carson[141734]:         {
Oct 11 04:34:54 compute-0 festive_carson[141734]:             "devices": [
Oct 11 04:34:54 compute-0 festive_carson[141734]:                 "/dev/loop3"
Oct 11 04:34:54 compute-0 festive_carson[141734]:             ],
Oct 11 04:34:54 compute-0 festive_carson[141734]:             "lv_name": "ceph_lv0",
Oct 11 04:34:54 compute-0 festive_carson[141734]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:34:54 compute-0 festive_carson[141734]:             "lv_size": "21470642176",
Oct 11 04:34:54 compute-0 festive_carson[141734]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29ed28f5-c2da-4c6f-bb64-dc7391248f4a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:34:54 compute-0 festive_carson[141734]:             "lv_uuid": "SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf",
Oct 11 04:34:54 compute-0 festive_carson[141734]:             "name": "ceph_lv0",
Oct 11 04:34:54 compute-0 festive_carson[141734]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:34:54 compute-0 festive_carson[141734]:             "tags": {
Oct 11 04:34:54 compute-0 festive_carson[141734]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:34:54 compute-0 festive_carson[141734]:                 "ceph.block_uuid": "SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf",
Oct 11 04:34:54 compute-0 festive_carson[141734]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:34:54 compute-0 festive_carson[141734]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:34:54 compute-0 festive_carson[141734]:                 "ceph.cluster_name": "ceph",
Oct 11 04:34:54 compute-0 festive_carson[141734]:                 "ceph.crush_device_class": "",
Oct 11 04:34:54 compute-0 festive_carson[141734]:                 "ceph.encrypted": "0",
Oct 11 04:34:54 compute-0 festive_carson[141734]:                 "ceph.osd_fsid": "29ed28f5-c2da-4c6f-bb64-dc7391248f4a",
Oct 11 04:34:54 compute-0 festive_carson[141734]:                 "ceph.osd_id": "0",
Oct 11 04:34:54 compute-0 festive_carson[141734]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:34:54 compute-0 festive_carson[141734]:                 "ceph.type": "block",
Oct 11 04:34:54 compute-0 festive_carson[141734]:                 "ceph.vdo": "0"
Oct 11 04:34:54 compute-0 festive_carson[141734]:             },
Oct 11 04:34:54 compute-0 festive_carson[141734]:             "type": "block",
Oct 11 04:34:54 compute-0 festive_carson[141734]:             "vg_name": "ceph_vg0"
Oct 11 04:34:54 compute-0 festive_carson[141734]:         }
Oct 11 04:34:54 compute-0 festive_carson[141734]:     ],
Oct 11 04:34:54 compute-0 festive_carson[141734]:     "1": [
Oct 11 04:34:54 compute-0 festive_carson[141734]:         {
Oct 11 04:34:54 compute-0 festive_carson[141734]:             "devices": [
Oct 11 04:34:54 compute-0 festive_carson[141734]:                 "/dev/loop4"
Oct 11 04:34:54 compute-0 festive_carson[141734]:             ],
Oct 11 04:34:54 compute-0 festive_carson[141734]:             "lv_name": "ceph_lv1",
Oct 11 04:34:54 compute-0 festive_carson[141734]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:34:54 compute-0 festive_carson[141734]:             "lv_size": "21470642176",
Oct 11 04:34:54 compute-0 festive_carson[141734]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=235849fc-4683-43e5-9b6a-a0d6f8d1cee8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:34:54 compute-0 festive_carson[141734]:             "lv_uuid": "N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol",
Oct 11 04:34:54 compute-0 festive_carson[141734]:             "name": "ceph_lv1",
Oct 11 04:34:54 compute-0 festive_carson[141734]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:34:54 compute-0 festive_carson[141734]:             "tags": {
Oct 11 04:34:54 compute-0 festive_carson[141734]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:34:54 compute-0 festive_carson[141734]:                 "ceph.block_uuid": "N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol",
Oct 11 04:34:54 compute-0 festive_carson[141734]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:34:54 compute-0 festive_carson[141734]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:34:54 compute-0 festive_carson[141734]:                 "ceph.cluster_name": "ceph",
Oct 11 04:34:54 compute-0 festive_carson[141734]:                 "ceph.crush_device_class": "",
Oct 11 04:34:54 compute-0 festive_carson[141734]:                 "ceph.encrypted": "0",
Oct 11 04:34:54 compute-0 festive_carson[141734]:                 "ceph.osd_fsid": "235849fc-4683-43e5-9b6a-a0d6f8d1cee8",
Oct 11 04:34:54 compute-0 festive_carson[141734]:                 "ceph.osd_id": "1",
Oct 11 04:34:54 compute-0 festive_carson[141734]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:34:54 compute-0 festive_carson[141734]:                 "ceph.type": "block",
Oct 11 04:34:54 compute-0 festive_carson[141734]:                 "ceph.vdo": "0"
Oct 11 04:34:54 compute-0 festive_carson[141734]:             },
Oct 11 04:34:54 compute-0 festive_carson[141734]:             "type": "block",
Oct 11 04:34:54 compute-0 festive_carson[141734]:             "vg_name": "ceph_vg1"
Oct 11 04:34:54 compute-0 festive_carson[141734]:         }
Oct 11 04:34:54 compute-0 festive_carson[141734]:     ],
Oct 11 04:34:54 compute-0 festive_carson[141734]:     "2": [
Oct 11 04:34:54 compute-0 festive_carson[141734]:         {
Oct 11 04:34:54 compute-0 festive_carson[141734]:             "devices": [
Oct 11 04:34:54 compute-0 festive_carson[141734]:                 "/dev/loop5"
Oct 11 04:34:54 compute-0 festive_carson[141734]:             ],
Oct 11 04:34:54 compute-0 festive_carson[141734]:             "lv_name": "ceph_lv2",
Oct 11 04:34:54 compute-0 festive_carson[141734]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:34:54 compute-0 festive_carson[141734]:             "lv_size": "21470642176",
Oct 11 04:34:54 compute-0 festive_carson[141734]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=30486846-2cc7-4787-8e36-0fb42ef328c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:34:54 compute-0 festive_carson[141734]:             "lv_uuid": "HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a",
Oct 11 04:34:54 compute-0 festive_carson[141734]:             "name": "ceph_lv2",
Oct 11 04:34:54 compute-0 festive_carson[141734]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:34:54 compute-0 festive_carson[141734]:             "tags": {
Oct 11 04:34:54 compute-0 festive_carson[141734]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:34:54 compute-0 festive_carson[141734]:                 "ceph.block_uuid": "HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a",
Oct 11 04:34:54 compute-0 festive_carson[141734]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:34:54 compute-0 festive_carson[141734]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:34:54 compute-0 festive_carson[141734]:                 "ceph.cluster_name": "ceph",
Oct 11 04:34:54 compute-0 festive_carson[141734]:                 "ceph.crush_device_class": "",
Oct 11 04:34:54 compute-0 festive_carson[141734]:                 "ceph.encrypted": "0",
Oct 11 04:34:54 compute-0 festive_carson[141734]:                 "ceph.osd_fsid": "30486846-2cc7-4787-8e36-0fb42ef328c5",
Oct 11 04:34:54 compute-0 festive_carson[141734]:                 "ceph.osd_id": "2",
Oct 11 04:34:54 compute-0 festive_carson[141734]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:34:54 compute-0 festive_carson[141734]:                 "ceph.type": "block",
Oct 11 04:34:54 compute-0 festive_carson[141734]:                 "ceph.vdo": "0"
Oct 11 04:34:54 compute-0 festive_carson[141734]:             },
Oct 11 04:34:54 compute-0 festive_carson[141734]:             "type": "block",
Oct 11 04:34:54 compute-0 festive_carson[141734]:             "vg_name": "ceph_vg2"
Oct 11 04:34:54 compute-0 festive_carson[141734]:         }
Oct 11 04:34:54 compute-0 festive_carson[141734]:     ]
Oct 11 04:34:54 compute-0 festive_carson[141734]: }
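The JSON emitted above by the festive_carson container is the inventory returned by `ceph-volume lvm list --format json` for the three pre-created logical volumes backing osd.0, osd.1 and osd.2. As a minimal sketch (not part of cephadm or ceph-volume; the file name lvm_list.json is an assumption for illustration), the structure keyed by OSD id could be summarized in Python like this:

    #!/usr/bin/env python3
    # Hypothetical helper: summarize a captured `ceph-volume lvm list --format json`
    # inventory. Assumes the JSON above was saved locally as lvm_list.json.
    import json

    def summarize(path="lvm_list.json"):
        with open(path) as fh:
            osds = json.load(fh)  # top-level keys are OSD ids: "0", "1", "2"
        for osd_id in sorted(osds, key=int):
            for lv in osds[osd_id]:
                tags = lv.get("tags", {})
                size_gib = int(lv["lv_size"]) / 2**30
                print(f"osd.{osd_id}: {lv['lv_path']} ({size_gib:.1f} GiB) "
                      f"osd_fsid={tags.get('ceph.osd_fsid', '?')} "
                      f"devices={','.join(lv.get('devices', []))}")

    if __name__ == "__main__":
        summarize()

Run against the listing above, this would print one line per OSD, e.g. osd.0 backed by /dev/ceph_vg0/ceph_lv0 on /dev/loop3 at roughly 20.0 GiB (21470642176 bytes).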
Oct 11 04:34:54 compute-0 systemd[1]: libpod-b5ceaf74f68494644a16d8feac3c101df1196de7caff5db0162d1d59265a14aa.scope: Deactivated successfully.
Oct 11 04:34:54 compute-0 podman[141716]: 2025-10-11 04:34:54.113798628 +0000 UTC m=+0.877091282 container died b5ceaf74f68494644a16d8feac3c101df1196de7caff5db0162d1d59265a14aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_carson, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 11 04:34:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-b7d9e3129179f604818723a17a154b7df48f97ec355dcd9a319b4eaeddeadd16-merged.mount: Deactivated successfully.
Oct 11 04:34:54 compute-0 podman[141716]: 2025-10-11 04:34:54.185780348 +0000 UTC m=+0.949073002 container remove b5ceaf74f68494644a16d8feac3c101df1196de7caff5db0162d1d59265a14aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:34:54 compute-0 systemd[1]: libpod-conmon-b5ceaf74f68494644a16d8feac3c101df1196de7caff5db0162d1d59265a14aa.scope: Deactivated successfully.
Oct 11 04:34:54 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v374: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:34:54 compute-0 sudo[141534]: pam_unix(sudo:session): session closed for user root
Oct 11 04:34:54 compute-0 sudo[141671]: pam_unix(sudo:session): session closed for user root
Oct 11 04:34:54 compute-0 sudo[141758]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:34:54 compute-0 sudo[141758]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:34:54 compute-0 rsyslogd[1004]: imjournal from <np0005480869:festive_carson>: begin to drop messages due to rate-limiting
Oct 11 04:34:54 compute-0 sudo[141758]: pam_unix(sudo:session): session closed for user root
Oct 11 04:34:54 compute-0 sudo[141795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:34:54 compute-0 sudo[141795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:34:54 compute-0 sudo[141795]: pam_unix(sudo:session): session closed for user root
Oct 11 04:34:54 compute-0 sudo[141832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:34:54 compute-0 sudo[141832]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:34:54 compute-0 sudo[141832]: pam_unix(sudo:session): session closed for user root
Oct 11 04:34:54 compute-0 sudo[141878]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -- raw list --format json
Oct 11 04:34:54 compute-0 sudo[141878]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:34:54 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:34:55 compute-0 podman[141996]: 2025-10-11 04:34:55.072016785 +0000 UTC m=+0.069847078 container create 27dff6487f1d14acf4fe3ea65cda469b61282dd1ba03ae06ef0b185d02c66c51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_agnesi, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:34:55 compute-0 systemd[1]: Started libpod-conmon-27dff6487f1d14acf4fe3ea65cda469b61282dd1ba03ae06ef0b185d02c66c51.scope.
Oct 11 04:34:55 compute-0 podman[141996]: 2025-10-11 04:34:55.041295571 +0000 UTC m=+0.039125914 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:34:55 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:34:55 compute-0 podman[141996]: 2025-10-11 04:34:55.182004462 +0000 UTC m=+0.179834815 container init 27dff6487f1d14acf4fe3ea65cda469b61282dd1ba03ae06ef0b185d02c66c51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_agnesi, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 11 04:34:55 compute-0 podman[141996]: 2025-10-11 04:34:55.194040771 +0000 UTC m=+0.191871074 container start 27dff6487f1d14acf4fe3ea65cda469b61282dd1ba03ae06ef0b185d02c66c51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_agnesi, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:34:55 compute-0 podman[141996]: 2025-10-11 04:34:55.198263776 +0000 UTC m=+0.196094229 container attach 27dff6487f1d14acf4fe3ea65cda469b61282dd1ba03ae06ef0b185d02c66c51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_agnesi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:34:55 compute-0 romantic_agnesi[142036]: 167 167
Oct 11 04:34:55 compute-0 systemd[1]: libpod-27dff6487f1d14acf4fe3ea65cda469b61282dd1ba03ae06ef0b185d02c66c51.scope: Deactivated successfully.
Oct 11 04:34:55 compute-0 podman[141996]: 2025-10-11 04:34:55.203147848 +0000 UTC m=+0.200978151 container died 27dff6487f1d14acf4fe3ea65cda469b61282dd1ba03ae06ef0b185d02c66c51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 11 04:34:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-3b03198e8aa69cdd98ecefe0cf23040f90325a71efe369e40106a81ae358143f-merged.mount: Deactivated successfully.
Oct 11 04:34:55 compute-0 sudo[142068]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cketsclzxpjeluluujoepnmvlbzfssnr ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1760157294.5719662-87-32221890374101/AnsiballZ_edpm_nftables_snippet.py'
Oct 11 04:34:55 compute-0 sudo[142068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:34:55 compute-0 podman[141996]: 2025-10-11 04:34:55.251318046 +0000 UTC m=+0.249148349 container remove 27dff6487f1d14acf4fe3ea65cda469b61282dd1ba03ae06ef0b185d02c66c51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_agnesi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 11 04:34:55 compute-0 systemd[1]: libpod-conmon-27dff6487f1d14acf4fe3ea65cda469b61282dd1ba03ae06ef0b185d02c66c51.scope: Deactivated successfully.
Oct 11 04:34:55 compute-0 ceph-mon[74243]: pgmap v374: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:34:55 compute-0 podman[142088]: 2025-10-11 04:34:55.458987942 +0000 UTC m=+0.065183112 container create 337a34440b5a265fc9e38724d3fc942e9d880ea15d09a63bde91353f7e24916d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_hellman, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 11 04:34:55 compute-0 python3[142076]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
                                             rule:
                                               proto: udp
                                               dport: 4789
                                           - rule_name: 119 neutron geneve networks
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               state: ["UNTRACKED"]
                                           - rule_name: 120 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: OUTPUT
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                           - rule_name: 121 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: PREROUTING
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                            dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
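The edpm_nftables_snippet task above drops the listed vxlan/geneve rules into /var/lib/edpm-config/firewall/ovn.yaml. A minimal sketch of that behaviour, assuming the module simply parses the YAML list and writes it to dest when state=present (the module name, rule content, and destination path come from the log; the helper itself is illustrative):

    import yaml  # PyYAML, assumed available on the host

    snippet = """
    - rule_name: 118 neutron vxlan networks
      rule:
        proto: udp
        dport: 4789
    - rule_name: 120 neutron geneve networks no conntrack
      rule:
        proto: udp
        dport: 6081
        table: raw
        chain: OUTPUT
        jump: NOTRACK
        action: append
        state: []
    """

    def write_snippet(content: str, dest: str) -> None:
        # Parse the rule list first so malformed YAML fails before anything is written.
        rules = yaml.safe_load(content)
        with open(dest, "w") as fh:
            yaml.safe_dump(rules, fh, default_flow_style=False)

    write_snippet(snippet, "/var/lib/edpm-config/firewall/ovn.yaml")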
Oct 11 04:34:55 compute-0 systemd[1]: Started libpod-conmon-337a34440b5a265fc9e38724d3fc942e9d880ea15d09a63bde91353f7e24916d.scope.
Oct 11 04:34:55 compute-0 sudo[142068]: pam_unix(sudo:session): session closed for user root
Oct 11 04:34:55 compute-0 podman[142088]: 2025-10-11 04:34:55.432823211 +0000 UTC m=+0.039018461 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:34:55 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:34:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24d91d887ff19c04978e6f978db82154674afa10e77b9d6d24a8d22da22b986b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:34:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24d91d887ff19c04978e6f978db82154674afa10e77b9d6d24a8d22da22b986b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:34:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24d91d887ff19c04978e6f978db82154674afa10e77b9d6d24a8d22da22b986b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:34:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24d91d887ff19c04978e6f978db82154674afa10e77b9d6d24a8d22da22b986b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:34:55 compute-0 podman[142088]: 2025-10-11 04:34:55.552924109 +0000 UTC m=+0.159119289 container init 337a34440b5a265fc9e38724d3fc942e9d880ea15d09a63bde91353f7e24916d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_hellman, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:34:55 compute-0 podman[142088]: 2025-10-11 04:34:55.559848162 +0000 UTC m=+0.166043332 container start 337a34440b5a265fc9e38724d3fc942e9d880ea15d09a63bde91353f7e24916d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_hellman, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:34:55 compute-0 podman[142088]: 2025-10-11 04:34:55.563209575 +0000 UTC m=+0.169404785 container attach 337a34440b5a265fc9e38724d3fc942e9d880ea15d09a63bde91353f7e24916d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_hellman, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:34:56 compute-0 ceph-mgr[74542]: [balancer INFO root] Optimize plan auto_2025-10-11_04:34:56
Oct 11 04:34:56 compute-0 ceph-mgr[74542]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:34:56 compute-0 ceph-mgr[74542]: [balancer INFO root] do_upmap
Oct 11 04:34:56 compute-0 ceph-mgr[74542]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.control', 'vms', 'images', '.rgw.root', 'volumes', '.mgr', 'backups']
Oct 11 04:34:56 compute-0 ceph-mgr[74542]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:34:56 compute-0 sudo[142259]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfdijfsickqalcbvjramdsyieolkefzw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157295.7791898-96-238505509649313/AnsiballZ_file.py'
Oct 11 04:34:56 compute-0 sudo[142259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:34:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:34:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:34:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:34:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:34:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:34:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:34:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:34:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:34:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:34:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:34:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:34:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:34:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:34:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:34:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:34:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:34:56 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v375: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:34:56 compute-0 python3.9[142261]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:34:56 compute-0 sudo[142259]: pam_unix(sudo:session): session closed for user root
Oct 11 04:34:56 compute-0 stoic_hellman[142105]: {
Oct 11 04:34:56 compute-0 stoic_hellman[142105]:     "235849fc-4683-43e5-9b6a-a0d6f8d1cee8": {
Oct 11 04:34:56 compute-0 stoic_hellman[142105]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:34:56 compute-0 stoic_hellman[142105]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:34:56 compute-0 stoic_hellman[142105]:         "osd_id": 1,
Oct 11 04:34:56 compute-0 stoic_hellman[142105]:         "osd_uuid": "235849fc-4683-43e5-9b6a-a0d6f8d1cee8",
Oct 11 04:34:56 compute-0 stoic_hellman[142105]:         "type": "bluestore"
Oct 11 04:34:56 compute-0 stoic_hellman[142105]:     },
Oct 11 04:34:56 compute-0 stoic_hellman[142105]:     "29ed28f5-c2da-4c6f-bb64-dc7391248f4a": {
Oct 11 04:34:56 compute-0 stoic_hellman[142105]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:34:56 compute-0 stoic_hellman[142105]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:34:56 compute-0 stoic_hellman[142105]:         "osd_id": 0,
Oct 11 04:34:56 compute-0 stoic_hellman[142105]:         "osd_uuid": "29ed28f5-c2da-4c6f-bb64-dc7391248f4a",
Oct 11 04:34:56 compute-0 stoic_hellman[142105]:         "type": "bluestore"
Oct 11 04:34:56 compute-0 stoic_hellman[142105]:     },
Oct 11 04:34:56 compute-0 stoic_hellman[142105]:     "30486846-2cc7-4787-8e36-0fb42ef328c5": {
Oct 11 04:34:56 compute-0 stoic_hellman[142105]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:34:56 compute-0 stoic_hellman[142105]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:34:56 compute-0 stoic_hellman[142105]:         "osd_id": 2,
Oct 11 04:34:56 compute-0 stoic_hellman[142105]:         "osd_uuid": "30486846-2cc7-4787-8e36-0fb42ef328c5",
Oct 11 04:34:56 compute-0 stoic_hellman[142105]:         "type": "bluestore"
Oct 11 04:34:56 compute-0 stoic_hellman[142105]:     }
Oct 11 04:34:56 compute-0 stoic_hellman[142105]: }
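The container output above is the JSON produced by `ceph-volume raw list --format json`: one entry per OSD, keyed by osd_uuid, each carrying ceph_fsid, device, osd_id, and type. A small sketch of consuming it (only the command and the field names come from the log; the wrapper is illustrative and in the log the command actually runs inside the quay.io/ceph/ceph container via cephadm):

    import json
    import subprocess

    out = subprocess.run(
        ["ceph-volume", "raw", "list", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout

    osds = json.loads(out)
    for osd_uuid, info in sorted(osds.items(), key=lambda kv: kv[1]["osd_id"]):
        # e.g. osd.0: device=/dev/mapper/ceph_vg0-ceph_lv0 type=bluestore
        print(f"osd.{info['osd_id']}: device={info['device']} "
              f"type={info['type']} fsid={info['ceph_fsid']}")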
Oct 11 04:34:56 compute-0 systemd[1]: libpod-337a34440b5a265fc9e38724d3fc942e9d880ea15d09a63bde91353f7e24916d.scope: Deactivated successfully.
Oct 11 04:34:56 compute-0 systemd[1]: libpod-337a34440b5a265fc9e38724d3fc942e9d880ea15d09a63bde91353f7e24916d.scope: Consumed 1.013s CPU time.
Oct 11 04:34:56 compute-0 podman[142088]: 2025-10-11 04:34:56.580047702 +0000 UTC m=+1.186242902 container died 337a34440b5a265fc9e38724d3fc942e9d880ea15d09a63bde91353f7e24916d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_hellman, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:34:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-24d91d887ff19c04978e6f978db82154674afa10e77b9d6d24a8d22da22b986b-merged.mount: Deactivated successfully.
Oct 11 04:34:56 compute-0 podman[142088]: 2025-10-11 04:34:56.655594692 +0000 UTC m=+1.261789852 container remove 337a34440b5a265fc9e38724d3fc942e9d880ea15d09a63bde91353f7e24916d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_hellman, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:34:56 compute-0 systemd[1]: libpod-conmon-337a34440b5a265fc9e38724d3fc942e9d880ea15d09a63bde91353f7e24916d.scope: Deactivated successfully.
Oct 11 04:34:56 compute-0 sudo[141878]: pam_unix(sudo:session): session closed for user root
Oct 11 04:34:56 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:34:56 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:34:56 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:34:56 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:34:56 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev 7ecf9046-7084-4a2a-bee0-72fad956e805 does not exist
Oct 11 04:34:56 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev a51a1cf4-ee98-4a62-8580-703cbad58193 does not exist
Oct 11 04:34:56 compute-0 sudo[142381]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:34:56 compute-0 sudo[142381]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:34:56 compute-0 sudo[142381]: pam_unix(sudo:session): session closed for user root
Oct 11 04:34:56 compute-0 sudo[142406]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 04:34:56 compute-0 sudo[142406]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:34:56 compute-0 sudo[142406]: pam_unix(sudo:session): session closed for user root
Oct 11 04:34:57 compute-0 sudo[142504]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-teizctgzwgkpzfhuheouurowivikyqrr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157296.5029469-104-66713627237628/AnsiballZ_stat.py'
Oct 11 04:34:57 compute-0 sudo[142504]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:34:57 compute-0 python3.9[142506]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:34:57 compute-0 sudo[142504]: pam_unix(sudo:session): session closed for user root
Oct 11 04:34:57 compute-0 ceph-mon[74243]: pgmap v375: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:34:57 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:34:57 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:34:57 compute-0 sudo[142582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwwvqejehwwulfhbancapwbfbiwzqkfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157296.5029469-104-66713627237628/AnsiballZ_file.py'
Oct 11 04:34:57 compute-0 sudo[142582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:34:57 compute-0 python3.9[142584]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:34:57 compute-0 sudo[142582]: pam_unix(sudo:session): session closed for user root
Oct 11 04:34:58 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v376: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:34:58 compute-0 sudo[142734]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ityvtxczyargqxednhobmpsynouqvwgg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157298.0132504-116-94618856083527/AnsiballZ_stat.py'
Oct 11 04:34:58 compute-0 sudo[142734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:34:58 compute-0 python3.9[142736]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:34:58 compute-0 sudo[142734]: pam_unix(sudo:session): session closed for user root
Oct 11 04:34:58 compute-0 sudo[142812]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgmapabmlrqjunynhbldgomgvzprkixe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157298.0132504-116-94618856083527/AnsiballZ_file.py'
Oct 11 04:34:58 compute-0 sudo[142812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:34:59 compute-0 python3.9[142814]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.2krvtt19 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:34:59 compute-0 sudo[142812]: pam_unix(sudo:session): session closed for user root
Oct 11 04:34:59 compute-0 ceph-mon[74243]: pgmap v376: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:34:59 compute-0 sudo[142964]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rondktxlsfotaofqixoruykjepxwyvkk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157299.256022-128-68315391419184/AnsiballZ_stat.py'
Oct 11 04:34:59 compute-0 sudo[142964]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:34:59 compute-0 python3.9[142966]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:34:59 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:34:59 compute-0 sudo[142964]: pam_unix(sudo:session): session closed for user root
Oct 11 04:35:00 compute-0 sudo[143042]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghbrexuczercgvdmovzineipasefhuqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157299.256022-128-68315391419184/AnsiballZ_file.py'
Oct 11 04:35:00 compute-0 sudo[143042]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:35:00 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v377: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:35:00 compute-0 python3.9[143044]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:35:00 compute-0 sudo[143042]: pam_unix(sudo:session): session closed for user root
Oct 11 04:35:01 compute-0 sudo[143194]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjkrlmclhqlzwpfskkvjdinkmkaswzkx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157300.6671755-141-134395740919649/AnsiballZ_command.py'
Oct 11 04:35:01 compute-0 sudo[143194]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:35:01 compute-0 python3.9[143196]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:35:01 compute-0 ceph-mon[74243]: pgmap v377: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:35:01 compute-0 sudo[143194]: pam_unix(sudo:session): session closed for user root
Oct 11 04:35:02 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v378: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:35:02 compute-0 sudo[143347]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gscxhkbjsdocblbatafqzlqwlbmjxhqz ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1760157301.7395225-149-118482146693877/AnsiballZ_edpm_nftables_from_files.py'
Oct 11 04:35:02 compute-0 sudo[143347]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:35:02 compute-0 python3[143349]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct 11 04:35:02 compute-0 sudo[143347]: pam_unix(sudo:session): session closed for user root
Oct 11 04:35:03 compute-0 sudo[143499]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kepyfdhaipgfapntxhltdbroswybbicm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157302.7817957-157-276600734498522/AnsiballZ_stat.py'
Oct 11 04:35:03 compute-0 sudo[143499]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:35:03 compute-0 python3.9[143501]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:35:03 compute-0 sudo[143499]: pam_unix(sudo:session): session closed for user root
Oct 11 04:35:03 compute-0 ceph-mon[74243]: pgmap v378: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:35:04 compute-0 sudo[143624]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahmbeqwjeiccjstmvdnzcjjbaqlvofsy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157302.7817957-157-276600734498522/AnsiballZ_copy.py'
Oct 11 04:35:04 compute-0 sudo[143624]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:35:04 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v379: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:35:04 compute-0 python3.9[143626]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760157302.7817957-157-276600734498522/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:35:04 compute-0 sudo[143624]: pam_unix(sudo:session): session closed for user root
Oct 11 04:35:04 compute-0 ceph-mon[74243]: pgmap v379: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:35:04 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:35:04 compute-0 sudo[143776]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqmuhiqeynxiaqmntepmhvuxughcnfvb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157304.5049896-172-157501380507554/AnsiballZ_stat.py'
Oct 11 04:35:04 compute-0 sudo[143776]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:35:05 compute-0 python3.9[143778]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:35:05 compute-0 sudo[143776]: pam_unix(sudo:session): session closed for user root
Oct 11 04:35:05 compute-0 sudo[143901]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owzhldnpqsxeqzsjwrtlqpkqmltjkpye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157304.5049896-172-157501380507554/AnsiballZ_copy.py'
Oct 11 04:35:05 compute-0 sudo[143901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:35:05 compute-0 python3.9[143903]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760157304.5049896-172-157501380507554/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:35:05 compute-0 sudo[143901]: pam_unix(sudo:session): session closed for user root
Oct 11 04:35:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:35:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:35:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:35:05 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:35:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:35:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:35:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:35:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:35:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:35:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:35:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:35:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:35:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 11 04:35:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:35:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:35:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:35:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:35:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:35:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:35:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:35:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:35:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:35:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
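The per-pool "pg target" values above are consistent with capacity_ratio * bias * (target PGs per OSD * number of OSDs), with 100 PGs per OSD and the three OSDs listed earlier; the relationship is inferred from the logged numbers, not quoted from the autoscaler source, and the quick check below is only a verification of that arithmetic:

    # Reproduce the pg targets logged by the pg_autoscaler (inferred relationship).
    target_pgs_total = 100 * 3  # assumed mon_target_pg_per_osd=100, 3 OSDs

    pools = {
        ".mgr":               (7.185749983720779e-06, 1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
        ".rgw.root":          (2.5436283128215145e-07, 1.0),
        "default.rgw.log":    (2.1620840658982875e-06, 1.0),
        "default.rgw.meta":   (1.2718141564107572e-07, 4.0),
    }

    for name, (capacity_ratio, bias) in pools.items():
        pg_target = capacity_ratio * bias * target_pgs_total
        print(f"{name}: pg target {pg_target:.16g}")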
Oct 11 04:35:06 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v380: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:35:06 compute-0 sudo[144053]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqnjdsumeoxvevdckamhwaxmiopyagwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157305.9979868-187-184918781348110/AnsiballZ_stat.py'
Oct 11 04:35:06 compute-0 sudo[144053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:35:06 compute-0 python3.9[144055]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:35:06 compute-0 sudo[144053]: pam_unix(sudo:session): session closed for user root
Oct 11 04:35:06 compute-0 sudo[144178]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhwwmxhvujxrxuivdojyhnmrhgbhvkrm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157305.9979868-187-184918781348110/AnsiballZ_copy.py'
Oct 11 04:35:06 compute-0 sudo[144178]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:35:07 compute-0 python3.9[144180]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760157305.9979868-187-184918781348110/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:35:07 compute-0 sudo[144178]: pam_unix(sudo:session): session closed for user root
Oct 11 04:35:07 compute-0 ceph-mon[74243]: pgmap v380: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:35:07 compute-0 sudo[144330]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zssrmwvsbwywdtxmiudigrwpizsewoss ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157307.4337373-202-147696797594592/AnsiballZ_stat.py'
Oct 11 04:35:07 compute-0 sudo[144330]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:35:07 compute-0 python3.9[144332]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:35:07 compute-0 sudo[144330]: pam_unix(sudo:session): session closed for user root
Oct 11 04:35:08 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v381: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:35:08 compute-0 sudo[144455]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hlgcrskmwnoxbpvysnbgeweepdyjuwrz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157307.4337373-202-147696797594592/AnsiballZ_copy.py'
Oct 11 04:35:08 compute-0 sudo[144455]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:35:08 compute-0 python3.9[144457]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760157307.4337373-202-147696797594592/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:35:08 compute-0 sudo[144455]: pam_unix(sudo:session): session closed for user root
Oct 11 04:35:09 compute-0 ceph-mon[74243]: pgmap v381: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:35:09 compute-0 sudo[144607]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gcjiafkzisdrfgevksxdwyxroxqvzjgb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157308.8400192-217-198719614344448/AnsiballZ_stat.py'
Oct 11 04:35:09 compute-0 sudo[144607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:35:09 compute-0 python3.9[144609]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:35:09 compute-0 sudo[144607]: pam_unix(sudo:session): session closed for user root
Oct 11 04:35:09 compute-0 ceph-mon[74243]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 04:35:09 compute-0 ceph-mon[74243]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Cumulative writes: 2015 writes, 8947 keys, 2015 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.02 MB/s
                                           Cumulative WAL: 2015 writes, 2015 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2015 writes, 8947 keys, 2015 commit groups, 1.0 writes per commit group, ingest: 11.38 MB, 0.02 MB/s
                                           Interval WAL: 2015 writes, 2015 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    127.5      0.07              0.03         3    0.022       0      0       0.0       0.0
                                             L6      1/0    6.52 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.6    175.7    154.3      0.09              0.05         2    0.044    7148    729       0.0       0.0
                                            Sum      1/0    6.52 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6    100.6    142.8      0.15              0.08         5    0.031    7148    729       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6    103.4    146.6      0.15              0.08         4    0.038    7148    729       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    175.7    154.3      0.09              0.05         2    0.044    7148    729       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    135.6      0.06              0.03         2    0.031       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     11.5      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.008, interval 0.008
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.02 GB write, 0.04 MB/s write, 0.02 GB read, 0.03 MB/s read, 0.2 seconds
                                           Interval compaction: 0.02 GB write, 0.04 MB/s write, 0.02 GB read, 0.03 MB/s read, 0.2 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563d484a31f0#2 capacity: 308.00 MB usage: 584.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(36,497.47 KB,0.15773%) FilterBlock(6,28.30 KB,0.00897197%) IndexBlock(6,58.91 KB,0.0186772%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 11 04:35:09 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:35:10 compute-0 sudo[144732]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aujdtgasuosqbvfsimuvenntmzhoeczr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157308.8400192-217-198719614344448/AnsiballZ_copy.py'
Oct 11 04:35:10 compute-0 sudo[144732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:35:10 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v382: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:35:10 compute-0 python3.9[144734]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760157308.8400192-217-198719614344448/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:35:10 compute-0 sudo[144732]: pam_unix(sudo:session): session closed for user root
Oct 11 04:35:10 compute-0 sudo[144884]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eceoikhnzstvnihkqxnsuufklavizhna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157310.534229-232-90017095570860/AnsiballZ_file.py'
Oct 11 04:35:10 compute-0 sudo[144884]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:35:10 compute-0 python3.9[144886]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:35:11 compute-0 sudo[144884]: pam_unix(sudo:session): session closed for user root
Oct 11 04:35:11 compute-0 ceph-mon[74243]: pgmap v382: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:35:11 compute-0 sudo[145036]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfztmvhnfwuogqddxjxfvtwcsdvadvpo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157311.1728566-240-93684464020479/AnsiballZ_command.py'
Oct 11 04:35:11 compute-0 sudo[145036]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:35:11 compute-0 python3.9[145038]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:35:11 compute-0 sudo[145036]: pam_unix(sudo:session): session closed for user root
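The command above concatenates the five generated nft files and check-loads the result with `nft -c -f -`, so the ruleset is parsed and validated without being applied. A sketch of the same check from Python (file list and order are taken from the log; the wrapper itself is illustrative):

    import subprocess
    from pathlib import Path

    NFT_FILES = [
        "/etc/nftables/edpm-chains.nft",
        "/etc/nftables/edpm-flushes.nft",
        "/etc/nftables/edpm-rules.nft",
        "/etc/nftables/edpm-update-jumps.nft",
        "/etc/nftables/edpm-jumps.nft",
    ]

    def check_ruleset(files=NFT_FILES) -> None:
        # Feed the concatenation to `nft -c -f -`: validate only, change nothing.
        ruleset = "\n".join(Path(f).read_text() for f in files)
        subprocess.run(["nft", "-c", "-f", "-"], input=ruleset, text=True, check=True)

    if __name__ == "__main__":
        check_ruleset()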
Oct 11 04:35:12 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v383: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:35:12 compute-0 sudo[145191]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ruacnyslrxkpjgvqdahrkjvahpzuikeu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157311.8939345-248-81498985546408/AnsiballZ_blockinfile.py'
Oct 11 04:35:12 compute-0 sudo[145191]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:35:12 compute-0 python3.9[145193]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:35:12 compute-0 sudo[145191]: pam_unix(sudo:session): session closed for user root
Oct 11 04:35:13 compute-0 sudo[145343]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugpjgfbdmbjshpemvvewozyxptifksjp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157312.9123292-257-37320792936815/AnsiballZ_command.py'
Oct 11 04:35:13 compute-0 sudo[145343]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:35:13 compute-0 ceph-mon[74243]: pgmap v383: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:35:13 compute-0 python3.9[145345]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:35:13 compute-0 sudo[145343]: pam_unix(sudo:session): session closed for user root
Oct 11 04:35:14 compute-0 sudo[145496]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odfhmtrtzwxqmappuvimgtgvflvkwvfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157313.7883732-265-219006690531362/AnsiballZ_stat.py'
Oct 11 04:35:14 compute-0 sudo[145496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:35:14 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v384: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:35:14 compute-0 python3.9[145498]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:35:14 compute-0 sudo[145496]: pam_unix(sudo:session): session closed for user root
Oct 11 04:35:14 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:35:14 compute-0 sudo[145650]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgwwpeagqqsszzqdbclfhmqywpxibnlq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157314.5951548-273-261255167189397/AnsiballZ_command.py'
Oct 11 04:35:14 compute-0 sudo[145650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:35:15 compute-0 python3.9[145652]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:35:15 compute-0 sudo[145650]: pam_unix(sudo:session): session closed for user root
Oct 11 04:35:15 compute-0 ceph-mon[74243]: pgmap v384: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:35:15 compute-0 sudo[145805]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjwovmutbzeecojkuxqkgmssqbiokava ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157315.3905542-281-101113288153107/AnsiballZ_file.py'
Oct 11 04:35:15 compute-0 sudo[145805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:35:16 compute-0 python3.9[145807]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:35:16 compute-0 sudo[145805]: pam_unix(sudo:session): session closed for user root
Oct 11 04:35:16 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v385: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:35:17 compute-0 ceph-mon[74243]: pgmap v385: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:35:17 compute-0 python3.9[145957]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 04:35:18 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v386: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:35:18 compute-0 sudo[146108]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whkwmgcuqrjhdwffylhvrklqbmkmolus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157317.989359-321-184866212080762/AnsiballZ_command.py'
Oct 11 04:35:18 compute-0 sudo[146108]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:35:18 compute-0 python3.9[146110]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:2e:0a:c0:16:5a:16" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch 
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:35:18 compute-0 ovs-vsctl[146111]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:2e:0a:c0:16:5a:16 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Oct 11 04:35:18 compute-0 sudo[146108]: pam_unix(sudo:session): session closed for user root
Oct 11 04:35:19 compute-0 sudo[146261]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpbssntvvnjgucjnunqaaoetjuzrtqwj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157318.8102605-330-127364714958164/AnsiballZ_command.py'
Oct 11 04:35:19 compute-0 sudo[146261]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:35:19 compute-0 ceph-mon[74243]: pgmap v386: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:35:19 compute-0 python3.9[146263]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ovs-vsctl show | grep -q "Manager"
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:35:19 compute-0 sudo[146261]: pam_unix(sudo:session): session closed for user root
Oct 11 04:35:19 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:35:19 compute-0 sudo[146416]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzsjevfjhmwsuvmadlghpxdqxrdwbbyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157319.592511-338-231744326881920/AnsiballZ_command.py'
Oct 11 04:35:19 compute-0 sudo[146416]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:35:20 compute-0 python3.9[146418]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:35:20 compute-0 ovs-vsctl[146419]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Oct 11 04:35:20 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v387: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:35:20 compute-0 sudo[146416]: pam_unix(sudo:session): session closed for user root
Oct 11 04:35:21 compute-0 python3.9[146569]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:35:21 compute-0 ceph-mon[74243]: pgmap v387: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:35:21 compute-0 sudo[146721]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhjydummmelmmmsmvwoobqqxjuwdwbye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157321.5418296-355-174879547064117/AnsiballZ_file.py'
Oct 11 04:35:21 compute-0 sudo[146721]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:35:22 compute-0 python3.9[146723]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:35:22 compute-0 sudo[146721]: pam_unix(sudo:session): session closed for user root
Oct 11 04:35:22 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v388: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:35:22 compute-0 sudo[146873]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpbktbrrkhwqhbfxlowubfhllakvwvin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157322.3877838-363-133555087246182/AnsiballZ_stat.py'
Oct 11 04:35:22 compute-0 sudo[146873]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:35:22 compute-0 python3.9[146875]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:35:22 compute-0 sudo[146873]: pam_unix(sudo:session): session closed for user root
Oct 11 04:35:23 compute-0 sudo[146951]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oavqsblxdjwyaqtkscxsuhzhsryapipi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157322.3877838-363-133555087246182/AnsiballZ_file.py'
Oct 11 04:35:23 compute-0 sudo[146951]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:35:23 compute-0 ceph-mon[74243]: pgmap v388: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:35:23 compute-0 python3.9[146953]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:35:23 compute-0 sudo[146951]: pam_unix(sudo:session): session closed for user root
Oct 11 04:35:24 compute-0 sudo[147103]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhwauiypnuxljdivqjzawduuudaxmkpq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157323.692699-363-35524307519783/AnsiballZ_stat.py'
Oct 11 04:35:24 compute-0 sudo[147103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:35:24 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v389: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:35:24 compute-0 python3.9[147105]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:35:24 compute-0 sudo[147103]: pam_unix(sudo:session): session closed for user root
Oct 11 04:35:24 compute-0 sudo[147181]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwubwhvzsyayspctcghjshveldvlqwfz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157323.692699-363-35524307519783/AnsiballZ_file.py'
Oct 11 04:35:24 compute-0 sudo[147181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:35:24 compute-0 python3.9[147183]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:35:24 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:35:24 compute-0 sudo[147181]: pam_unix(sudo:session): session closed for user root
Oct 11 04:35:25 compute-0 ceph-mon[74243]: pgmap v389: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:35:25 compute-0 sudo[147333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slamfsisuqmlophzzgcceswshfgqwvme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157325.0561955-386-188386734523728/AnsiballZ_file.py'
Oct 11 04:35:25 compute-0 sudo[147333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:35:25 compute-0 python3.9[147335]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:35:25 compute-0 sudo[147333]: pam_unix(sudo:session): session closed for user root
Oct 11 04:35:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:35:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:35:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:35:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:35:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:35:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:35:26 compute-0 sudo[147485]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cungwjsxszlxtgrniyrqmgaslsirfktm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157325.8593576-394-58756735429548/AnsiballZ_stat.py'
Oct 11 04:35:26 compute-0 sudo[147485]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:35:26 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v390: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:35:26 compute-0 python3.9[147487]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:35:26 compute-0 sudo[147485]: pam_unix(sudo:session): session closed for user root
Oct 11 04:35:26 compute-0 sudo[147563]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aprgkgwlzjczlwyetvbjfstmfiyzgnze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157325.8593576-394-58756735429548/AnsiballZ_file.py'
Oct 11 04:35:26 compute-0 sudo[147563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:35:27 compute-0 python3.9[147565]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:35:27 compute-0 sudo[147563]: pam_unix(sudo:session): session closed for user root
Oct 11 04:35:27 compute-0 ceph-mon[74243]: pgmap v390: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:35:27 compute-0 sudo[147715]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mupewloefkxyamppofrdxcvwmcykwial ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157327.2074277-406-127230062707403/AnsiballZ_stat.py'
Oct 11 04:35:27 compute-0 sudo[147715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:35:27 compute-0 python3.9[147717]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:35:27 compute-0 sudo[147715]: pam_unix(sudo:session): session closed for user root
Oct 11 04:35:28 compute-0 sudo[147793]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdyetgsvmbkxnwfvgbfvfstdbyibeuwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157327.2074277-406-127230062707403/AnsiballZ_file.py'
Oct 11 04:35:28 compute-0 sudo[147793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:35:28 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v391: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:35:28 compute-0 python3.9[147795]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:35:28 compute-0 sudo[147793]: pam_unix(sudo:session): session closed for user root
Oct 11 04:35:29 compute-0 sudo[147945]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tophxumvcnvfvjayghzfuchxvqnqahuz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157328.6858058-418-63666111319349/AnsiballZ_systemd.py'
Oct 11 04:35:29 compute-0 sudo[147945]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:35:29 compute-0 ceph-mon[74243]: pgmap v391: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:35:29 compute-0 python3.9[147947]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:35:29 compute-0 systemd[1]: Reloading.
Oct 11 04:35:29 compute-0 systemd-sysv-generator[147979]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:35:29 compute-0 systemd-rc-local-generator[147974]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:35:29 compute-0 sudo[147945]: pam_unix(sudo:session): session closed for user root
Oct 11 04:35:29 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:35:30 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v392: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:35:30 compute-0 sudo[148134]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vozvkosaagfgyizribbuzsnlrciszish ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157330.0785992-426-114629861772988/AnsiballZ_stat.py'
Oct 11 04:35:30 compute-0 sudo[148134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:35:30 compute-0 python3.9[148136]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:35:30 compute-0 sudo[148134]: pam_unix(sudo:session): session closed for user root
Oct 11 04:35:31 compute-0 sudo[148212]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbpzfhmczpnrzcjjytylqhsxcmpqdfsi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157330.0785992-426-114629861772988/AnsiballZ_file.py'
Oct 11 04:35:31 compute-0 sudo[148212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:35:31 compute-0 python3.9[148214]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:35:31 compute-0 sudo[148212]: pam_unix(sudo:session): session closed for user root
Oct 11 04:35:31 compute-0 ceph-mon[74243]: pgmap v392: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:35:31 compute-0 sudo[148364]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ualjwingpfwivgiouhdoukenglmsrgoe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157331.56388-438-66631428262775/AnsiballZ_stat.py'
Oct 11 04:35:31 compute-0 sudo[148364]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:35:32 compute-0 python3.9[148366]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:35:32 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v393: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:35:32 compute-0 sudo[148364]: pam_unix(sudo:session): session closed for user root
Oct 11 04:35:32 compute-0 sudo[148442]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbjfivkfwwnrmjrrglkwivgzynqeltto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157331.56388-438-66631428262775/AnsiballZ_file.py'
Oct 11 04:35:32 compute-0 sudo[148442]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:35:32 compute-0 python3.9[148444]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:35:32 compute-0 sudo[148442]: pam_unix(sudo:session): session closed for user root
Oct 11 04:35:33 compute-0 sudo[148594]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmdwbfvwbdfujbetlhketpcbkrpewmfp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157332.9759288-450-224378564570437/AnsiballZ_systemd.py'
Oct 11 04:35:33 compute-0 sudo[148594]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:35:33 compute-0 ceph-mon[74243]: pgmap v393: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:35:33 compute-0 python3.9[148596]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:35:33 compute-0 systemd[1]: Reloading.
Oct 11 04:35:33 compute-0 systemd-rc-local-generator[148625]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:35:33 compute-0 systemd-sysv-generator[148630]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:35:34 compute-0 systemd[1]: Starting Create netns directory...
Oct 11 04:35:34 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 11 04:35:34 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 11 04:35:34 compute-0 systemd[1]: Finished Create netns directory.
Oct 11 04:35:34 compute-0 sudo[148594]: pam_unix(sudo:session): session closed for user root
Oct 11 04:35:34 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v394: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:35:34 compute-0 sudo[148788]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvgqnzdsnsiaaxqsdnopukjsvsqklpxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157334.3765776-460-123453549909212/AnsiballZ_file.py'
Oct 11 04:35:34 compute-0 sudo[148788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:35:34 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:35:34 compute-0 python3.9[148790]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:35:35 compute-0 sudo[148788]: pam_unix(sudo:session): session closed for user root
Oct 11 04:35:35 compute-0 ceph-mon[74243]: pgmap v394: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:35:35 compute-0 sudo[148940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykqudstdrhpcwjzdkkwwcqbqnuycpcha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157335.2396295-468-130910930714545/AnsiballZ_stat.py'
Oct 11 04:35:35 compute-0 sudo[148940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:35:35 compute-0 python3.9[148942]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:35:35 compute-0 sudo[148940]: pam_unix(sudo:session): session closed for user root
Oct 11 04:35:36 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v395: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:35:36 compute-0 sudo[149063]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etnpdgihwdivxrdezpbfuaxchrxekosl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157335.2396295-468-130910930714545/AnsiballZ_copy.py'
Oct 11 04:35:36 compute-0 sudo[149063]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:35:36 compute-0 python3.9[149065]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760157335.2396295-468-130910930714545/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:35:36 compute-0 sudo[149063]: pam_unix(sudo:session): session closed for user root
Oct 11 04:35:37 compute-0 sudo[149215]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgrtsxcjhdsnnpdclgggyrblmbiypnuf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157336.8995814-485-169657204876129/AnsiballZ_file.py'
Oct 11 04:35:37 compute-0 sudo[149215]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:35:37 compute-0 ceph-mon[74243]: pgmap v395: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:35:37 compute-0 python3.9[149217]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:35:37 compute-0 sudo[149215]: pam_unix(sudo:session): session closed for user root
Oct 11 04:35:38 compute-0 sudo[149367]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvvmmfwaifmwcbofajqgrlzamengxkwy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157337.745861-493-53599775853204/AnsiballZ_stat.py'
Oct 11 04:35:38 compute-0 sudo[149367]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:35:38 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v396: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:35:38 compute-0 python3.9[149369]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:35:38 compute-0 sudo[149367]: pam_unix(sudo:session): session closed for user root
Oct 11 04:35:38 compute-0 sudo[149490]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mksnjermofznfjimlqzrjwxruuebeqnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157337.745861-493-53599775853204/AnsiballZ_copy.py'
Oct 11 04:35:38 compute-0 sudo[149490]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:35:39 compute-0 python3.9[149492]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760157337.745861-493-53599775853204/.source.json _original_basename=.xcio_6c8 follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:35:39 compute-0 sudo[149490]: pam_unix(sudo:session): session closed for user root
Oct 11 04:35:39 compute-0 ceph-mon[74243]: pgmap v396: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:35:39 compute-0 sudo[149642]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxocknejongimkqsxdvmdchejycxqbja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157339.2391267-508-55439356775522/AnsiballZ_file.py'
Oct 11 04:35:39 compute-0 sudo[149642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:35:39 compute-0 python3.9[149644]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:35:39 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:35:39 compute-0 sudo[149642]: pam_unix(sudo:session): session closed for user root
Oct 11 04:35:40 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v397: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:35:40 compute-0 sudo[149794]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojchaclfpeqbqflazsdflkurlqzjkctt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157340.2006598-516-8471588162389/AnsiballZ_stat.py'
Oct 11 04:35:40 compute-0 sudo[149794]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:35:40 compute-0 sudo[149794]: pam_unix(sudo:session): session closed for user root
Oct 11 04:35:41 compute-0 sudo[149917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvadvffdcmluyscwzazchhbwyvbmtazc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157340.2006598-516-8471588162389/AnsiballZ_copy.py'
Oct 11 04:35:41 compute-0 sudo[149917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:35:41 compute-0 ceph-mon[74243]: pgmap v397: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:35:41 compute-0 sudo[149917]: pam_unix(sudo:session): session closed for user root
Oct 11 04:35:42 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v398: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:35:42 compute-0 sudo[150069]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cupzbykwgqijxzdbjnrjzjqiehahxgqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157341.7797914-533-218521785458895/AnsiballZ_container_config_data.py'
Oct 11 04:35:42 compute-0 sudo[150069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:35:42 compute-0 python3.9[150071]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Oct 11 04:35:42 compute-0 sudo[150069]: pam_unix(sudo:session): session closed for user root
Oct 11 04:35:43 compute-0 sudo[150221]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdpeumycpgmodfxrenpiwmbqzwawlznw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157342.8905423-542-150496959548114/AnsiballZ_container_config_hash.py'
Oct 11 04:35:43 compute-0 sudo[150221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:35:43 compute-0 ceph-mon[74243]: pgmap v398: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:35:43 compute-0 python3.9[150223]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 11 04:35:43 compute-0 sudo[150221]: pam_unix(sudo:session): session closed for user root
Oct 11 04:35:44 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v399: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:35:44 compute-0 sudo[150373]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-feoqznfktdikctiljoufxaylfjoaeobb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157343.9199753-551-4599770936906/AnsiballZ_podman_container_info.py'
Oct 11 04:35:44 compute-0 sudo[150373]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:35:44 compute-0 python3.9[150375]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 11 04:35:44 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:35:44 compute-0 sudo[150373]: pam_unix(sudo:session): session closed for user root
Oct 11 04:35:45 compute-0 ceph-mon[74243]: pgmap v399: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:35:46 compute-0 sudo[150552]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkdkrhpugygrslaplkfdtaunokajaovc ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1760157345.6399055-564-2422169663971/AnsiballZ_edpm_container_manage.py'
Oct 11 04:35:46 compute-0 sudo[150552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:35:46 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v400: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:35:46 compute-0 python3[150554]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 11 04:35:46 compute-0 ceph-mon[74243]: pgmap v400: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:35:48 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v401: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:35:49 compute-0 ceph-mon[74243]: pgmap v401: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:35:49 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:35:50 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v402: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:35:51 compute-0 ceph-mon[74243]: pgmap v402: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:35:51 compute-0 podman[150567]: 2025-10-11 04:35:51.389592591 +0000 UTC m=+4.827814325 image pull 3b86aea1acd0e80af91d8a3efa79cc99f54489e3c22377193c4282a256797350 quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Oct 11 04:35:52 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v403: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:35:52 compute-0 podman[150688]: 2025-10-11 04:35:52.621445022 +0000 UTC m=+0.089855901 container create 086abfbe8dbe9e4742c5d0e8540a7653c1c0a5c00ecda3b6e948040a718938cb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 11 04:35:52 compute-0 podman[150688]: 2025-10-11 04:35:52.581985117 +0000 UTC m=+0.050396056 image pull 3b86aea1acd0e80af91d8a3efa79cc99f54489e3c22377193c4282a256797350 quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Oct 11 04:35:52 compute-0 python3[150554]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Oct 11 04:35:52 compute-0 sudo[150552]: pam_unix(sudo:session): session closed for user root
Oct 11 04:35:53 compute-0 ceph-mon[74243]: pgmap v403: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:35:53 compute-0 sudo[150876]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqrvqyjuwtydfmttgkujjbzusqaafzgd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157353.0371358-572-208040414838408/AnsiballZ_stat.py'
Oct 11 04:35:53 compute-0 sudo[150876]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:35:53 compute-0 python3.9[150878]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:35:53 compute-0 sudo[150876]: pam_unix(sudo:session): session closed for user root
Oct 11 04:35:54 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v404: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:35:54 compute-0 sudo[151030]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmoctcnlegozbmghnmhlxbqywujyohck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157354.0065157-581-47033170554227/AnsiballZ_file.py'
Oct 11 04:35:54 compute-0 sudo[151030]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:35:54 compute-0 python3.9[151032]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:35:54 compute-0 sudo[151030]: pam_unix(sudo:session): session closed for user root
Oct 11 04:35:54 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:35:54 compute-0 sudo[151106]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkmnbtoizmfekupeftdnzncizanmmizy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157354.0065157-581-47033170554227/AnsiballZ_stat.py'
Oct 11 04:35:54 compute-0 sudo[151106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:35:55 compute-0 python3.9[151108]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:35:55 compute-0 rsyslogd[1004]: imjournal: 525 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Oct 11 04:35:55 compute-0 sudo[151106]: pam_unix(sudo:session): session closed for user root
Oct 11 04:35:55 compute-0 ceph-mon[74243]: pgmap v404: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:35:55 compute-0 ceph-mon[74243]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Oct 11 04:35:55 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:35:55.425452) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 04:35:55 compute-0 ceph-mon[74243]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Oct 11 04:35:55 compute-0 ceph-mon[74243]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760157355425500, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 790, "num_deletes": 251, "total_data_size": 1038749, "memory_usage": 1053008, "flush_reason": "Manual Compaction"}
Oct 11 04:35:55 compute-0 ceph-mon[74243]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Oct 11 04:35:55 compute-0 ceph-mon[74243]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760157355436838, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 1029350, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8867, "largest_seqno": 9656, "table_properties": {"data_size": 1025356, "index_size": 1774, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8535, "raw_average_key_size": 18, "raw_value_size": 1017305, "raw_average_value_size": 2216, "num_data_blocks": 82, "num_entries": 459, "num_filter_entries": 459, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760157285, "oldest_key_time": 1760157285, "file_creation_time": 1760157355, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7a520806-b9ee-4391-a2e1-17ca2b78e946", "db_session_id": "LQX577T3ABHDCRGGY4EM", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:35:55 compute-0 ceph-mon[74243]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 11521 microseconds, and 5764 cpu microseconds.
Oct 11 04:35:55 compute-0 ceph-mon[74243]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 04:35:55 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:35:55.436946) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 1029350 bytes OK
Oct 11 04:35:55 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:35:55.437006) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Oct 11 04:35:55 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:35:55.439041) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Oct 11 04:35:55 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:35:55.439062) EVENT_LOG_v1 {"time_micros": 1760157355439055, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 04:35:55 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:35:55.439084) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 04:35:55 compute-0 ceph-mon[74243]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 1034792, prev total WAL file size 1034792, number of live WAL files 2.
Oct 11 04:35:55 compute-0 ceph-mon[74243]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:35:55 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:35:55.439903) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Oct 11 04:35:55 compute-0 ceph-mon[74243]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 04:35:55 compute-0 ceph-mon[74243]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(1005KB)], [23(6681KB)]
Oct 11 04:35:55 compute-0 ceph-mon[74243]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760157355439956, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 7871197, "oldest_snapshot_seqno": -1}
Oct 11 04:35:55 compute-0 ceph-mon[74243]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 3272 keys, 6078695 bytes, temperature: kUnknown
Oct 11 04:35:55 compute-0 ceph-mon[74243]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760157355478758, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 6078695, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6054989, "index_size": 14445, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8197, "raw_key_size": 79327, "raw_average_key_size": 24, "raw_value_size": 5993966, "raw_average_value_size": 1831, "num_data_blocks": 630, "num_entries": 3272, "num_filter_entries": 3272, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760156706, "oldest_key_time": 0, "file_creation_time": 1760157355, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7a520806-b9ee-4391-a2e1-17ca2b78e946", "db_session_id": "LQX577T3ABHDCRGGY4EM", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:35:55 compute-0 ceph-mon[74243]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 04:35:55 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:35:55.479104) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 6078695 bytes
Oct 11 04:35:55 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:35:55.481532) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 202.3 rd, 156.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 6.5 +0.0 blob) out(5.8 +0.0 blob), read-write-amplify(13.6) write-amplify(5.9) OK, records in: 3786, records dropped: 514 output_compression: NoCompression
Oct 11 04:35:55 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:35:55.481563) EVENT_LOG_v1 {"time_micros": 1760157355481548, "job": 8, "event": "compaction_finished", "compaction_time_micros": 38910, "compaction_time_cpu_micros": 21803, "output_level": 6, "num_output_files": 1, "total_output_size": 6078695, "num_input_records": 3786, "num_output_records": 3272, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 04:35:55 compute-0 ceph-mon[74243]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:35:55 compute-0 ceph-mon[74243]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760157355482044, "job": 8, "event": "table_file_deletion", "file_number": 25}
Oct 11 04:35:55 compute-0 ceph-mon[74243]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:35:55 compute-0 ceph-mon[74243]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760157355484579, "job": 8, "event": "table_file_deletion", "file_number": 23}
Oct 11 04:35:55 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:35:55.439772) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:35:55 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:35:55.484657) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:35:55 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:35:55.484666) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:35:55 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:35:55.484669) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:35:55 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:35:55.484672) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:35:55 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:35:55.484675) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:35:55 compute-0 sudo[151257]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htwijqnpccjgipgeovqiqogahynihfio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157355.3362515-581-6683148340694/AnsiballZ_copy.py'
Oct 11 04:35:55 compute-0 sudo[151257]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:35:56 compute-0 ceph-mgr[74542]: [balancer INFO root] Optimize plan auto_2025-10-11_04:35:56
Oct 11 04:35:56 compute-0 ceph-mgr[74542]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:35:56 compute-0 ceph-mgr[74542]: [balancer INFO root] do_upmap
Oct 11 04:35:56 compute-0 ceph-mgr[74542]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.data', '.mgr', 'volumes', 'default.rgw.control', 'cephfs.cephfs.meta', 'vms', 'images', '.rgw.root', 'default.rgw.log', 'default.rgw.meta']
Oct 11 04:35:56 compute-0 ceph-mgr[74542]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:35:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:35:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:35:56 compute-0 python3.9[151259]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760157355.3362515-581-6683148340694/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:35:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:35:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:35:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:35:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:35:56 compute-0 sudo[151257]: pam_unix(sudo:session): session closed for user root
Oct 11 04:35:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:35:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:35:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:35:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:35:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:35:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:35:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:35:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:35:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:35:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:35:56 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v405: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:35:56 compute-0 sudo[151333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rttcqsftrofkkcqljtbhfrmwvxoaizty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157355.3362515-581-6683148340694/AnsiballZ_systemd.py'
Oct 11 04:35:56 compute-0 sudo[151333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:35:56 compute-0 python3.9[151335]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 11 04:35:56 compute-0 systemd[1]: Reloading.
Oct 11 04:35:56 compute-0 systemd-rc-local-generator[151374]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:35:56 compute-0 systemd-sysv-generator[151379]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:35:57 compute-0 sudo[151337]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:35:57 compute-0 sudo[151337]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:35:57 compute-0 sudo[151337]: pam_unix(sudo:session): session closed for user root
Oct 11 04:35:57 compute-0 sudo[151333]: pam_unix(sudo:session): session closed for user root
Oct 11 04:35:57 compute-0 sudo[151395]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:35:57 compute-0 sudo[151395]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:35:57 compute-0 sudo[151395]: pam_unix(sudo:session): session closed for user root
Oct 11 04:35:57 compute-0 sudo[151422]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:35:57 compute-0 sudo[151422]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:35:57 compute-0 sudo[151422]: pam_unix(sudo:session): session closed for user root
Oct 11 04:35:57 compute-0 sudo[151468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 04:35:57 compute-0 sudo[151468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:35:57 compute-0 sudo[151548]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmnssdhynxyyhlxptgltipxjlrzhdpcc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157355.3362515-581-6683148340694/AnsiballZ_systemd.py'
Oct 11 04:35:57 compute-0 sudo[151548]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:35:57 compute-0 ceph-mon[74243]: pgmap v405: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:35:57 compute-0 sudo[151468]: pam_unix(sudo:session): session closed for user root
Oct 11 04:35:57 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:35:57 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:35:57 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:35:57 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:35:57 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:35:57 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:35:57 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev 1a37f83c-9256-4777-936d-57103a21ff16 does not exist
Oct 11 04:35:57 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev ce914d19-ecd3-42ee-9b61-b8913f2dd262 does not exist
Oct 11 04:35:57 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev 20fa88aa-1748-4022-bbda-91d6fd9afc70 does not exist
Oct 11 04:35:57 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:35:57 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:35:57 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:35:57 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:35:57 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:35:57 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:35:58 compute-0 sudo[151583]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:35:58 compute-0 sudo[151583]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:35:58 compute-0 sudo[151583]: pam_unix(sudo:session): session closed for user root
Oct 11 04:35:58 compute-0 sudo[151608]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:35:58 compute-0 sudo[151608]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:35:58 compute-0 sudo[151608]: pam_unix(sudo:session): session closed for user root
Oct 11 04:35:58 compute-0 sudo[151633]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:35:58 compute-0 sudo[151633]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:35:58 compute-0 sudo[151633]: pam_unix(sudo:session): session closed for user root
Oct 11 04:35:58 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v406: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:35:58 compute-0 sudo[151658]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 04:35:58 compute-0 sudo[151658]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:35:58 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:35:58 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:35:58 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:35:58 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:35:58 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:35:58 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:35:58 compute-0 ceph-mon[74243]: pgmap v406: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:35:58 compute-0 podman[151723]: 2025-10-11 04:35:58.664105138 +0000 UTC m=+0.051940894 container create e18e915763c7584d37f3ea652e348ca07385bed12248c913f03b926a8f588488 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_hodgkin, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 11 04:35:58 compute-0 python3.9[151552]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:35:58 compute-0 systemd[1]: Started libpod-conmon-e18e915763c7584d37f3ea652e348ca07385bed12248c913f03b926a8f588488.scope.
Oct 11 04:35:58 compute-0 podman[151723]: 2025-10-11 04:35:58.643932 +0000 UTC m=+0.031767766 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:35:58 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:35:58 compute-0 systemd[1]: Reloading.
Oct 11 04:35:58 compute-0 podman[151723]: 2025-10-11 04:35:58.761818433 +0000 UTC m=+0.149654229 container init e18e915763c7584d37f3ea652e348ca07385bed12248c913f03b926a8f588488 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_hodgkin, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 11 04:35:58 compute-0 podman[151723]: 2025-10-11 04:35:58.771168644 +0000 UTC m=+0.159004400 container start e18e915763c7584d37f3ea652e348ca07385bed12248c913f03b926a8f588488 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_hodgkin, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 11 04:35:58 compute-0 podman[151723]: 2025-10-11 04:35:58.775018729 +0000 UTC m=+0.162854575 container attach e18e915763c7584d37f3ea652e348ca07385bed12248c913f03b926a8f588488 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_hodgkin, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 11 04:35:58 compute-0 clever_hodgkin[151741]: 167 167
Oct 11 04:35:58 compute-0 podman[151723]: 2025-10-11 04:35:58.777542692 +0000 UTC m=+0.165378458 container died e18e915763c7584d37f3ea652e348ca07385bed12248c913f03b926a8f588488 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:35:58 compute-0 systemd-sysv-generator[151786]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:35:58 compute-0 systemd-rc-local-generator[151779]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:35:59 compute-0 systemd[1]: libpod-e18e915763c7584d37f3ea652e348ca07385bed12248c913f03b926a8f588488.scope: Deactivated successfully.
Oct 11 04:35:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-db1e8dc36330839ac635b8de12d3616cd82ae6c8b47e1e5867e6310e86f2877d-merged.mount: Deactivated successfully.
Oct 11 04:35:59 compute-0 podman[151723]: 2025-10-11 04:35:59.052167157 +0000 UTC m=+0.440002943 container remove e18e915763c7584d37f3ea652e348ca07385bed12248c913f03b926a8f588488 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 04:35:59 compute-0 systemd[1]: Starting ovn_controller container...
Oct 11 04:35:59 compute-0 systemd[1]: libpod-conmon-e18e915763c7584d37f3ea652e348ca07385bed12248c913f03b926a8f588488.scope: Deactivated successfully.
Oct 11 04:35:59 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:35:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a02f8e4ef092415d5449b08454e2e494f31ba935091c38616d6a0cf05d3e6a0/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Oct 11 04:35:59 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 086abfbe8dbe9e4742c5d0e8540a7653c1c0a5c00ecda3b6e948040a718938cb.
Oct 11 04:35:59 compute-0 podman[151797]: 2025-10-11 04:35:59.23803605 +0000 UTC m=+0.154758805 container init 086abfbe8dbe9e4742c5d0e8540a7653c1c0a5c00ecda3b6e948040a718938cb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 11 04:35:59 compute-0 ovn_controller[151813]: + sudo -E kolla_set_configs
Oct 11 04:35:59 compute-0 podman[151797]: 2025-10-11 04:35:59.268031082 +0000 UTC m=+0.184753757 container start 086abfbe8dbe9e4742c5d0e8540a7653c1c0a5c00ecda3b6e948040a718938cb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:35:59 compute-0 podman[151821]: 2025-10-11 04:35:59.275365113 +0000 UTC m=+0.072093683 container create 1e3e69770407e289b8ee531ec8fc8492903a34c3d6dfa3ed418934b40c8c05d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_nobel, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:35:59 compute-0 edpm-start-podman-container[151797]: ovn_controller
Oct 11 04:35:59 compute-0 systemd[1]: Created slice User Slice of UID 0.
Oct 11 04:35:59 compute-0 systemd[1]: Starting User Runtime Directory /run/user/0...
Oct 11 04:35:59 compute-0 podman[151821]: 2025-10-11 04:35:59.234285678 +0000 UTC m=+0.031014278 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:35:59 compute-0 podman[151834]: 2025-10-11 04:35:59.35012751 +0000 UTC m=+0.068029862 container health_status 086abfbe8dbe9e4742c5d0e8540a7653c1c0a5c00ecda3b6e948040a718938cb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 04:35:59 compute-0 systemd[1]: Started libpod-conmon-1e3e69770407e289b8ee531ec8fc8492903a34c3d6dfa3ed418934b40c8c05d1.scope.
Oct 11 04:35:59 compute-0 systemd[1]: Finished User Runtime Directory /run/user/0.
Oct 11 04:35:59 compute-0 systemd[1]: 086abfbe8dbe9e4742c5d0e8540a7653c1c0a5c00ecda3b6e948040a718938cb-705a0d35928b75fe.service: Main process exited, code=exited, status=1/FAILURE
Oct 11 04:35:59 compute-0 systemd[1]: 086abfbe8dbe9e4742c5d0e8540a7653c1c0a5c00ecda3b6e948040a718938cb-705a0d35928b75fe.service: Failed with result 'exit-code'.
Oct 11 04:35:59 compute-0 systemd[1]: Starting User Manager for UID 0...
Oct 11 04:35:59 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:35:59 compute-0 systemd[151877]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Oct 11 04:35:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35b1c8b662ddba9bd3666e646d153f965601fa3c94b01bf58bdb6639842b6ff2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:35:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35b1c8b662ddba9bd3666e646d153f965601fa3c94b01bf58bdb6639842b6ff2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:35:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35b1c8b662ddba9bd3666e646d153f965601fa3c94b01bf58bdb6639842b6ff2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:35:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35b1c8b662ddba9bd3666e646d153f965601fa3c94b01bf58bdb6639842b6ff2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:35:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35b1c8b662ddba9bd3666e646d153f965601fa3c94b01bf58bdb6639842b6ff2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:35:59 compute-0 podman[151821]: 2025-10-11 04:35:59.401572462 +0000 UTC m=+0.198301062 container init 1e3e69770407e289b8ee531ec8fc8492903a34c3d6dfa3ed418934b40c8c05d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:35:59 compute-0 podman[151821]: 2025-10-11 04:35:59.418161252 +0000 UTC m=+0.214889862 container start 1e3e69770407e289b8ee531ec8fc8492903a34c3d6dfa3ed418934b40c8c05d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_nobel, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:35:59 compute-0 podman[151821]: 2025-10-11 04:35:59.422631632 +0000 UTC m=+0.219360242 container attach 1e3e69770407e289b8ee531ec8fc8492903a34c3d6dfa3ed418934b40c8c05d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_nobel, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:35:59 compute-0 edpm-start-podman-container[151793]: Creating additional drop-in dependency for "ovn_controller" (086abfbe8dbe9e4742c5d0e8540a7653c1c0a5c00ecda3b6e948040a718938cb)
Oct 11 04:35:59 compute-0 systemd[1]: Reloading.
Oct 11 04:35:59 compute-0 systemd[151877]: Queued start job for default target Main User Target.
Oct 11 04:35:59 compute-0 systemd[151877]: Created slice User Application Slice.
Oct 11 04:35:59 compute-0 systemd[151877]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Oct 11 04:35:59 compute-0 systemd[151877]: Started Daily Cleanup of User's Temporary Directories.
Oct 11 04:35:59 compute-0 systemd[151877]: Reached target Paths.
Oct 11 04:35:59 compute-0 systemd[151877]: Reached target Timers.
Oct 11 04:35:59 compute-0 systemd[151877]: Starting D-Bus User Message Bus Socket...
Oct 11 04:35:59 compute-0 systemd[151877]: Starting Create User's Volatile Files and Directories...
Oct 11 04:35:59 compute-0 systemd[151877]: Listening on D-Bus User Message Bus Socket.
Oct 11 04:35:59 compute-0 systemd[151877]: Reached target Sockets.
Oct 11 04:35:59 compute-0 systemd[151877]: Finished Create User's Volatile Files and Directories.
Oct 11 04:35:59 compute-0 systemd[151877]: Reached target Basic System.
Oct 11 04:35:59 compute-0 systemd[151877]: Reached target Main User Target.
Oct 11 04:35:59 compute-0 systemd[151877]: Startup finished in 148ms.
Oct 11 04:35:59 compute-0 systemd-rc-local-generator[151925]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:35:59 compute-0 systemd-sysv-generator[151928]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:35:59 compute-0 systemd[1]: Started User Manager for UID 0.
Oct 11 04:35:59 compute-0 systemd[1]: Started ovn_controller container.
Oct 11 04:35:59 compute-0 systemd[1]: Started Session c1 of User root.
Oct 11 04:35:59 compute-0 sudo[151548]: pam_unix(sudo:session): session closed for user root
Oct 11 04:35:59 compute-0 ovn_controller[151813]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 11 04:35:59 compute-0 ovn_controller[151813]: INFO:__main__:Validating config file
Oct 11 04:35:59 compute-0 ovn_controller[151813]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 11 04:35:59 compute-0 ovn_controller[151813]: INFO:__main__:Writing out command to execute
Oct 11 04:35:59 compute-0 ovn_controller[151813]: ++ cat /run_command
Oct 11 04:35:59 compute-0 systemd[1]: session-c1.scope: Deactivated successfully.
Oct 11 04:35:59 compute-0 ovn_controller[151813]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Oct 11 04:35:59 compute-0 ovn_controller[151813]: + ARGS=
Oct 11 04:35:59 compute-0 ovn_controller[151813]: + sudo kolla_copy_cacerts
Oct 11 04:35:59 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:35:59 compute-0 systemd[1]: Started Session c2 of User root.
Oct 11 04:35:59 compute-0 systemd[1]: session-c2.scope: Deactivated successfully.
Oct 11 04:35:59 compute-0 ovn_controller[151813]: + [[ ! -n '' ]]
Oct 11 04:35:59 compute-0 ovn_controller[151813]: + . kolla_extend_start
Oct 11 04:35:59 compute-0 ovn_controller[151813]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Oct 11 04:35:59 compute-0 ovn_controller[151813]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Oct 11 04:35:59 compute-0 ovn_controller[151813]: + umask 0022
Oct 11 04:35:59 compute-0 ovn_controller[151813]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Oct 11 04:35:59 compute-0 ovn_controller[151813]: 2025-10-11T04:35:59Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Oct 11 04:35:59 compute-0 ovn_controller[151813]: 2025-10-11T04:35:59Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Oct 11 04:35:59 compute-0 ovn_controller[151813]: 2025-10-11T04:35:59Z|00003|main|INFO|OVN internal version is : [24.03.7-20.33.0-76.8]
Oct 11 04:35:59 compute-0 ovn_controller[151813]: 2025-10-11T04:35:59Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Oct 11 04:35:59 compute-0 ovn_controller[151813]: 2025-10-11T04:35:59Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Oct 11 04:35:59 compute-0 ovn_controller[151813]: 2025-10-11T04:35:59Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Oct 11 04:35:59 compute-0 NetworkManager[44888]: <info>  [1760157359.9994] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Oct 11 04:36:00 compute-0 NetworkManager[44888]: <info>  [1760157360.0003] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 11 04:36:00 compute-0 NetworkManager[44888]: <info>  [1760157360.0019] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Oct 11 04:36:00 compute-0 NetworkManager[44888]: <info>  [1760157360.0026] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Oct 11 04:36:00 compute-0 NetworkManager[44888]: <info>  [1760157360.0031] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct 11 04:36:00 compute-0 kernel: br-int: entered promiscuous mode
Oct 11 04:36:00 compute-0 ovn_controller[151813]: 2025-10-11T04:36:00Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Oct 11 04:36:00 compute-0 ovn_controller[151813]: 2025-10-11T04:36:00Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct 11 04:36:00 compute-0 ovn_controller[151813]: 2025-10-11T04:36:00Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 11 04:36:00 compute-0 ovn_controller[151813]: 2025-10-11T04:36:00Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Oct 11 04:36:00 compute-0 ovn_controller[151813]: 2025-10-11T04:36:00Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Oct 11 04:36:00 compute-0 ovn_controller[151813]: 2025-10-11T04:36:00Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Oct 11 04:36:00 compute-0 ovn_controller[151813]: 2025-10-11T04:36:00Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Oct 11 04:36:00 compute-0 ovn_controller[151813]: 2025-10-11T04:36:00Z|00014|main|INFO|OVS feature set changed, force recompute.
Oct 11 04:36:00 compute-0 ovn_controller[151813]: 2025-10-11T04:36:00Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct 11 04:36:00 compute-0 ovn_controller[151813]: 2025-10-11T04:36:00Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 11 04:36:00 compute-0 ovn_controller[151813]: 2025-10-11T04:36:00Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct 11 04:36:00 compute-0 ovn_controller[151813]: 2025-10-11T04:36:00Z|00018|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Oct 11 04:36:00 compute-0 ovn_controller[151813]: 2025-10-11T04:36:00Z|00019|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct 11 04:36:00 compute-0 ovn_controller[151813]: 2025-10-11T04:36:00Z|00020|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Oct 11 04:36:00 compute-0 ovn_controller[151813]: 2025-10-11T04:36:00Z|00021|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Oct 11 04:36:00 compute-0 ovn_controller[151813]: 2025-10-11T04:36:00Z|00022|main|INFO|OVS feature set changed, force recompute.
Oct 11 04:36:00 compute-0 ovn_controller[151813]: 2025-10-11T04:36:00Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Oct 11 04:36:00 compute-0 ovn_controller[151813]: 2025-10-11T04:36:00Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Oct 11 04:36:00 compute-0 ovn_controller[151813]: 2025-10-11T04:36:00Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct 11 04:36:00 compute-0 ovn_controller[151813]: 2025-10-11T04:36:00Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct 11 04:36:00 compute-0 ovn_controller[151813]: 2025-10-11T04:36:00Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 11 04:36:00 compute-0 ovn_controller[151813]: 2025-10-11T04:36:00Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 11 04:36:00 compute-0 NetworkManager[44888]: <info>  [1760157360.0433] manager: (ovn-208495-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Oct 11 04:36:00 compute-0 systemd-udevd[151998]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:36:00 compute-0 systemd-udevd[152001]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:36:00 compute-0 kernel: genev_sys_6081: entered promiscuous mode
Oct 11 04:36:00 compute-0 ovn_controller[151813]: 2025-10-11T04:36:00Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct 11 04:36:00 compute-0 NetworkManager[44888]: <info>  [1760157360.0800] device (genev_sys_6081): carrier: link connected
Oct 11 04:36:00 compute-0 ovn_controller[151813]: 2025-10-11T04:36:00Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct 11 04:36:00 compute-0 NetworkManager[44888]: <info>  [1760157360.0803] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/20)
Oct 11 04:36:00 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v407: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:36:00 compute-0 sudo[152120]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxsayntquvmjauaxmyqoyxgcqxkslsvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157360.063309-609-121725479678233/AnsiballZ_command.py'
Oct 11 04:36:00 compute-0 sudo[152120]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:36:00 compute-0 kind_nobel[151875]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:36:00 compute-0 kind_nobel[151875]: --> relative data size: 1.0
Oct 11 04:36:00 compute-0 kind_nobel[151875]: --> All data devices are unavailable
Oct 11 04:36:00 compute-0 systemd[1]: libpod-1e3e69770407e289b8ee531ec8fc8492903a34c3d6dfa3ed418934b40c8c05d1.scope: Deactivated successfully.
Oct 11 04:36:00 compute-0 systemd[1]: libpod-1e3e69770407e289b8ee531ec8fc8492903a34c3d6dfa3ed418934b40c8c05d1.scope: Consumed 1.062s CPU time.
Oct 11 04:36:00 compute-0 podman[151821]: 2025-10-11 04:36:00.559104937 +0000 UTC m=+1.355833547 container died 1e3e69770407e289b8ee531ec8fc8492903a34c3d6dfa3ed418934b40c8c05d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_nobel, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:36:00 compute-0 python3.9[152124]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:36:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-35b1c8b662ddba9bd3666e646d153f965601fa3c94b01bf58bdb6639842b6ff2-merged.mount: Deactivated successfully.
Oct 11 04:36:00 compute-0 ovs-vsctl[152142]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Oct 11 04:36:00 compute-0 sudo[152120]: pam_unix(sudo:session): session closed for user root
Oct 11 04:36:00 compute-0 podman[151821]: 2025-10-11 04:36:00.630559043 +0000 UTC m=+1.427287613 container remove 1e3e69770407e289b8ee531ec8fc8492903a34c3d6dfa3ed418934b40c8c05d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_nobel, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507)
Oct 11 04:36:00 compute-0 systemd[1]: libpod-conmon-1e3e69770407e289b8ee531ec8fc8492903a34c3d6dfa3ed418934b40c8c05d1.scope: Deactivated successfully.
Oct 11 04:36:00 compute-0 sudo[151658]: pam_unix(sudo:session): session closed for user root
Oct 11 04:36:00 compute-0 sudo[152144]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:36:00 compute-0 sudo[152144]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:36:00 compute-0 sudo[152144]: pam_unix(sudo:session): session closed for user root
Oct 11 04:36:00 compute-0 sudo[152192]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:36:00 compute-0 sudo[152192]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:36:00 compute-0 sudo[152192]: pam_unix(sudo:session): session closed for user root
Oct 11 04:36:00 compute-0 sudo[152224]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:36:00 compute-0 sudo[152224]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:36:00 compute-0 sudo[152224]: pam_unix(sudo:session): session closed for user root
Oct 11 04:36:00 compute-0 sudo[152268]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -- lvm list --format json
Oct 11 04:36:00 compute-0 sudo[152268]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:36:01 compute-0 sudo[152419]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crylwjewrsjsfcjwplowqhjpfbxqjsbj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157360.8174736-617-95154792792265/AnsiballZ_command.py'
Oct 11 04:36:01 compute-0 sudo[152419]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:36:01 compute-0 ceph-mon[74243]: pgmap v407: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:36:01 compute-0 podman[152436]: 2025-10-11 04:36:01.367431552 +0000 UTC m=+0.072357489 container create e0d20beb48456115e5def8b8e4eab60c3ba90bc4ecee77d906db9ef66b05242b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_kirch, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:36:01 compute-0 systemd[1]: Started libpod-conmon-e0d20beb48456115e5def8b8e4eab60c3ba90bc4ecee77d906db9ef66b05242b.scope.
Oct 11 04:36:01 compute-0 podman[152436]: 2025-10-11 04:36:01.333042312 +0000 UTC m=+0.037968309 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:36:01 compute-0 python3.9[152431]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:36:01 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:36:01 compute-0 ovs-vsctl[152456]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Oct 11 04:36:01 compute-0 podman[152436]: 2025-10-11 04:36:01.476011175 +0000 UTC m=+0.180937132 container init e0d20beb48456115e5def8b8e4eab60c3ba90bc4ecee77d906db9ef66b05242b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_kirch, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:36:01 compute-0 podman[152436]: 2025-10-11 04:36:01.486611377 +0000 UTC m=+0.191537314 container start e0d20beb48456115e5def8b8e4eab60c3ba90bc4ecee77d906db9ef66b05242b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_kirch, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:36:01 compute-0 sudo[152419]: pam_unix(sudo:session): session closed for user root
Oct 11 04:36:01 compute-0 podman[152436]: 2025-10-11 04:36:01.492766409 +0000 UTC m=+0.197692416 container attach e0d20beb48456115e5def8b8e4eab60c3ba90bc4ecee77d906db9ef66b05242b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_kirch, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Oct 11 04:36:01 compute-0 kind_kirch[152452]: 167 167
Oct 11 04:36:01 compute-0 systemd[1]: libpod-e0d20beb48456115e5def8b8e4eab60c3ba90bc4ecee77d906db9ef66b05242b.scope: Deactivated successfully.
Oct 11 04:36:01 compute-0 podman[152436]: 2025-10-11 04:36:01.494502962 +0000 UTC m=+0.199428899 container died e0d20beb48456115e5def8b8e4eab60c3ba90bc4ecee77d906db9ef66b05242b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_kirch, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 11 04:36:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-f1587f2b5292221d0413a7a6347998ac94e810321256115d63cde755fc55c182-merged.mount: Deactivated successfully.
Oct 11 04:36:01 compute-0 podman[152436]: 2025-10-11 04:36:01.542587321 +0000 UTC m=+0.247513238 container remove e0d20beb48456115e5def8b8e4eab60c3ba90bc4ecee77d906db9ef66b05242b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_kirch, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:36:01 compute-0 systemd[1]: libpod-conmon-e0d20beb48456115e5def8b8e4eab60c3ba90bc4ecee77d906db9ef66b05242b.scope: Deactivated successfully.
Oct 11 04:36:01 compute-0 podman[152502]: 2025-10-11 04:36:01.747761301 +0000 UTC m=+0.047130646 container create ef103f9f90f8fc24ead4a40a14f118fddc4ff4ec5c8149fe2c7f282e69fe14fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 11 04:36:01 compute-0 systemd[1]: Started libpod-conmon-ef103f9f90f8fc24ead4a40a14f118fddc4ff4ec5c8149fe2c7f282e69fe14fc.scope.
Oct 11 04:36:01 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:36:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b08306e614c4031358b3cee37259da6541641c7d7dcd0a300a04c8dba657f779/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:36:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b08306e614c4031358b3cee37259da6541641c7d7dcd0a300a04c8dba657f779/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:36:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b08306e614c4031358b3cee37259da6541641c7d7dcd0a300a04c8dba657f779/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:36:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b08306e614c4031358b3cee37259da6541641c7d7dcd0a300a04c8dba657f779/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:36:01 compute-0 podman[152502]: 2025-10-11 04:36:01.728624048 +0000 UTC m=+0.027993393 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:36:01 compute-0 podman[152502]: 2025-10-11 04:36:01.831711845 +0000 UTC m=+0.131081190 container init ef103f9f90f8fc24ead4a40a14f118fddc4ff4ec5c8149fe2c7f282e69fe14fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_lalande, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:36:01 compute-0 podman[152502]: 2025-10-11 04:36:01.840126093 +0000 UTC m=+0.139495418 container start ef103f9f90f8fc24ead4a40a14f118fddc4ff4ec5c8149fe2c7f282e69fe14fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_lalande, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 11 04:36:01 compute-0 podman[152502]: 2025-10-11 04:36:01.843531518 +0000 UTC m=+0.142900843 container attach ef103f9f90f8fc24ead4a40a14f118fddc4ff4ec5c8149fe2c7f282e69fe14fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_lalande, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:36:02 compute-0 sudo[152648]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljjngsiiyfhxfzukkjjfslwvzebymmwn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157361.895507-631-26379338166586/AnsiballZ_command.py'
Oct 11 04:36:02 compute-0 sudo[152648]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:36:02 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v408: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:36:02 compute-0 python3.9[152650]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:36:02 compute-0 ovs-vsctl[152651]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Oct 11 04:36:02 compute-0 sudo[152648]: pam_unix(sudo:session): session closed for user root
Oct 11 04:36:02 compute-0 boring_lalande[152518]: {
Oct 11 04:36:02 compute-0 boring_lalande[152518]:     "0": [
Oct 11 04:36:02 compute-0 boring_lalande[152518]:         {
Oct 11 04:36:02 compute-0 boring_lalande[152518]:             "devices": [
Oct 11 04:36:02 compute-0 boring_lalande[152518]:                 "/dev/loop3"
Oct 11 04:36:02 compute-0 boring_lalande[152518]:             ],
Oct 11 04:36:02 compute-0 boring_lalande[152518]:             "lv_name": "ceph_lv0",
Oct 11 04:36:02 compute-0 boring_lalande[152518]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:36:02 compute-0 boring_lalande[152518]:             "lv_size": "21470642176",
Oct 11 04:36:02 compute-0 boring_lalande[152518]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29ed28f5-c2da-4c6f-bb64-dc7391248f4a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:36:02 compute-0 boring_lalande[152518]:             "lv_uuid": "SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf",
Oct 11 04:36:02 compute-0 boring_lalande[152518]:             "name": "ceph_lv0",
Oct 11 04:36:02 compute-0 boring_lalande[152518]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:36:02 compute-0 boring_lalande[152518]:             "tags": {
Oct 11 04:36:02 compute-0 boring_lalande[152518]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:36:02 compute-0 boring_lalande[152518]:                 "ceph.block_uuid": "SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf",
Oct 11 04:36:02 compute-0 boring_lalande[152518]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:36:02 compute-0 boring_lalande[152518]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:36:02 compute-0 boring_lalande[152518]:                 "ceph.cluster_name": "ceph",
Oct 11 04:36:02 compute-0 boring_lalande[152518]:                 "ceph.crush_device_class": "",
Oct 11 04:36:02 compute-0 boring_lalande[152518]:                 "ceph.encrypted": "0",
Oct 11 04:36:02 compute-0 boring_lalande[152518]:                 "ceph.osd_fsid": "29ed28f5-c2da-4c6f-bb64-dc7391248f4a",
Oct 11 04:36:02 compute-0 boring_lalande[152518]:                 "ceph.osd_id": "0",
Oct 11 04:36:02 compute-0 boring_lalande[152518]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:36:02 compute-0 boring_lalande[152518]:                 "ceph.type": "block",
Oct 11 04:36:02 compute-0 boring_lalande[152518]:                 "ceph.vdo": "0"
Oct 11 04:36:02 compute-0 boring_lalande[152518]:             },
Oct 11 04:36:02 compute-0 boring_lalande[152518]:             "type": "block",
Oct 11 04:36:02 compute-0 boring_lalande[152518]:             "vg_name": "ceph_vg0"
Oct 11 04:36:02 compute-0 boring_lalande[152518]:         }
Oct 11 04:36:02 compute-0 boring_lalande[152518]:     ],
Oct 11 04:36:02 compute-0 boring_lalande[152518]:     "1": [
Oct 11 04:36:02 compute-0 boring_lalande[152518]:         {
Oct 11 04:36:02 compute-0 boring_lalande[152518]:             "devices": [
Oct 11 04:36:02 compute-0 boring_lalande[152518]:                 "/dev/loop4"
Oct 11 04:36:02 compute-0 boring_lalande[152518]:             ],
Oct 11 04:36:02 compute-0 boring_lalande[152518]:             "lv_name": "ceph_lv1",
Oct 11 04:36:02 compute-0 boring_lalande[152518]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:36:02 compute-0 boring_lalande[152518]:             "lv_size": "21470642176",
Oct 11 04:36:02 compute-0 boring_lalande[152518]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=235849fc-4683-43e5-9b6a-a0d6f8d1cee8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:36:02 compute-0 boring_lalande[152518]:             "lv_uuid": "N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol",
Oct 11 04:36:02 compute-0 boring_lalande[152518]:             "name": "ceph_lv1",
Oct 11 04:36:02 compute-0 boring_lalande[152518]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:36:02 compute-0 boring_lalande[152518]:             "tags": {
Oct 11 04:36:02 compute-0 boring_lalande[152518]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:36:02 compute-0 boring_lalande[152518]:                 "ceph.block_uuid": "N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol",
Oct 11 04:36:02 compute-0 boring_lalande[152518]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:36:02 compute-0 boring_lalande[152518]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:36:02 compute-0 boring_lalande[152518]:                 "ceph.cluster_name": "ceph",
Oct 11 04:36:02 compute-0 boring_lalande[152518]:                 "ceph.crush_device_class": "",
Oct 11 04:36:02 compute-0 boring_lalande[152518]:                 "ceph.encrypted": "0",
Oct 11 04:36:02 compute-0 boring_lalande[152518]:                 "ceph.osd_fsid": "235849fc-4683-43e5-9b6a-a0d6f8d1cee8",
Oct 11 04:36:02 compute-0 boring_lalande[152518]:                 "ceph.osd_id": "1",
Oct 11 04:36:02 compute-0 boring_lalande[152518]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:36:02 compute-0 boring_lalande[152518]:                 "ceph.type": "block",
Oct 11 04:36:02 compute-0 boring_lalande[152518]:                 "ceph.vdo": "0"
Oct 11 04:36:02 compute-0 boring_lalande[152518]:             },
Oct 11 04:36:02 compute-0 boring_lalande[152518]:             "type": "block",
Oct 11 04:36:02 compute-0 boring_lalande[152518]:             "vg_name": "ceph_vg1"
Oct 11 04:36:02 compute-0 boring_lalande[152518]:         }
Oct 11 04:36:02 compute-0 boring_lalande[152518]:     ],
Oct 11 04:36:02 compute-0 boring_lalande[152518]:     "2": [
Oct 11 04:36:02 compute-0 boring_lalande[152518]:         {
Oct 11 04:36:02 compute-0 boring_lalande[152518]:             "devices": [
Oct 11 04:36:02 compute-0 boring_lalande[152518]:                 "/dev/loop5"
Oct 11 04:36:02 compute-0 boring_lalande[152518]:             ],
Oct 11 04:36:02 compute-0 boring_lalande[152518]:             "lv_name": "ceph_lv2",
Oct 11 04:36:02 compute-0 boring_lalande[152518]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:36:02 compute-0 boring_lalande[152518]:             "lv_size": "21470642176",
Oct 11 04:36:02 compute-0 boring_lalande[152518]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=30486846-2cc7-4787-8e36-0fb42ef328c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:36:02 compute-0 boring_lalande[152518]:             "lv_uuid": "HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a",
Oct 11 04:36:02 compute-0 boring_lalande[152518]:             "name": "ceph_lv2",
Oct 11 04:36:02 compute-0 boring_lalande[152518]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:36:02 compute-0 boring_lalande[152518]:             "tags": {
Oct 11 04:36:02 compute-0 boring_lalande[152518]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:36:02 compute-0 boring_lalande[152518]:                 "ceph.block_uuid": "HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a",
Oct 11 04:36:02 compute-0 boring_lalande[152518]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:36:02 compute-0 boring_lalande[152518]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:36:02 compute-0 boring_lalande[152518]:                 "ceph.cluster_name": "ceph",
Oct 11 04:36:02 compute-0 boring_lalande[152518]:                 "ceph.crush_device_class": "",
Oct 11 04:36:02 compute-0 boring_lalande[152518]:                 "ceph.encrypted": "0",
Oct 11 04:36:02 compute-0 boring_lalande[152518]:                 "ceph.osd_fsid": "30486846-2cc7-4787-8e36-0fb42ef328c5",
Oct 11 04:36:02 compute-0 boring_lalande[152518]:                 "ceph.osd_id": "2",
Oct 11 04:36:02 compute-0 boring_lalande[152518]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:36:02 compute-0 boring_lalande[152518]:                 "ceph.type": "block",
Oct 11 04:36:02 compute-0 boring_lalande[152518]:                 "ceph.vdo": "0"
Oct 11 04:36:02 compute-0 boring_lalande[152518]:             },
Oct 11 04:36:02 compute-0 boring_lalande[152518]:             "type": "block",
Oct 11 04:36:02 compute-0 boring_lalande[152518]:             "vg_name": "ceph_vg2"
Oct 11 04:36:02 compute-0 boring_lalande[152518]:         }
Oct 11 04:36:02 compute-0 boring_lalande[152518]:     ]
Oct 11 04:36:02 compute-0 boring_lalande[152518]: }
Oct 11 04:36:02 compute-0 systemd[1]: libpod-ef103f9f90f8fc24ead4a40a14f118fddc4ff4ec5c8149fe2c7f282e69fe14fc.scope: Deactivated successfully.
Oct 11 04:36:02 compute-0 podman[152502]: 2025-10-11 04:36:02.62385371 +0000 UTC m=+0.923223045 container died ef103f9f90f8fc24ead4a40a14f118fddc4ff4ec5c8149fe2c7f282e69fe14fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 11 04:36:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-b08306e614c4031358b3cee37259da6541641c7d7dcd0a300a04c8dba657f779-merged.mount: Deactivated successfully.
Oct 11 04:36:02 compute-0 podman[152502]: 2025-10-11 04:36:02.685562955 +0000 UTC m=+0.984932290 container remove ef103f9f90f8fc24ead4a40a14f118fddc4ff4ec5c8149fe2c7f282e69fe14fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:36:02 compute-0 systemd[1]: libpod-conmon-ef103f9f90f8fc24ead4a40a14f118fddc4ff4ec5c8149fe2c7f282e69fe14fc.scope: Deactivated successfully.
Oct 11 04:36:02 compute-0 sudo[152268]: pam_unix(sudo:session): session closed for user root
Oct 11 04:36:02 compute-0 sudo[152693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:36:02 compute-0 sudo[152693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:36:02 compute-0 sudo[152693]: pam_unix(sudo:session): session closed for user root
Oct 11 04:36:02 compute-0 sudo[152718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:36:02 compute-0 sudo[152718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:36:02 compute-0 sudo[152718]: pam_unix(sudo:session): session closed for user root
Oct 11 04:36:02 compute-0 sudo[152743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:36:02 compute-0 sudo[152743]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:36:02 compute-0 sudo[152743]: pam_unix(sudo:session): session closed for user root
Oct 11 04:36:02 compute-0 sshd-session[140009]: Connection closed by 192.168.122.30 port 56950
Oct 11 04:36:02 compute-0 sshd-session[140006]: pam_unix(sshd:session): session closed for user zuul
Oct 11 04:36:02 compute-0 systemd[1]: session-47.scope: Deactivated successfully.
Oct 11 04:36:02 compute-0 systemd[1]: session-47.scope: Consumed 1min 5.659s CPU time.
Oct 11 04:36:02 compute-0 systemd-logind[801]: Session 47 logged out. Waiting for processes to exit.
Oct 11 04:36:02 compute-0 systemd-logind[801]: Removed session 47.
Oct 11 04:36:03 compute-0 sudo[152768]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -- raw list --format json
Oct 11 04:36:03 compute-0 sudo[152768]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:36:03 compute-0 ceph-mon[74243]: pgmap v408: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:36:03 compute-0 podman[152835]: 2025-10-11 04:36:03.425120271 +0000 UTC m=+0.047826113 container create e1d662d7bd452dc6aa1f0f3fcf6bfbe6088a36202bcb4efdcfc47d73f94bc257 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_fermat, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:36:03 compute-0 systemd[1]: Started libpod-conmon-e1d662d7bd452dc6aa1f0f3fcf6bfbe6088a36202bcb4efdcfc47d73f94bc257.scope.
Oct 11 04:36:03 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:36:03 compute-0 podman[152835]: 2025-10-11 04:36:03.405096246 +0000 UTC m=+0.027802118 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:36:03 compute-0 podman[152835]: 2025-10-11 04:36:03.514143801 +0000 UTC m=+0.136849693 container init e1d662d7bd452dc6aa1f0f3fcf6bfbe6088a36202bcb4efdcfc47d73f94bc257 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:36:03 compute-0 podman[152835]: 2025-10-11 04:36:03.521431401 +0000 UTC m=+0.144137213 container start e1d662d7bd452dc6aa1f0f3fcf6bfbe6088a36202bcb4efdcfc47d73f94bc257 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_fermat, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:36:03 compute-0 podman[152835]: 2025-10-11 04:36:03.525408949 +0000 UTC m=+0.148114851 container attach e1d662d7bd452dc6aa1f0f3fcf6bfbe6088a36202bcb4efdcfc47d73f94bc257 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_fermat, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:36:03 compute-0 funny_fermat[152851]: 167 167
Oct 11 04:36:03 compute-0 systemd[1]: libpod-e1d662d7bd452dc6aa1f0f3fcf6bfbe6088a36202bcb4efdcfc47d73f94bc257.scope: Deactivated successfully.
Oct 11 04:36:03 compute-0 conmon[152851]: conmon e1d662d7bd452dc6aa1f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e1d662d7bd452dc6aa1f0f3fcf6bfbe6088a36202bcb4efdcfc47d73f94bc257.scope/container/memory.events
Oct 11 04:36:03 compute-0 podman[152835]: 2025-10-11 04:36:03.530269699 +0000 UTC m=+0.152975541 container died e1d662d7bd452dc6aa1f0f3fcf6bfbe6088a36202bcb4efdcfc47d73f94bc257 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_fermat, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 11 04:36:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-aa8d108b2ea6183832aca83bc080b240ece8320f7ee5f9edbd35882987fadd6f-merged.mount: Deactivated successfully.
Oct 11 04:36:03 compute-0 podman[152835]: 2025-10-11 04:36:03.578953623 +0000 UTC m=+0.201659435 container remove e1d662d7bd452dc6aa1f0f3fcf6bfbe6088a36202bcb4efdcfc47d73f94bc257 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_fermat, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:36:03 compute-0 systemd[1]: libpod-conmon-e1d662d7bd452dc6aa1f0f3fcf6bfbe6088a36202bcb4efdcfc47d73f94bc257.scope: Deactivated successfully.
Oct 11 04:36:03 compute-0 podman[152875]: 2025-10-11 04:36:03.741833458 +0000 UTC m=+0.053943894 container create 1618063389ca8dd4c66ed789a31c03eaacc4e38b7f5387b94bc73cbfb9a1b3b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_heisenberg, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 04:36:03 compute-0 systemd[1]: Started libpod-conmon-1618063389ca8dd4c66ed789a31c03eaacc4e38b7f5387b94bc73cbfb9a1b3b2.scope.
Oct 11 04:36:03 compute-0 podman[152875]: 2025-10-11 04:36:03.712516514 +0000 UTC m=+0.024626970 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:36:03 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:36:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a3a81673f72cb8aec1fda845b0e8aa0cbde325409ff5e561edae8e9331cc1bd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:36:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a3a81673f72cb8aec1fda845b0e8aa0cbde325409ff5e561edae8e9331cc1bd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:36:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a3a81673f72cb8aec1fda845b0e8aa0cbde325409ff5e561edae8e9331cc1bd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:36:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a3a81673f72cb8aec1fda845b0e8aa0cbde325409ff5e561edae8e9331cc1bd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:36:03 compute-0 podman[152875]: 2025-10-11 04:36:03.836057127 +0000 UTC m=+0.148167533 container init 1618063389ca8dd4c66ed789a31c03eaacc4e38b7f5387b94bc73cbfb9a1b3b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_heisenberg, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:36:03 compute-0 podman[152875]: 2025-10-11 04:36:03.852847402 +0000 UTC m=+0.164957798 container start 1618063389ca8dd4c66ed789a31c03eaacc4e38b7f5387b94bc73cbfb9a1b3b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_heisenberg, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 11 04:36:03 compute-0 podman[152875]: 2025-10-11 04:36:03.856717777 +0000 UTC m=+0.168828173 container attach 1618063389ca8dd4c66ed789a31c03eaacc4e38b7f5387b94bc73cbfb9a1b3b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:36:04 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v409: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:36:04 compute-0 hardcore_heisenberg[152891]: {
Oct 11 04:36:04 compute-0 hardcore_heisenberg[152891]:     "235849fc-4683-43e5-9b6a-a0d6f8d1cee8": {
Oct 11 04:36:04 compute-0 hardcore_heisenberg[152891]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:36:04 compute-0 hardcore_heisenberg[152891]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:36:04 compute-0 hardcore_heisenberg[152891]:         "osd_id": 1,
Oct 11 04:36:04 compute-0 hardcore_heisenberg[152891]:         "osd_uuid": "235849fc-4683-43e5-9b6a-a0d6f8d1cee8",
Oct 11 04:36:04 compute-0 hardcore_heisenberg[152891]:         "type": "bluestore"
Oct 11 04:36:04 compute-0 hardcore_heisenberg[152891]:     },
Oct 11 04:36:04 compute-0 hardcore_heisenberg[152891]:     "29ed28f5-c2da-4c6f-bb64-dc7391248f4a": {
Oct 11 04:36:04 compute-0 hardcore_heisenberg[152891]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:36:04 compute-0 hardcore_heisenberg[152891]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:36:04 compute-0 hardcore_heisenberg[152891]:         "osd_id": 0,
Oct 11 04:36:04 compute-0 hardcore_heisenberg[152891]:         "osd_uuid": "29ed28f5-c2da-4c6f-bb64-dc7391248f4a",
Oct 11 04:36:04 compute-0 hardcore_heisenberg[152891]:         "type": "bluestore"
Oct 11 04:36:04 compute-0 hardcore_heisenberg[152891]:     },
Oct 11 04:36:04 compute-0 hardcore_heisenberg[152891]:     "30486846-2cc7-4787-8e36-0fb42ef328c5": {
Oct 11 04:36:04 compute-0 hardcore_heisenberg[152891]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:36:04 compute-0 hardcore_heisenberg[152891]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:36:04 compute-0 hardcore_heisenberg[152891]:         "osd_id": 2,
Oct 11 04:36:04 compute-0 hardcore_heisenberg[152891]:         "osd_uuid": "30486846-2cc7-4787-8e36-0fb42ef328c5",
Oct 11 04:36:04 compute-0 hardcore_heisenberg[152891]:         "type": "bluestore"
Oct 11 04:36:04 compute-0 hardcore_heisenberg[152891]:     }
Oct 11 04:36:04 compute-0 hardcore_heisenberg[152891]: }
Oct 11 04:36:04 compute-0 systemd[1]: libpod-1618063389ca8dd4c66ed789a31c03eaacc4e38b7f5387b94bc73cbfb9a1b3b2.scope: Deactivated successfully.
Oct 11 04:36:04 compute-0 podman[152875]: 2025-10-11 04:36:04.885201783 +0000 UTC m=+1.197312189 container died 1618063389ca8dd4c66ed789a31c03eaacc4e38b7f5387b94bc73cbfb9a1b3b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 11 04:36:04 compute-0 systemd[1]: libpod-1618063389ca8dd4c66ed789a31c03eaacc4e38b7f5387b94bc73cbfb9a1b3b2.scope: Consumed 1.040s CPU time.
Oct 11 04:36:04 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:36:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-7a3a81673f72cb8aec1fda845b0e8aa0cbde325409ff5e561edae8e9331cc1bd-merged.mount: Deactivated successfully.
Oct 11 04:36:04 compute-0 podman[152875]: 2025-10-11 04:36:04.946915168 +0000 UTC m=+1.259025564 container remove 1618063389ca8dd4c66ed789a31c03eaacc4e38b7f5387b94bc73cbfb9a1b3b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:36:04 compute-0 systemd[1]: libpod-conmon-1618063389ca8dd4c66ed789a31c03eaacc4e38b7f5387b94bc73cbfb9a1b3b2.scope: Deactivated successfully.
Oct 11 04:36:04 compute-0 sudo[152768]: pam_unix(sudo:session): session closed for user root
Oct 11 04:36:04 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:36:04 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:36:04 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:36:04 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:36:04 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev 4a548096-1b7b-4146-b371-947601a2a601 does not exist
Oct 11 04:36:04 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev 74d144f6-c33f-403f-949f-120ab287f6c6 does not exist
Oct 11 04:36:05 compute-0 sudo[152936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:36:05 compute-0 sudo[152936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:36:05 compute-0 sudo[152936]: pam_unix(sudo:session): session closed for user root
Oct 11 04:36:05 compute-0 sudo[152961]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 04:36:05 compute-0 sudo[152961]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:36:05 compute-0 sudo[152961]: pam_unix(sudo:session): session closed for user root
Oct 11 04:36:05 compute-0 ceph-mon[74243]: pgmap v409: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:36:05 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:36:05 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:36:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:36:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:36:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:36:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:36:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:36:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:36:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:36:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:36:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:36:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:36:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:36:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:36:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 11 04:36:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:36:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:36:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:36:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:36:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:36:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:36:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:36:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:36:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:36:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 04:36:06 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v410: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:36:07 compute-0 ceph-mon[74243]: pgmap v410: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:36:08 compute-0 sshd-session[152986]: Accepted publickey for zuul from 192.168.122.30 port 47124 ssh2: ECDSA SHA256:fsXfQ2jtOYwhi5zevC2AykH2PVPp1BPcbFOVNPoZIaQ
Oct 11 04:36:08 compute-0 systemd-logind[801]: New session 49 of user zuul.
Oct 11 04:36:08 compute-0 systemd[1]: Started Session 49 of User zuul.
Oct 11 04:36:08 compute-0 sshd-session[152986]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 11 04:36:08 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v411: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:36:09 compute-0 ceph-mon[74243]: pgmap v411: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:36:09 compute-0 python3.9[153139]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 04:36:09 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:36:10 compute-0 systemd[1]: Stopping User Manager for UID 0...
Oct 11 04:36:10 compute-0 systemd[151877]: Activating special unit Exit the Session...
Oct 11 04:36:10 compute-0 systemd[151877]: Stopped target Main User Target.
Oct 11 04:36:10 compute-0 systemd[151877]: Stopped target Basic System.
Oct 11 04:36:10 compute-0 systemd[151877]: Stopped target Paths.
Oct 11 04:36:10 compute-0 systemd[151877]: Stopped target Sockets.
Oct 11 04:36:10 compute-0 systemd[151877]: Stopped target Timers.
Oct 11 04:36:10 compute-0 systemd[151877]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 11 04:36:10 compute-0 systemd[151877]: Closed D-Bus User Message Bus Socket.
Oct 11 04:36:10 compute-0 systemd[151877]: Stopped Create User's Volatile Files and Directories.
Oct 11 04:36:10 compute-0 systemd[151877]: Removed slice User Application Slice.
Oct 11 04:36:10 compute-0 systemd[151877]: Reached target Shutdown.
Oct 11 04:36:10 compute-0 systemd[151877]: Finished Exit the Session.
Oct 11 04:36:10 compute-0 systemd[151877]: Reached target Exit the Session.
Oct 11 04:36:10 compute-0 systemd[1]: user@0.service: Deactivated successfully.
Oct 11 04:36:10 compute-0 systemd[1]: Stopped User Manager for UID 0.
Oct 11 04:36:10 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/0...
Oct 11 04:36:10 compute-0 systemd[1]: run-user-0.mount: Deactivated successfully.
Oct 11 04:36:10 compute-0 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Oct 11 04:36:10 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/0.
Oct 11 04:36:10 compute-0 systemd[1]: Removed slice User Slice of UID 0.
Oct 11 04:36:10 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v412: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:36:10 compute-0 sudo[153295]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-orsfqckdpqylishstlwiavsyvsvuybdj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157370.0572574-34-16992451287428/AnsiballZ_file.py'
Oct 11 04:36:10 compute-0 sudo[153295]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:36:10 compute-0 python3.9[153297]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:36:10 compute-0 sudo[153295]: pam_unix(sudo:session): session closed for user root
Oct 11 04:36:11 compute-0 sudo[153447]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlptoocfngzolpowgzednocslavkqgev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157371.0049095-34-87027006841781/AnsiballZ_file.py'
Oct 11 04:36:11 compute-0 sudo[153447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:36:11 compute-0 ceph-mon[74243]: pgmap v412: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:36:11 compute-0 python3.9[153449]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:36:11 compute-0 sudo[153447]: pam_unix(sudo:session): session closed for user root
Oct 11 04:36:12 compute-0 sudo[153599]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtujunfoxgvtqsyeootpfjsjztcpawjv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157371.701674-34-44020508536372/AnsiballZ_file.py'
Oct 11 04:36:12 compute-0 sudo[153599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:36:12 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v413: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:36:12 compute-0 python3.9[153601]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:36:12 compute-0 sudo[153599]: pam_unix(sudo:session): session closed for user root
Oct 11 04:36:12 compute-0 sudo[153751]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlcvqwysosszqtoenoytsozxzajahgim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157372.5077658-34-153075884178574/AnsiballZ_file.py'
Oct 11 04:36:12 compute-0 sudo[153751]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:36:13 compute-0 python3.9[153753]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:36:13 compute-0 sudo[153751]: pam_unix(sudo:session): session closed for user root
Oct 11 04:36:13 compute-0 ceph-mon[74243]: pgmap v413: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:36:13 compute-0 sudo[153903]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxytgjxnahmddcxtbgtjqquuxkwebfjl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157373.2880723-34-242459638074712/AnsiballZ_file.py'
Oct 11 04:36:13 compute-0 sudo[153903]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:36:13 compute-0 python3.9[153905]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:36:13 compute-0 sudo[153903]: pam_unix(sudo:session): session closed for user root
Oct 11 04:36:14 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v414: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:36:14 compute-0 python3.9[154055]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 04:36:14 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:36:15 compute-0 ceph-mon[74243]: pgmap v414: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:36:15 compute-0 sudo[154205]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krguwfngkdzdkdglbfkrcofjxhklfnjv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157375.1072783-78-216231716341795/AnsiballZ_seboolean.py'
Oct 11 04:36:15 compute-0 sudo[154205]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:36:15 compute-0 python3.9[154207]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Oct 11 04:36:16 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v415: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:36:16 compute-0 sudo[154205]: pam_unix(sudo:session): session closed for user root
Oct 11 04:36:17 compute-0 python3.9[154357]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:36:17 compute-0 ceph-mon[74243]: pgmap v415: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:36:18 compute-0 python3.9[154478]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760157376.6634567-86-271482189489724/.source follow=False _original_basename=haproxy.j2 checksum=95c62e64c8f82dd9393a560d1b052dc98d38f810 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:36:18 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v416: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:36:18 compute-0 python3.9[154629]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:36:19 compute-0 ceph-mon[74243]: pgmap v416: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:36:19 compute-0 python3.9[154750]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760157378.3421404-101-212970815230620/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:36:19 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:36:20 compute-0 sudo[154900]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlnhhxbunddcciubspsvkfpwwpizpzjz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157379.788636-118-243604953808651/AnsiballZ_setup.py'
Oct 11 04:36:20 compute-0 sudo[154900]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:36:20 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v417: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:36:20 compute-0 python3.9[154902]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 11 04:36:20 compute-0 sudo[154900]: pam_unix(sudo:session): session closed for user root
Oct 11 04:36:21 compute-0 sudo[154984]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frpcbfbkzdtjghjwudoxnqvxichdazaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157379.788636-118-243604953808651/AnsiballZ_dnf.py'
Oct 11 04:36:21 compute-0 sudo[154984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:36:21 compute-0 python3.9[154986]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 11 04:36:21 compute-0 ceph-mon[74243]: pgmap v417: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:36:22 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v418: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:36:22 compute-0 sudo[154984]: pam_unix(sudo:session): session closed for user root
Oct 11 04:36:23 compute-0 ceph-mon[74243]: pgmap v418: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:36:23 compute-0 sudo[155137]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytefuksjjkqdnrgicrdqizqvdayyweab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157382.8216827-130-29717047842187/AnsiballZ_systemd.py'
Oct 11 04:36:23 compute-0 sudo[155137]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:36:23 compute-0 python3.9[155139]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 11 04:36:23 compute-0 sudo[155137]: pam_unix(sudo:session): session closed for user root
Oct 11 04:36:24 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v419: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:36:24 compute-0 python3.9[155292]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:36:24 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:36:25 compute-0 python3.9[155413]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760157384.081189-138-268780848008100/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:36:25 compute-0 ceph-mon[74243]: pgmap v419: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:36:25 compute-0 python3.9[155563]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:36:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:36:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:36:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:36:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:36:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:36:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:36:26 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v420: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:36:26 compute-0 python3.9[155684]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760157385.2904847-138-134128957397240/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:36:27 compute-0 ceph-mon[74243]: pgmap v420: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:36:27 compute-0 python3.9[155834]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:36:28 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v421: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:36:28 compute-0 python3.9[155955]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760157387.1582992-182-164327671954501/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:36:29 compute-0 python3.9[156105]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:36:29 compute-0 ceph-mon[74243]: pgmap v421: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:36:29 compute-0 ovn_controller[151813]: 2025-10-11T04:36:29Z|00025|memory|INFO|16128 kB peak resident set size after 29.8 seconds
Oct 11 04:36:29 compute-0 ovn_controller[151813]: 2025-10-11T04:36:29Z|00026|memory|INFO|idl-cells-OVN_Southbound:239 idl-cells-Open_vSwitch:528 ofctrl_desired_flow_usage-KB:5 ofctrl_installed_flow_usage-KB:4 ofctrl_sb_flow_ref_usage-KB:2
Oct 11 04:36:29 compute-0 podman[156200]: 2025-10-11 04:36:29.736413562 +0000 UTC m=+0.132554241 container health_status 086abfbe8dbe9e4742c5d0e8540a7653c1c0a5c00ecda3b6e948040a718938cb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_controller, org.label-schema.build-date=20251009)
Oct 11 04:36:29 compute-0 python3.9[156241]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760157388.6209743-182-268237291345967/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:36:29 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:36:30 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v422: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:36:30 compute-0 python3.9[156402]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:36:31 compute-0 sudo[156554]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cohjncqsnkvzmzrqaetttiklvbcfkyyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157390.9522796-220-241709076058993/AnsiballZ_file.py'
Oct 11 04:36:31 compute-0 sudo[156554]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:36:31 compute-0 ceph-mon[74243]: pgmap v422: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:36:31 compute-0 python3.9[156556]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:36:31 compute-0 sudo[156554]: pam_unix(sudo:session): session closed for user root
Oct 11 04:36:32 compute-0 sudo[156706]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxgjfzrrtoivpiofvxcgkwpppmzmqexj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157391.8782597-228-26801406925686/AnsiballZ_stat.py'
Oct 11 04:36:32 compute-0 sudo[156706]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:36:32 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v423: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:36:32 compute-0 python3.9[156708]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:36:32 compute-0 sudo[156706]: pam_unix(sudo:session): session closed for user root
Oct 11 04:36:32 compute-0 sudo[156784]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzzahijrjjxrvqfsgytecckrmmbppysk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157391.8782597-228-26801406925686/AnsiballZ_file.py'
Oct 11 04:36:32 compute-0 sudo[156784]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:36:32 compute-0 python3.9[156786]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:36:32 compute-0 sudo[156784]: pam_unix(sudo:session): session closed for user root
Oct 11 04:36:33 compute-0 sudo[156936]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jpctperxtqpbmnwtfpvtvszqwtufghvb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157393.1219363-228-167832627669247/AnsiballZ_stat.py'
Oct 11 04:36:33 compute-0 sudo[156936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:36:33 compute-0 ceph-mon[74243]: pgmap v423: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:36:33 compute-0 python3.9[156938]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:36:33 compute-0 sudo[156936]: pam_unix(sudo:session): session closed for user root
Oct 11 04:36:33 compute-0 sudo[157014]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywpixvqlaypaoesqawjzqxvhnihkktbp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157393.1219363-228-167832627669247/AnsiballZ_file.py'
Oct 11 04:36:33 compute-0 sudo[157014]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:36:34 compute-0 python3.9[157016]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:36:34 compute-0 sudo[157014]: pam_unix(sudo:session): session closed for user root
Oct 11 04:36:34 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v424: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:36:34 compute-0 sudo[157166]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-couzvvzqvtomeuinkdyalsawrgfsscwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157394.346861-251-25266505904064/AnsiballZ_file.py'
Oct 11 04:36:34 compute-0 sudo[157166]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:36:34 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:36:34 compute-0 python3.9[157168]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:36:34 compute-0 sudo[157166]: pam_unix(sudo:session): session closed for user root
Oct 11 04:36:35 compute-0 sudo[157318]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmflatreijpvxpgzspphjnmuhnmnvhyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157395.091303-259-74237914173708/AnsiballZ_stat.py'
Oct 11 04:36:35 compute-0 sudo[157318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:36:35 compute-0 ceph-mon[74243]: pgmap v424: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:36:35 compute-0 python3.9[157320]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:36:35 compute-0 sudo[157318]: pam_unix(sudo:session): session closed for user root
Oct 11 04:36:35 compute-0 sudo[157396]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkhlfyywecppscmiumavwovazqxyaxek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157395.091303-259-74237914173708/AnsiballZ_file.py'
Oct 11 04:36:35 compute-0 sudo[157396]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:36:36 compute-0 python3.9[157398]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:36:36 compute-0 sudo[157396]: pam_unix(sudo:session): session closed for user root
Oct 11 04:36:36 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v425: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:36:36 compute-0 ceph-mon[74243]: pgmap v425: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:36:36 compute-0 sudo[157548]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdvossjwpxsvinnrkabhyhpjktncvixc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157396.413065-271-35223728822341/AnsiballZ_stat.py'
Oct 11 04:36:36 compute-0 sudo[157548]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:36:37 compute-0 python3.9[157550]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:36:37 compute-0 sudo[157548]: pam_unix(sudo:session): session closed for user root
Oct 11 04:36:37 compute-0 sudo[157626]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqjeuqjegoevfwnlyncqaviucflibvdq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157396.413065-271-35223728822341/AnsiballZ_file.py'
Oct 11 04:36:37 compute-0 sudo[157626]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:36:37 compute-0 python3.9[157628]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:36:37 compute-0 sudo[157626]: pam_unix(sudo:session): session closed for user root
Oct 11 04:36:38 compute-0 sudo[157778]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdkusthjztolvgszydmrgzdfgjpsqsok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157397.8215566-283-230818476104044/AnsiballZ_systemd.py'
Oct 11 04:36:38 compute-0 sudo[157778]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:36:38 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v426: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:36:38 compute-0 python3.9[157780]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:36:38 compute-0 systemd[1]: Reloading.
Oct 11 04:36:38 compute-0 systemd-rc-local-generator[157810]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:36:38 compute-0 systemd-sysv-generator[157814]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:36:38 compute-0 sudo[157778]: pam_unix(sudo:session): session closed for user root
Oct 11 04:36:39 compute-0 ceph-mon[74243]: pgmap v426: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:36:39 compute-0 sudo[157967]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opqaicuqwxhrbzmgbdcxjdsrxmsmtkos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157399.2203755-291-187597717450350/AnsiballZ_stat.py'
Oct 11 04:36:39 compute-0 sudo[157967]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:36:39 compute-0 python3.9[157969]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:36:39 compute-0 sudo[157967]: pam_unix(sudo:session): session closed for user root
Oct 11 04:36:39 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:36:40 compute-0 sudo[158045]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cccuhzfcyipegljzgiyflgxpujufgsmi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157399.2203755-291-187597717450350/AnsiballZ_file.py'
Oct 11 04:36:40 compute-0 sudo[158045]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:36:40 compute-0 python3.9[158047]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:36:40 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v427: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:36:40 compute-0 sudo[158045]: pam_unix(sudo:session): session closed for user root
Oct 11 04:36:40 compute-0 sudo[158197]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zajtbjatibrjfejsoklixwopfsdvwsyp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157400.4859734-303-261787564291895/AnsiballZ_stat.py'
Oct 11 04:36:40 compute-0 sudo[158197]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:36:40 compute-0 python3.9[158199]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:36:40 compute-0 sudo[158197]: pam_unix(sudo:session): session closed for user root
Oct 11 04:36:41 compute-0 sudo[158275]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqtrpcuuowvprvgzqeozdpumrdwfhdpy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157400.4859734-303-261787564291895/AnsiballZ_file.py'
Oct 11 04:36:41 compute-0 sudo[158275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:36:41 compute-0 ceph-mon[74243]: pgmap v427: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:36:41 compute-0 python3.9[158277]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:36:41 compute-0 sudo[158275]: pam_unix(sudo:session): session closed for user root
Oct 11 04:36:41 compute-0 sudo[158427]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lukweqjszulsjcwrotzrekwxbezufwca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157401.562037-315-267395567927599/AnsiballZ_systemd.py'
Oct 11 04:36:41 compute-0 sudo[158427]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:36:42 compute-0 python3.9[158429]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:36:42 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v428: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:36:42 compute-0 systemd[1]: Reloading.
Oct 11 04:36:42 compute-0 systemd-sysv-generator[158455]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:36:42 compute-0 systemd-rc-local-generator[158451]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:36:42 compute-0 systemd[1]: Starting Create netns directory...
Oct 11 04:36:42 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 11 04:36:42 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 11 04:36:42 compute-0 systemd[1]: Finished Create netns directory.
Oct 11 04:36:42 compute-0 sudo[158427]: pam_unix(sudo:session): session closed for user root
Oct 11 04:36:43 compute-0 sudo[158619]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmanesqiaczlbqskmgecjqorgbgizmif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157402.9595754-325-235000917181292/AnsiballZ_file.py'
Oct 11 04:36:43 compute-0 sudo[158619]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:36:43 compute-0 ceph-mon[74243]: pgmap v428: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:36:43 compute-0 python3.9[158621]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:36:43 compute-0 sudo[158619]: pam_unix(sudo:session): session closed for user root
Oct 11 04:36:44 compute-0 sudo[158771]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvawvjpharkiybjljlpgsaxacrsqszbi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157403.7284787-333-22143175156196/AnsiballZ_stat.py'
Oct 11 04:36:44 compute-0 sudo[158771]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:36:44 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v429: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:36:44 compute-0 python3.9[158773]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:36:44 compute-0 sudo[158771]: pam_unix(sudo:session): session closed for user root
Oct 11 04:36:44 compute-0 sudo[158894]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnwzzsfptiojcfrnsxnaktoingmspokx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157403.7284787-333-22143175156196/AnsiballZ_copy.py'
Oct 11 04:36:44 compute-0 sudo[158894]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:36:44 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:36:44 compute-0 python3.9[158896]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760157403.7284787-333-22143175156196/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:36:44 compute-0 sudo[158894]: pam_unix(sudo:session): session closed for user root
Oct 11 04:36:45 compute-0 ceph-mon[74243]: pgmap v429: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:36:45 compute-0 sudo[159046]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-overcuzlzdnomfkuxtfjtlhumzmgqich ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157405.4117029-350-103282576989532/AnsiballZ_file.py'
Oct 11 04:36:45 compute-0 sudo[159046]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:36:46 compute-0 python3.9[159048]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:36:46 compute-0 sudo[159046]: pam_unix(sudo:session): session closed for user root
Oct 11 04:36:46 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v430: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:36:46 compute-0 sudo[159198]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqagdkufsdvtstgogjvdhgjzkxhhswrh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157406.2696602-358-90771106944804/AnsiballZ_stat.py'
Oct 11 04:36:46 compute-0 sudo[159198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:36:46 compute-0 python3.9[159200]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:36:46 compute-0 sudo[159198]: pam_unix(sudo:session): session closed for user root
Oct 11 04:36:47 compute-0 sudo[159321]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-novjmzoxzbowugfogzxnbeosnnxrtpab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157406.2696602-358-90771106944804/AnsiballZ_copy.py'
Oct 11 04:36:47 compute-0 sudo[159321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:36:47 compute-0 ceph-mon[74243]: pgmap v430: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:36:47 compute-0 python3.9[159323]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760157406.2696602-358-90771106944804/.source.json _original_basename=.4wkrdc3d follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:36:47 compute-0 sudo[159321]: pam_unix(sudo:session): session closed for user root
Oct 11 04:36:48 compute-0 sudo[159473]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgiznhapfkgmeiohfatwsusiuogiqzmh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157407.7005801-373-248457436256186/AnsiballZ_file.py'
Oct 11 04:36:48 compute-0 sudo[159473]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:36:48 compute-0 python3.9[159475]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:36:48 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v431: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:36:48 compute-0 sudo[159473]: pam_unix(sudo:session): session closed for user root
Oct 11 04:36:48 compute-0 sudo[159625]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwttgambuguhwhddtktimfsnmxrmchjh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157408.5158813-381-188775075935595/AnsiballZ_stat.py'
Oct 11 04:36:48 compute-0 sudo[159625]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:36:49 compute-0 sudo[159625]: pam_unix(sudo:session): session closed for user root
Oct 11 04:36:49 compute-0 ceph-mon[74243]: pgmap v431: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:36:49 compute-0 sudo[159748]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-upftsnwoxsvklvznlwpxxtwtbrvvbduu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157408.5158813-381-188775075935595/AnsiballZ_copy.py'
Oct 11 04:36:49 compute-0 sudo[159748]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:36:49 compute-0 sudo[159748]: pam_unix(sudo:session): session closed for user root
Oct 11 04:36:49 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:36:50 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v432: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:36:50 compute-0 sudo[159900]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmdhpfimdvlctcukwpjwwvzfgnbcykec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157410.1768057-398-172717720852947/AnsiballZ_container_config_data.py'
Oct 11 04:36:50 compute-0 sudo[159900]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:36:50 compute-0 ceph-osd[87458]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 04:36:50 compute-0 ceph-osd[87458]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 5531 writes, 23K keys, 5531 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 5531 writes, 838 syncs, 6.60 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5531 writes, 23K keys, 5531 commit groups, 1.0 writes per commit group, ingest: 18.48 MB, 0.03 MB/s
                                           Interval WAL: 5531 writes, 838 syncs, 6.60 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55660e36f4b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55660e36f4b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55660e36f4b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55660e36f4b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55660e36f4b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55660e36f4b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55660e36f4b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55660e36f350#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55660e36f350#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55660e36f350#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55660e36f4b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55660e36f4b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 11 04:36:50 compute-0 python3.9[159902]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Oct 11 04:36:50 compute-0 sudo[159900]: pam_unix(sudo:session): session closed for user root
Oct 11 04:36:51 compute-0 ceph-mon[74243]: pgmap v432: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:36:51 compute-0 sudo[160052]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vanijzksirpppjytectcypvtyhhuluca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157411.24234-407-72307739917069/AnsiballZ_container_config_hash.py'
Oct 11 04:36:51 compute-0 sudo[160052]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:36:52 compute-0 python3.9[160054]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 11 04:36:52 compute-0 sudo[160052]: pam_unix(sudo:session): session closed for user root
Oct 11 04:36:52 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v433: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:36:52 compute-0 sudo[160204]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhihgqaqpiiurkbmzvoybxemouvptans ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157412.285625-416-194557417816406/AnsiballZ_podman_container_info.py'
Oct 11 04:36:52 compute-0 sudo[160204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:36:53 compute-0 python3.9[160206]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 11 04:36:53 compute-0 sudo[160204]: pam_unix(sudo:session): session closed for user root
Oct 11 04:36:53 compute-0 ceph-mon[74243]: pgmap v433: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:36:54 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v434: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:36:54 compute-0 sudo[160381]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgeyabsdjmnvksxabkavyraawepgungv ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1760157413.984346-429-108148413797196/AnsiballZ_edpm_container_manage.py'
Oct 11 04:36:54 compute-0 sudo[160381]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:36:54 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:36:54 compute-0 python3[160383]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 11 04:36:55 compute-0 ceph-mon[74243]: pgmap v434: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:36:55 compute-0 ceph-osd[88467]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 04:36:55 compute-0 ceph-osd[88467]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 6872 writes, 28K keys, 6872 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 6872 writes, 1211 syncs, 5.67 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 6872 writes, 28K keys, 6872 commit groups, 1.0 writes per commit group, ingest: 19.51 MB, 0.03 MB/s
                                           Interval WAL: 6872 writes, 1211 syncs, 5.67 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5644640971f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5644640971f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5644640971f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5644640971f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5644640971f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5644640971f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5644640971f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564464097090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564464097090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564464097090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5644640971f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5644640971f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 11 04:36:56 compute-0 ceph-mgr[74542]: [balancer INFO root] Optimize plan auto_2025-10-11_04:36:56
Oct 11 04:36:56 compute-0 ceph-mgr[74542]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:36:56 compute-0 ceph-mgr[74542]: [balancer INFO root] do_upmap
Oct 11 04:36:56 compute-0 ceph-mgr[74542]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.meta', 'default.rgw.log', 'images', 'default.rgw.control', 'default.rgw.meta', '.rgw.root', 'vms', 'backups', '.mgr', 'cephfs.cephfs.data']
Oct 11 04:36:56 compute-0 ceph-mgr[74542]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:36:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:36:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:36:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:36:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:36:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:36:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:36:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:36:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:36:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:36:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:36:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:36:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:36:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:36:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:36:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:36:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:36:56 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v435: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:36:57 compute-0 ceph-mon[74243]: pgmap v435: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:36:58 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v436: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:36:59 compute-0 ceph-mon[74243]: pgmap v436: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:36:59 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:37:00 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v437: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:37:00 compute-0 ceph-mon[74243]: pgmap v437: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:37:01 compute-0 anacron[20485]: Job `cron.weekly' started
Oct 11 04:37:01 compute-0 anacron[20485]: Job `cron.weekly' terminated
Oct 11 04:37:01 compute-0 ceph-osd[89565]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 04:37:01 compute-0 ceph-osd[89565]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 5376 writes, 23K keys, 5376 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 5376 writes, 765 syncs, 7.03 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5376 writes, 23K keys, 5376 commit groups, 1.0 writes per commit group, ingest: 18.24 MB, 0.03 MB/s
                                           Interval WAL: 5376 writes, 765 syncs, 7.03 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56328d19f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56328d19f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56328d19f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56328d19f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56328d19f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56328d19f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56328d19f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56328d19f090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56328d19f090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56328d19f090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.009       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.009       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.009       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56328d19f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56328d19f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
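[annotation] The indented block above is a periodic RocksDB statistics dump (column families [L] and [P]), apparently emitted by one of the Ceph daemons' embedded RocksDB instances; the 600 s uptime is consistent with RocksDB's usual 600-second stats dump period, and "occupancy: 18446744073709551615" is 2^64-1, which reads as an unset sentinel rather than a real count. A minimal sketch of pulling the headline figures back out of such a dump once it has been captured to plain text (the emitting unit name is not shown in this excerpt and is left as a placeholder):

    import re

    # Sketch only: `text` is assumed to hold one captured stats dump, e.g. from
    # `journalctl -u <ceph unit> > dump.txt` (unit name not shown in this excerpt).
    def summarize_rocksdb_dump(text):
        uptime = re.search(r"Uptime\(secs\): ([\d.]+) total, ([\d.]+) interval", text)
        comp = re.search(r"Cumulative compaction: ([\d.]+) GB write, .*?([\d.]+) GB read", text)
        return {
            "uptime_total_s": float(uptime.group(1)) if uptime else None,
            "uptime_interval_s": float(uptime.group(2)) if uptime else None,
            "cumulative_write_gb": float(comp.group(1)) if comp else None,
            "cumulative_read_gb": float(comp.group(2)) if comp else None,
            "stall_lines": re.findall(r"Stalls\(count\):.*", text),
        }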
Oct 11 04:37:01 compute-0 podman[160462]: 2025-10-11 04:37:01.696313167 +0000 UTC m=+1.346850636 container health_status 086abfbe8dbe9e4742c5d0e8540a7653c1c0a5c00ecda3b6e948040a718938cb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 11 04:37:02 compute-0 ceph-mgr[74542]: [devicehealth INFO root] Check health
Oct 11 04:37:02 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v438: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:37:02 compute-0 podman[160396]: 2025-10-11 04:37:02.935920155 +0000 UTC m=+7.961264999 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 11 04:37:03 compute-0 podman[160544]: 2025-10-11 04:37:03.096899047 +0000 UTC m=+0.066167744 container create 7d738271425699243ade5f883e5c9759d51b96fd0ce562173990a66d2fbaffdf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct 11 04:37:03 compute-0 podman[160544]: 2025-10-11 04:37:03.063082562 +0000 UTC m=+0.032351319 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 11 04:37:03 compute-0 python3[160383]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
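[annotation] The ansible-edpm_container_manage entry above records the full `podman create` invocation for ovn_metadata_agent on a single line. A small sketch (not part of the deployment tooling) for pulling the bind mounts and environment settings back out of that line with Python's shlex; `cmd_line` is assumed to be the text copied from after "PODMAN-CONTAINER-DEBUG: ":

    import shlex

    def summarize_podman_create(cmd_line):
        # Tokenize the logged command; --volume and --env values contain no spaces,
        # so they survive a plain split. The config_data label embeds unquoted
        # spaces and will not round-trip this way.
        tokens = shlex.split(cmd_line)
        def values_of(flag):
            return [tokens[i + 1] for i, t in enumerate(tokens[:-1]) if t == flag]
        return {
            "name": values_of("--name"),
            "env": values_of("--env"),
            "volumes": values_of("--volume"),
            "image": tokens[-1],  # the image reference is the final argument
        }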
Oct 11 04:37:03 compute-0 sudo[160381]: pam_unix(sudo:session): session closed for user root
Oct 11 04:37:03 compute-0 ceph-mon[74243]: pgmap v438: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:37:03 compute-0 sudo[160733]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hyxzojpqvunsypkstvggszwmnlattkiz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157423.427025-437-132848214388929/AnsiballZ_stat.py'
Oct 11 04:37:03 compute-0 sudo[160733]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:37:03 compute-0 python3.9[160735]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:37:03 compute-0 sudo[160733]: pam_unix(sudo:session): session closed for user root
Oct 11 04:37:04 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v439: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:37:04 compute-0 sudo[160887]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwvjimesbkkpninuibqyvjhcqdnabuhv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157424.2416785-446-94040223566312/AnsiballZ_file.py'
Oct 11 04:37:04 compute-0 sudo[160887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:37:04 compute-0 python3.9[160889]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:37:04 compute-0 sudo[160887]: pam_unix(sudo:session): session closed for user root
Oct 11 04:37:04 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:37:05 compute-0 sudo[160963]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwwkamgluogbzlnultbwznlrbqcmgmpx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157424.2416785-446-94040223566312/AnsiballZ_stat.py'
Oct 11 04:37:05 compute-0 sudo[160963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:37:05 compute-0 sudo[160966]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:37:05 compute-0 sudo[160966]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:37:05 compute-0 sudo[160966]: pam_unix(sudo:session): session closed for user root
Oct 11 04:37:05 compute-0 python3.9[160965]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:37:05 compute-0 sudo[160963]: pam_unix(sudo:session): session closed for user root
Oct 11 04:37:05 compute-0 sudo[160991]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:37:05 compute-0 sudo[160991]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:37:05 compute-0 sudo[160991]: pam_unix(sudo:session): session closed for user root
Oct 11 04:37:05 compute-0 sudo[161020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:37:05 compute-0 sudo[161020]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:37:05 compute-0 sudo[161020]: pam_unix(sudo:session): session closed for user root
Oct 11 04:37:05 compute-0 ceph-mon[74243]: pgmap v439: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:37:05 compute-0 sudo[161084]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Oct 11 04:37:05 compute-0 sudo[161084]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:37:05 compute-0 sudo[161084]: pam_unix(sudo:session): session closed for user root
Oct 11 04:37:05 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:37:05 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:37:05 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:37:05 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:37:05 compute-0 sudo[161246]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzmxuzkjsxjscygaxkvkloesttmlyxuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157425.3364463-446-208932339280308/AnsiballZ_copy.py'
Oct 11 04:37:05 compute-0 sudo[161246]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:37:05 compute-0 sudo[161224]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:37:05 compute-0 sudo[161224]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:37:05 compute-0 sudo[161224]: pam_unix(sudo:session): session closed for user root
Oct 11 04:37:06 compute-0 sudo[161262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:37:06 compute-0 sudo[161262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:37:06 compute-0 sudo[161262]: pam_unix(sudo:session): session closed for user root
Oct 11 04:37:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:37:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:37:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:37:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:37:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:37:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:37:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:37:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:37:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:37:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:37:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:37:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:37:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 11 04:37:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:37:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:37:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:37:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:37:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:37:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:37:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:37:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:37:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:37:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
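[annotation] The pg_autoscaler lines above can be reproduced arithmetically: each pool's "pg target" equals its fraction of raw capacity times its bias times an overall PG budget, which for the figures logged here works out to 300 (consistent with mon_target_pg_per_osd=100 and three OSDs on this 60 GiB cluster; the 300 is an inference, not something the log states). The tiny results are then quantized, which is why every pool stays at its current pg_num. A quick check against the logged values:

    # Sketch reproducing the autoscaler arithmetic from the log lines above.
    # PG_BUDGET = 300 is an inferred value (mon_target_pg_per_osd=100 * 3 OSDs).
    PG_BUDGET = 300
    pools = {
        ".mgr":               (7.185749983720779e-06, 1.0, 0.0021557249951162337),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0, 0.0006104707950771635),
        ".rgw.root":          (2.5436283128215145e-07, 1.0, 7.630884938464544e-05),
        "default.rgw.log":    (2.1620840658982875e-06, 1.0, 0.0006486252197694863),
        "default.rgw.meta":   (1.2718141564107572e-07, 4.0, 0.00015261769876929088),
    }
    for name, (capacity_ratio, bias, logged_target) in pools.items():
        computed = capacity_ratio * bias * PG_BUDGET
        # the two columns agree to within floating-point noise
        print(f"{name:20s} computed={computed:.10g} logged={logged_target:.10g}")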
Oct 11 04:37:06 compute-0 sudo[161287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:37:06 compute-0 sudo[161287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:37:06 compute-0 sudo[161287]: pam_unix(sudo:session): session closed for user root
Oct 11 04:37:06 compute-0 python3.9[161259]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760157425.3364463-446-208932339280308/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:37:06 compute-0 sudo[161246]: pam_unix(sudo:session): session closed for user root
Oct 11 04:37:06 compute-0 sudo[161312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 04:37:06 compute-0 sudo[161312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:37:06 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v440: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:37:06 compute-0 sudo[161425]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqqaiiipuzakwdaisnffocndiohkmbow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157425.3364463-446-208932339280308/AnsiballZ_systemd.py'
Oct 11 04:37:06 compute-0 sudo[161425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:37:06 compute-0 sudo[161312]: pam_unix(sudo:session): session closed for user root
Oct 11 04:37:06 compute-0 python3.9[161427]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 11 04:37:06 compute-0 systemd[1]: Reloading.
Oct 11 04:37:06 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:37:06 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:37:06 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:37:06 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:37:06 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:37:06 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:37:06 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev 0b497a9f-2f88-414a-b78c-46d1798411b2 does not exist
Oct 11 04:37:06 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev 1f180b1e-d15b-4d05-9c7f-acd4ae518288 does not exist
Oct 11 04:37:06 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev ef2af8f0-3520-4300-a0de-f36a7f0bc07d does not exist
Oct 11 04:37:06 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:37:06 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:37:06 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:37:06 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:37:06 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:37:06 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:37:06 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:37:06 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:37:06 compute-0 ceph-mon[74243]: pgmap v440: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:37:06 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:37:06 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:37:06 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:37:06 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:37:06 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:37:06 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:37:06 compute-0 systemd-sysv-generator[161489]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:37:06 compute-0 systemd-rc-local-generator[161485]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:37:07 compute-0 sudo[161446]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:37:07 compute-0 sudo[161446]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:37:07 compute-0 sudo[161446]: pam_unix(sudo:session): session closed for user root
Oct 11 04:37:07 compute-0 sudo[161425]: pam_unix(sudo:session): session closed for user root
Oct 11 04:37:07 compute-0 sudo[161506]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:37:07 compute-0 sudo[161506]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:37:07 compute-0 sudo[161506]: pam_unix(sudo:session): session closed for user root
Oct 11 04:37:07 compute-0 sudo[161537]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:37:07 compute-0 sudo[161537]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:37:07 compute-0 sudo[161537]: pam_unix(sudo:session): session closed for user root
Oct 11 04:37:07 compute-0 sudo[161579]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 04:37:07 compute-0 sudo[161579]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:37:07 compute-0 sudo[161654]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvdzznuyhojnujmogxtlrstirqsdqknm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157425.3364463-446-208932339280308/AnsiballZ_systemd.py'
Oct 11 04:37:07 compute-0 sudo[161654]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:37:07 compute-0 podman[161697]: 2025-10-11 04:37:07.575574355 +0000 UTC m=+0.070365006 container create 0f351962fbd8c36cf8c283df21b3f8afd9820fe2a7441ee8e7d976aac47ef6a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_heisenberg, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 11 04:37:07 compute-0 systemd[1]: Started libpod-conmon-0f351962fbd8c36cf8c283df21b3f8afd9820fe2a7441ee8e7d976aac47ef6a4.scope.
Oct 11 04:37:07 compute-0 python3.9[161656]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:37:07 compute-0 podman[161697]: 2025-10-11 04:37:07.543549735 +0000 UTC m=+0.038340446 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:37:07 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:37:07 compute-0 podman[161697]: 2025-10-11 04:37:07.679194812 +0000 UTC m=+0.173985423 container init 0f351962fbd8c36cf8c283df21b3f8afd9820fe2a7441ee8e7d976aac47ef6a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_heisenberg, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:37:07 compute-0 podman[161697]: 2025-10-11 04:37:07.688208734 +0000 UTC m=+0.182999385 container start 0f351962fbd8c36cf8c283df21b3f8afd9820fe2a7441ee8e7d976aac47ef6a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_heisenberg, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:37:07 compute-0 podman[161697]: 2025-10-11 04:37:07.692413898 +0000 UTC m=+0.187204539 container attach 0f351962fbd8c36cf8c283df21b3f8afd9820fe2a7441ee8e7d976aac47ef6a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 11 04:37:07 compute-0 nervous_heisenberg[161713]: 167 167
Oct 11 04:37:07 compute-0 systemd[1]: libpod-0f351962fbd8c36cf8c283df21b3f8afd9820fe2a7441ee8e7d976aac47ef6a4.scope: Deactivated successfully.
Oct 11 04:37:07 compute-0 conmon[161713]: conmon 0f351962fbd8c36cf8c2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0f351962fbd8c36cf8c283df21b3f8afd9820fe2a7441ee8e7d976aac47ef6a4.scope/container/memory.events
Oct 11 04:37:07 compute-0 podman[161697]: 2025-10-11 04:37:07.697697028 +0000 UTC m=+0.192487689 container died 0f351962fbd8c36cf8c283df21b3f8afd9820fe2a7441ee8e7d976aac47ef6a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_heisenberg, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:37:07 compute-0 systemd[1]: Reloading.
Oct 11 04:37:07 compute-0 systemd-rc-local-generator[161759]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:37:07 compute-0 systemd-sysv-generator[161766]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:37:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-c2258f6c8a21a2c45e3600d6fc8eb3399cfca2ac093b3140f8596c320c6c88b6-merged.mount: Deactivated successfully.
Oct 11 04:37:08 compute-0 podman[161697]: 2025-10-11 04:37:08.012498463 +0000 UTC m=+0.507289114 container remove 0f351962fbd8c36cf8c283df21b3f8afd9820fe2a7441ee8e7d976aac47ef6a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True)
Oct 11 04:37:08 compute-0 systemd[1]: Starting ovn_metadata_agent container...
Oct 11 04:37:08 compute-0 systemd[1]: libpod-conmon-0f351962fbd8c36cf8c283df21b3f8afd9820fe2a7441ee8e7d976aac47ef6a4.scope: Deactivated successfully.
Oct 11 04:37:08 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:37:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1322b1ccb772eadc64b6ed119670472a133fd0732eeff47018f5593c43c308c/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Oct 11 04:37:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1322b1ccb772eadc64b6ed119670472a133fd0732eeff47018f5593c43c308c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 04:37:08 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 7d738271425699243ade5f883e5c9759d51b96fd0ce562173990a66d2fbaffdf.
Oct 11 04:37:08 compute-0 podman[161774]: 2025-10-11 04:37:08.231525206 +0000 UTC m=+0.180375800 container init 7d738271425699243ade5f883e5c9759d51b96fd0ce562173990a66d2fbaffdf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent)
Oct 11 04:37:08 compute-0 ovn_metadata_agent[161792]: + sudo -E kolla_set_configs
Oct 11 04:37:08 compute-0 podman[161774]: 2025-10-11 04:37:08.260562003 +0000 UTC m=+0.209412627 container start 7d738271425699243ade5f883e5c9759d51b96fd0ce562173990a66d2fbaffdf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Oct 11 04:37:08 compute-0 podman[161800]: 2025-10-11 04:37:08.278656979 +0000 UTC m=+0.084464144 container create 9993fe38165a7091683d9cc3340748ceccbd61fd72294732bf326ce0a28cfc75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_cerf, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 04:37:08 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v441: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:37:08 compute-0 edpm-start-podman-container[161774]: ovn_metadata_agent
Oct 11 04:37:08 compute-0 podman[161800]: 2025-10-11 04:37:08.246428714 +0000 UTC m=+0.052235889 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:37:08 compute-0 podman[161812]: 2025-10-11 04:37:08.353326221 +0000 UTC m=+0.080412364 container health_status 7d738271425699243ade5f883e5c9759d51b96fd0ce562173990a66d2fbaffdf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 11 04:37:08 compute-0 systemd[1]: Started libpod-conmon-9993fe38165a7091683d9cc3340748ceccbd61fd72294732bf326ce0a28cfc75.scope.
Oct 11 04:37:08 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:37:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2d78352c7f93ec7e44a9a20b44897dc7bf757c38e72607ef5c5bb74dbd3fdb9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:37:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2d78352c7f93ec7e44a9a20b44897dc7bf757c38e72607ef5c5bb74dbd3fdb9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:37:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2d78352c7f93ec7e44a9a20b44897dc7bf757c38e72607ef5c5bb74dbd3fdb9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:37:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2d78352c7f93ec7e44a9a20b44897dc7bf757c38e72607ef5c5bb74dbd3fdb9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:37:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2d78352c7f93ec7e44a9a20b44897dc7bf757c38e72607ef5c5bb74dbd3fdb9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:37:08 compute-0 podman[161800]: 2025-10-11 04:37:08.477240988 +0000 UTC m=+0.283048143 container init 9993fe38165a7091683d9cc3340748ceccbd61fd72294732bf326ce0a28cfc75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_cerf, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 11 04:37:08 compute-0 podman[161800]: 2025-10-11 04:37:08.490595157 +0000 UTC m=+0.296402302 container start 9993fe38165a7091683d9cc3340748ceccbd61fd72294732bf326ce0a28cfc75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_cerf, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 11 04:37:08 compute-0 podman[161800]: 2025-10-11 04:37:08.583732735 +0000 UTC m=+0.389539950 container attach 9993fe38165a7091683d9cc3340748ceccbd61fd72294732bf326ce0a28cfc75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:37:08 compute-0 edpm-start-podman-container[161773]: Creating additional drop-in dependency for "ovn_metadata_agent" (7d738271425699243ade5f883e5c9759d51b96fd0ce562173990a66d2fbaffdf)
Oct 11 04:37:08 compute-0 ovn_metadata_agent[161792]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 11 04:37:08 compute-0 ovn_metadata_agent[161792]: INFO:__main__:Validating config file
Oct 11 04:37:08 compute-0 ovn_metadata_agent[161792]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 11 04:37:08 compute-0 ovn_metadata_agent[161792]: INFO:__main__:Copying service configuration files
Oct 11 04:37:08 compute-0 ovn_metadata_agent[161792]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Oct 11 04:37:08 compute-0 ovn_metadata_agent[161792]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Oct 11 04:37:08 compute-0 ovn_metadata_agent[161792]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Oct 11 04:37:08 compute-0 ovn_metadata_agent[161792]: INFO:__main__:Writing out command to execute
Oct 11 04:37:08 compute-0 ovn_metadata_agent[161792]: INFO:__main__:Setting permission for /var/lib/neutron
Oct 11 04:37:08 compute-0 ovn_metadata_agent[161792]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Oct 11 04:37:08 compute-0 ovn_metadata_agent[161792]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Oct 11 04:37:08 compute-0 ovn_metadata_agent[161792]: INFO:__main__:Setting permission for /var/lib/neutron/external
Oct 11 04:37:08 compute-0 ovn_metadata_agent[161792]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Oct 11 04:37:08 compute-0 ovn_metadata_agent[161792]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Oct 11 04:37:08 compute-0 ovn_metadata_agent[161792]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Oct 11 04:37:08 compute-0 ovn_metadata_agent[161792]: ++ cat /run_command
Oct 11 04:37:08 compute-0 ovn_metadata_agent[161792]: + CMD=neutron-ovn-metadata-agent
Oct 11 04:37:08 compute-0 ovn_metadata_agent[161792]: + ARGS=
Oct 11 04:37:08 compute-0 ovn_metadata_agent[161792]: + sudo kolla_copy_cacerts
Oct 11 04:37:08 compute-0 systemd[1]: Reloading.
Oct 11 04:37:08 compute-0 ovn_metadata_agent[161792]: + [[ ! -n '' ]]
Oct 11 04:37:08 compute-0 ovn_metadata_agent[161792]: + . kolla_extend_start
Oct 11 04:37:08 compute-0 ovn_metadata_agent[161792]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Oct 11 04:37:08 compute-0 ovn_metadata_agent[161792]: Running command: 'neutron-ovn-metadata-agent'
Oct 11 04:37:08 compute-0 ovn_metadata_agent[161792]: + umask 0022
Oct 11 04:37:08 compute-0 ovn_metadata_agent[161792]: + exec neutron-ovn-metadata-agent
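[annotation] The kolla wrapper trace above (config.json loaded and validated, rootwrap.conf copied, permissions set, /run_command read, then exec) is driven by the file mounted at /var/lib/kolla/config_files/ovn_metadata_agent.json. That file is not reproduced in the log; the following is only a plausible shape inferred from the logged steps, rendered as a Python literal to stay consistent with the other sketches, with owner/perm values as illustrative assumptions:

    # Illustrative sketch of the kolla config consumed by kolla_set_configs above.
    # Only the source/dest paths and the command are taken from the logged steps;
    # owner and perm values are assumptions.
    KOLLA_CONFIG = {
        "command": "neutron-ovn-metadata-agent",
        "config_files": [
            {
                "source": "/etc/neutron.conf.d/01-rootwrap.conf",
                "dest": "/etc/neutron/rootwrap.conf",
                "owner": "neutron",   # assumption
                "perm": "0600",       # assumption
            },
        ],
        "permissions": [
            {"path": "/var/lib/neutron", "owner": "neutron:neutron", "recurse": True},
        ],
    }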
Oct 11 04:37:08 compute-0 systemd-rc-local-generator[161891]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:37:08 compute-0 systemd-sysv-generator[161896]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:37:09 compute-0 systemd[1]: Started ovn_metadata_agent container.
Oct 11 04:37:09 compute-0 sudo[161654]: pam_unix(sudo:session): session closed for user root
Oct 11 04:37:09 compute-0 ceph-mon[74243]: pgmap v441: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:37:09 compute-0 sshd-session[152989]: Connection closed by 192.168.122.30 port 47124
Oct 11 04:37:09 compute-0 sshd-session[152986]: pam_unix(sshd:session): session closed for user zuul
Oct 11 04:37:09 compute-0 systemd-logind[801]: Session 49 logged out. Waiting for processes to exit.
Oct 11 04:37:09 compute-0 systemd[1]: session-49.scope: Deactivated successfully.
Oct 11 04:37:09 compute-0 systemd[1]: session-49.scope: Consumed 58.063s CPU time.
Oct 11 04:37:09 compute-0 systemd-logind[801]: Removed session 49.
Oct 11 04:37:09 compute-0 clever_cerf[161854]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:37:09 compute-0 clever_cerf[161854]: --> relative data size: 1.0
Oct 11 04:37:09 compute-0 clever_cerf[161854]: --> All data devices are unavailable
Oct 11 04:37:09 compute-0 systemd[1]: libpod-9993fe38165a7091683d9cc3340748ceccbd61fd72294732bf326ce0a28cfc75.scope: Deactivated successfully.
Oct 11 04:37:09 compute-0 systemd[1]: libpod-9993fe38165a7091683d9cc3340748ceccbd61fd72294732bf326ce0a28cfc75.scope: Consumed 1.089s CPU time.
Oct 11 04:37:09 compute-0 conmon[161854]: conmon 9993fe38165a7091683d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9993fe38165a7091683d9cc3340748ceccbd61fd72294732bf326ce0a28cfc75.scope/container/memory.events
Oct 11 04:37:09 compute-0 podman[161952]: 2025-10-11 04:37:09.718567089 +0000 UTC m=+0.048460986 container died 9993fe38165a7091683d9cc3340748ceccbd61fd72294732bf326ce0a28cfc75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:37:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-c2d78352c7f93ec7e44a9a20b44897dc7bf757c38e72607ef5c5bb74dbd3fdb9-merged.mount: Deactivated successfully.
Oct 11 04:37:09 compute-0 podman[161952]: 2025-10-11 04:37:09.791045627 +0000 UTC m=+0.120939484 container remove 9993fe38165a7091683d9cc3340748ceccbd61fd72294732bf326ce0a28cfc75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_cerf, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 11 04:37:09 compute-0 systemd[1]: libpod-conmon-9993fe38165a7091683d9cc3340748ceccbd61fd72294732bf326ce0a28cfc75.scope: Deactivated successfully.
Oct 11 04:37:09 compute-0 sudo[161579]: pam_unix(sudo:session): session closed for user root
Oct 11 04:37:09 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:37:09 compute-0 sudo[161965]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:37:09 compute-0 sudo[161965]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:37:09 compute-0 sudo[161965]: pam_unix(sudo:session): session closed for user root
Oct 11 04:37:10 compute-0 sudo[161991]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:37:10 compute-0 sudo[161991]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:37:10 compute-0 sudo[161991]: pam_unix(sudo:session): session closed for user root
Oct 11 04:37:10 compute-0 sudo[162016]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:37:10 compute-0 sudo[162016]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:37:10 compute-0 sudo[162016]: pam_unix(sudo:session): session closed for user root
Oct 11 04:37:10 compute-0 sudo[162041]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -- lvm list --format json
Oct 11 04:37:10 compute-0 sudo[162041]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:37:10 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v442: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:37:10 compute-0 podman[162105]: 2025-10-11 04:37:10.643837643 +0000 UTC m=+0.043911744 container create c1d65cd66945eb9838f92a22437805cab4e1dac99140973033ead81af0df6548 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 11 04:37:10 compute-0 systemd[1]: Started libpod-conmon-c1d65cd66945eb9838f92a22437805cab4e1dac99140973033ead81af0df6548.scope.
Oct 11 04:37:10 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:37:10 compute-0 podman[162105]: 2025-10-11 04:37:10.717214073 +0000 UTC m=+0.117288194 container init c1d65cd66945eb9838f92a22437805cab4e1dac99140973033ead81af0df6548 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_bohr, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 11 04:37:10 compute-0 podman[162105]: 2025-10-11 04:37:10.626395743 +0000 UTC m=+0.026469854 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:37:10 compute-0 podman[162105]: 2025-10-11 04:37:10.731625839 +0000 UTC m=+0.131699940 container start c1d65cd66945eb9838f92a22437805cab4e1dac99140973033ead81af0df6548 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_bohr, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 11 04:37:10 compute-0 podman[162105]: 2025-10-11 04:37:10.735155166 +0000 UTC m=+0.135229327 container attach c1d65cd66945eb9838f92a22437805cab4e1dac99140973033ead81af0df6548 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_bohr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 11 04:37:10 compute-0 thirsty_bohr[162122]: 167 167
Oct 11 04:37:10 compute-0 podman[162105]: 2025-10-11 04:37:10.736817137 +0000 UTC m=+0.136891238 container died c1d65cd66945eb9838f92a22437805cab4e1dac99140973033ead81af0df6548 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_bohr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 11 04:37:10 compute-0 systemd[1]: libpod-c1d65cd66945eb9838f92a22437805cab4e1dac99140973033ead81af0df6548.scope: Deactivated successfully.
Oct 11 04:37:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-f90fbea391f105e821ad45297bd4c07562b14b9b5e413d154b8b1473173b5c70-merged.mount: Deactivated successfully.
Oct 11 04:37:10 compute-0 podman[162105]: 2025-10-11 04:37:10.776417113 +0000 UTC m=+0.176491224 container remove c1d65cd66945eb9838f92a22437805cab4e1dac99140973033ead81af0df6548 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True)
Oct 11 04:37:10 compute-0 systemd[1]: libpod-conmon-c1d65cd66945eb9838f92a22437805cab4e1dac99140973033ead81af0df6548.scope: Deactivated successfully.
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.963 161813 INFO neutron.common.config [-] Logging enabled!
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.964 161813 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.964 161813 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.964 161813 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.965 161813 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.965 161813 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.965 161813 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.965 161813 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.965 161813 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.965 161813 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.965 161813 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.966 161813 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.966 161813 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.966 161813 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.966 161813 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.966 161813 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.966 161813 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.966 161813 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.966 161813 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.966 161813 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.966 161813 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.967 161813 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.967 161813 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.967 161813 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.967 161813 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.967 161813 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.967 161813 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.967 161813 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.967 161813 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.967 161813 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.967 161813 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.968 161813 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.968 161813 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.968 161813 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.968 161813 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.968 161813 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.968 161813 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.968 161813 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.968 161813 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.968 161813 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.969 161813 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.969 161813 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.969 161813 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.969 161813 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.969 161813 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.969 161813 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.969 161813 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.969 161813 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.969 161813 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.969 161813 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.970 161813 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.970 161813 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.970 161813 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.970 161813 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.970 161813 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.970 161813 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.970 161813 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.970 161813 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.970 161813 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.970 161813 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.971 161813 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.971 161813 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.971 161813 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.971 161813 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.971 161813 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.971 161813 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.971 161813 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.971 161813 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.971 161813 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.972 161813 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.972 161813 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.972 161813 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.972 161813 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.972 161813 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.972 161813 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.972 161813 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.972 161813 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.972 161813 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.972 161813 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.973 161813 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.973 161813 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.973 161813 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.973 161813 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.973 161813 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.973 161813 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.973 161813 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.973 161813 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.973 161813 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.974 161813 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.974 161813 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.974 161813 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.974 161813 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.974 161813 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.974 161813 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.974 161813 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.974 161813 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.974 161813 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.974 161813 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.974 161813 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.975 161813 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.975 161813 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.975 161813 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.975 161813 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.975 161813 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.975 161813 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.975 161813 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.975 161813 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.975 161813 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.975 161813 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.976 161813 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.976 161813 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.976 161813 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.976 161813 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.976 161813 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.976 161813 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.976 161813 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.976 161813 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.976 161813 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.977 161813 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.977 161813 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.977 161813 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.977 161813 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.977 161813 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.977 161813 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.977 161813 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.977 161813 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.977 161813 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.978 161813 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.978 161813 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.978 161813 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.978 161813 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.978 161813 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.978 161813 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.978 161813 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.978 161813 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.978 161813 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.979 161813 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.979 161813 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.979 161813 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.979 161813 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.979 161813 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.979 161813 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.979 161813 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.979 161813 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.979 161813 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.979 161813 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.980 161813 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.980 161813 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.980 161813 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.980 161813 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.980 161813 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.980 161813 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.980 161813 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.980 161813 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.980 161813 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.980 161813 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.981 161813 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.981 161813 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.981 161813 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.981 161813 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.981 161813 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.981 161813 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.981 161813 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.981 161813 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.981 161813 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.982 161813 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.982 161813 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.982 161813 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.982 161813 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.982 161813 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.982 161813 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.982 161813 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.982 161813 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.982 161813 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.982 161813 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.983 161813 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.983 161813 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.983 161813 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.983 161813 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.983 161813 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.983 161813 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.983 161813 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.983 161813 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.983 161813 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.984 161813 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.984 161813 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.984 161813 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.984 161813 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.984 161813 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.984 161813 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.984 161813 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.984 161813 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.984 161813 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.985 161813 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.985 161813 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.985 161813 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.985 161813 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.985 161813 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.985 161813 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.985 161813 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.985 161813 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.985 161813 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.985 161813 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.986 161813 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.986 161813 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.986 161813 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.986 161813 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.986 161813 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.986 161813 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.986 161813 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.986 161813 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.986 161813 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.986 161813 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.987 161813 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.987 161813 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.987 161813 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.987 161813 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.987 161813 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.987 161813 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.987 161813 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.987 161813 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.987 161813 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.987 161813 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.988 161813 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.988 161813 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.988 161813 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.988 161813 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.988 161813 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.988 161813 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.988 161813 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.988 161813 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.988 161813 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.989 161813 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.989 161813 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.989 161813 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.989 161813 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.989 161813 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.989 161813 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.989 161813 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.989 161813 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.989 161813 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.989 161813 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.990 161813 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.990 161813 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.990 161813 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.990 161813 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.990 161813 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.990 161813 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.990 161813 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.990 161813 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.990 161813 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.990 161813 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.991 161813 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.991 161813 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.991 161813 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.991 161813 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.991 161813 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.991 161813 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.991 161813 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.991 161813 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.991 161813 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.992 161813 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.992 161813 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.992 161813 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.992 161813 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.992 161813 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.992 161813 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.992 161813 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.992 161813 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.992 161813 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.992 161813 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.993 161813 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.993 161813 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.993 161813 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.993 161813 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.993 161813 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.993 161813 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.993 161813 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.994 161813 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.994 161813 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.994 161813 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.994 161813 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.994 161813 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.994 161813 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.994 161813 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.994 161813 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.994 161813 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.994 161813 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.995 161813 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.995 161813 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.995 161813 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.995 161813 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.995 161813 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.995 161813 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.995 161813 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.995 161813 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.995 161813 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.996 161813 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.996 161813 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.996 161813 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.996 161813 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.996 161813 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.996 161813 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:10.996 161813 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Oct 11 04:37:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:11.005 161813 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 11 04:37:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:11.005 161813 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 11 04:37:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:11.005 161813 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 11 04:37:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:11.005 161813 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Oct 11 04:37:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:11.005 161813 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Oct 11 04:37:11 compute-0 podman[162145]: 2025-10-11 04:37:11.012247831 +0000 UTC m=+0.058054793 container create eceedc3d76c81ae6068654a697e7b41bf0033327ecdf9262ae03150bf9266ef8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_shtern, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:37:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:11.018 161813 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 2ff6420e-86e1-487c-bef9-adac80b75ae0 (UUID: 2ff6420e-86e1-487c-bef9-adac80b75ae0) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Oct 11 04:37:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:11.036 161813 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Oct 11 04:37:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:11.037 161813 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Oct 11 04:37:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:11.037 161813 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 11 04:37:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:11.037 161813 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 11 04:37:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:11.040 161813 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Oct 11 04:37:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:11.045 161813 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Oct 11 04:37:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:11.051 161813 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '2ff6420e-86e1-487c-bef9-adac80b75ae0'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7efd8e674fd0>], external_ids={}, name=2ff6420e-86e1-487c-bef9-adac80b75ae0, nb_cfg_timestamp=1760157368032, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:37:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:11.052 161813 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7efd8e5f6e20>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Oct 11 04:37:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:11.053 161813 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:37:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:11.053 161813 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:37:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:11.053 161813 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:37:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:11.053 161813 INFO oslo_service.service [-] Starting 1 workers
Oct 11 04:37:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:11.057 161813 DEBUG oslo_service.service [-] Started child 162161 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Oct 11 04:37:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:11.060 161813 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpaqadygdj/privsep.sock']
Oct 11 04:37:11 compute-0 systemd[1]: Started libpod-conmon-eceedc3d76c81ae6068654a697e7b41bf0033327ecdf9262ae03150bf9266ef8.scope.
Oct 11 04:37:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:11.064 162161 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-892404'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Oct 11 04:37:11 compute-0 podman[162145]: 2025-10-11 04:37:10.98180125 +0000 UTC m=+0.027608252 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:37:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:11.098 162161 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Oct 11 04:37:11 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:37:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:11.098 162161 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Oct 11 04:37:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:11.099 162161 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 11 04:37:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fef7464bb46af275a0ca8bb60a00e66847ef3642d42b05f35b357f245552df51/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:37:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:11.102 162161 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Oct 11 04:37:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fef7464bb46af275a0ca8bb60a00e66847ef3642d42b05f35b357f245552df51/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:37:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fef7464bb46af275a0ca8bb60a00e66847ef3642d42b05f35b357f245552df51/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:37:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fef7464bb46af275a0ca8bb60a00e66847ef3642d42b05f35b357f245552df51/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:37:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:11.109 162161 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Oct 11 04:37:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:11.118 162161 INFO eventlet.wsgi.server [-] (162161) wsgi starting up on http:/var/lib/neutron/metadata_proxy
Oct 11 04:37:11 compute-0 podman[162145]: 2025-10-11 04:37:11.121773973 +0000 UTC m=+0.167580965 container init eceedc3d76c81ae6068654a697e7b41bf0033327ecdf9262ae03150bf9266ef8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_shtern, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:37:11 compute-0 podman[162145]: 2025-10-11 04:37:11.129032902 +0000 UTC m=+0.174839864 container start eceedc3d76c81ae6068654a697e7b41bf0033327ecdf9262ae03150bf9266ef8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_shtern, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 11 04:37:11 compute-0 podman[162145]: 2025-10-11 04:37:11.132452466 +0000 UTC m=+0.178259428 container attach eceedc3d76c81ae6068654a697e7b41bf0033327ecdf9262ae03150bf9266ef8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_shtern, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:37:11 compute-0 ceph-mon[74243]: pgmap v442: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:37:11 compute-0 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Oct 11 04:37:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:11.687 161813 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Oct 11 04:37:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:11.688 161813 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpaqadygdj/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Oct 11 04:37:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:11.577 162171 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 11 04:37:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:11.582 162171 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 11 04:37:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:11.586 162171 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Oct 11 04:37:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:11.586 162171 INFO oslo.privsep.daemon [-] privsep daemon running as pid 162171
Oct 11 04:37:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:11.692 162171 DEBUG oslo.privsep.daemon [-] privsep: reply[b3160a01-f495-45db-83e8-d467ce78bf0e]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:37:11 compute-0 jolly_shtern[162163]: {
Oct 11 04:37:11 compute-0 jolly_shtern[162163]:     "0": [
Oct 11 04:37:11 compute-0 jolly_shtern[162163]:         {
Oct 11 04:37:11 compute-0 jolly_shtern[162163]:             "devices": [
Oct 11 04:37:11 compute-0 jolly_shtern[162163]:                 "/dev/loop3"
Oct 11 04:37:11 compute-0 jolly_shtern[162163]:             ],
Oct 11 04:37:11 compute-0 jolly_shtern[162163]:             "lv_name": "ceph_lv0",
Oct 11 04:37:11 compute-0 jolly_shtern[162163]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:37:11 compute-0 jolly_shtern[162163]:             "lv_size": "21470642176",
Oct 11 04:37:11 compute-0 jolly_shtern[162163]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29ed28f5-c2da-4c6f-bb64-dc7391248f4a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:37:11 compute-0 jolly_shtern[162163]:             "lv_uuid": "SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf",
Oct 11 04:37:11 compute-0 jolly_shtern[162163]:             "name": "ceph_lv0",
Oct 11 04:37:11 compute-0 jolly_shtern[162163]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:37:11 compute-0 jolly_shtern[162163]:             "tags": {
Oct 11 04:37:11 compute-0 jolly_shtern[162163]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:37:11 compute-0 jolly_shtern[162163]:                 "ceph.block_uuid": "SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf",
Oct 11 04:37:11 compute-0 jolly_shtern[162163]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:37:11 compute-0 jolly_shtern[162163]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:37:11 compute-0 jolly_shtern[162163]:                 "ceph.cluster_name": "ceph",
Oct 11 04:37:11 compute-0 jolly_shtern[162163]:                 "ceph.crush_device_class": "",
Oct 11 04:37:11 compute-0 jolly_shtern[162163]:                 "ceph.encrypted": "0",
Oct 11 04:37:11 compute-0 jolly_shtern[162163]:                 "ceph.osd_fsid": "29ed28f5-c2da-4c6f-bb64-dc7391248f4a",
Oct 11 04:37:11 compute-0 jolly_shtern[162163]:                 "ceph.osd_id": "0",
Oct 11 04:37:11 compute-0 jolly_shtern[162163]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:37:11 compute-0 jolly_shtern[162163]:                 "ceph.type": "block",
Oct 11 04:37:11 compute-0 jolly_shtern[162163]:                 "ceph.vdo": "0"
Oct 11 04:37:11 compute-0 jolly_shtern[162163]:             },
Oct 11 04:37:11 compute-0 jolly_shtern[162163]:             "type": "block",
Oct 11 04:37:11 compute-0 jolly_shtern[162163]:             "vg_name": "ceph_vg0"
Oct 11 04:37:11 compute-0 jolly_shtern[162163]:         }
Oct 11 04:37:11 compute-0 jolly_shtern[162163]:     ],
Oct 11 04:37:11 compute-0 jolly_shtern[162163]:     "1": [
Oct 11 04:37:11 compute-0 jolly_shtern[162163]:         {
Oct 11 04:37:11 compute-0 jolly_shtern[162163]:             "devices": [
Oct 11 04:37:11 compute-0 jolly_shtern[162163]:                 "/dev/loop4"
Oct 11 04:37:11 compute-0 jolly_shtern[162163]:             ],
Oct 11 04:37:11 compute-0 jolly_shtern[162163]:             "lv_name": "ceph_lv1",
Oct 11 04:37:11 compute-0 jolly_shtern[162163]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:37:11 compute-0 jolly_shtern[162163]:             "lv_size": "21470642176",
Oct 11 04:37:11 compute-0 jolly_shtern[162163]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=235849fc-4683-43e5-9b6a-a0d6f8d1cee8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:37:11 compute-0 jolly_shtern[162163]:             "lv_uuid": "N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol",
Oct 11 04:37:11 compute-0 jolly_shtern[162163]:             "name": "ceph_lv1",
Oct 11 04:37:11 compute-0 jolly_shtern[162163]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:37:11 compute-0 jolly_shtern[162163]:             "tags": {
Oct 11 04:37:11 compute-0 jolly_shtern[162163]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:37:11 compute-0 jolly_shtern[162163]:                 "ceph.block_uuid": "N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol",
Oct 11 04:37:11 compute-0 jolly_shtern[162163]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:37:11 compute-0 jolly_shtern[162163]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:37:11 compute-0 jolly_shtern[162163]:                 "ceph.cluster_name": "ceph",
Oct 11 04:37:11 compute-0 jolly_shtern[162163]:                 "ceph.crush_device_class": "",
Oct 11 04:37:11 compute-0 jolly_shtern[162163]:                 "ceph.encrypted": "0",
Oct 11 04:37:11 compute-0 jolly_shtern[162163]:                 "ceph.osd_fsid": "235849fc-4683-43e5-9b6a-a0d6f8d1cee8",
Oct 11 04:37:11 compute-0 jolly_shtern[162163]:                 "ceph.osd_id": "1",
Oct 11 04:37:11 compute-0 jolly_shtern[162163]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:37:11 compute-0 jolly_shtern[162163]:                 "ceph.type": "block",
Oct 11 04:37:11 compute-0 jolly_shtern[162163]:                 "ceph.vdo": "0"
Oct 11 04:37:11 compute-0 jolly_shtern[162163]:             },
Oct 11 04:37:11 compute-0 jolly_shtern[162163]:             "type": "block",
Oct 11 04:37:11 compute-0 jolly_shtern[162163]:             "vg_name": "ceph_vg1"
Oct 11 04:37:11 compute-0 jolly_shtern[162163]:         }
Oct 11 04:37:11 compute-0 jolly_shtern[162163]:     ],
Oct 11 04:37:11 compute-0 jolly_shtern[162163]:     "2": [
Oct 11 04:37:11 compute-0 jolly_shtern[162163]:         {
Oct 11 04:37:11 compute-0 jolly_shtern[162163]:             "devices": [
Oct 11 04:37:11 compute-0 jolly_shtern[162163]:                 "/dev/loop5"
Oct 11 04:37:11 compute-0 jolly_shtern[162163]:             ],
Oct 11 04:37:11 compute-0 jolly_shtern[162163]:             "lv_name": "ceph_lv2",
Oct 11 04:37:11 compute-0 jolly_shtern[162163]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:37:11 compute-0 jolly_shtern[162163]:             "lv_size": "21470642176",
Oct 11 04:37:11 compute-0 jolly_shtern[162163]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=30486846-2cc7-4787-8e36-0fb42ef328c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:37:11 compute-0 jolly_shtern[162163]:             "lv_uuid": "HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a",
Oct 11 04:37:11 compute-0 jolly_shtern[162163]:             "name": "ceph_lv2",
Oct 11 04:37:11 compute-0 jolly_shtern[162163]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:37:11 compute-0 jolly_shtern[162163]:             "tags": {
Oct 11 04:37:11 compute-0 jolly_shtern[162163]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:37:11 compute-0 jolly_shtern[162163]:                 "ceph.block_uuid": "HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a",
Oct 11 04:37:11 compute-0 jolly_shtern[162163]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:37:11 compute-0 jolly_shtern[162163]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:37:11 compute-0 jolly_shtern[162163]:                 "ceph.cluster_name": "ceph",
Oct 11 04:37:11 compute-0 jolly_shtern[162163]:                 "ceph.crush_device_class": "",
Oct 11 04:37:11 compute-0 jolly_shtern[162163]:                 "ceph.encrypted": "0",
Oct 11 04:37:11 compute-0 jolly_shtern[162163]:                 "ceph.osd_fsid": "30486846-2cc7-4787-8e36-0fb42ef328c5",
Oct 11 04:37:11 compute-0 jolly_shtern[162163]:                 "ceph.osd_id": "2",
Oct 11 04:37:11 compute-0 jolly_shtern[162163]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:37:11 compute-0 jolly_shtern[162163]:                 "ceph.type": "block",
Oct 11 04:37:11 compute-0 jolly_shtern[162163]:                 "ceph.vdo": "0"
Oct 11 04:37:11 compute-0 jolly_shtern[162163]:             },
Oct 11 04:37:11 compute-0 jolly_shtern[162163]:             "type": "block",
Oct 11 04:37:11 compute-0 jolly_shtern[162163]:             "vg_name": "ceph_vg2"
Oct 11 04:37:11 compute-0 jolly_shtern[162163]:         }
Oct 11 04:37:11 compute-0 jolly_shtern[162163]:     ]
Oct 11 04:37:11 compute-0 jolly_shtern[162163]: }
Oct 11 04:37:11 compute-0 systemd[1]: libpod-eceedc3d76c81ae6068654a697e7b41bf0033327ecdf9262ae03150bf9266ef8.scope: Deactivated successfully.
Oct 11 04:37:12 compute-0 podman[162180]: 2025-10-11 04:37:12.000030668 +0000 UTC m=+0.043941595 container died eceedc3d76c81ae6068654a697e7b41bf0033327ecdf9262ae03150bf9266ef8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_shtern, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 11 04:37:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-fef7464bb46af275a0ca8bb60a00e66847ef3642d42b05f35b357f245552df51-merged.mount: Deactivated successfully.
Oct 11 04:37:12 compute-0 podman[162180]: 2025-10-11 04:37:12.075671574 +0000 UTC m=+0.119582441 container remove eceedc3d76c81ae6068654a697e7b41bf0033327ecdf9262ae03150bf9266ef8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_shtern, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 11 04:37:12 compute-0 systemd[1]: libpod-conmon-eceedc3d76c81ae6068654a697e7b41bf0033327ecdf9262ae03150bf9266ef8.scope: Deactivated successfully.
Oct 11 04:37:12 compute-0 sudo[162041]: pam_unix(sudo:session): session closed for user root
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.186 162171 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.186 162171 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.186 162171 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:37:12 compute-0 sudo[162195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:37:12 compute-0 sudo[162195]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:37:12 compute-0 sudo[162195]: pam_unix(sudo:session): session closed for user root
Oct 11 04:37:12 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v443: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:37:12 compute-0 sudo[162220]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:37:12 compute-0 sudo[162220]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:37:12 compute-0 sudo[162220]: pam_unix(sudo:session): session closed for user root
Oct 11 04:37:12 compute-0 sudo[162245]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:37:12 compute-0 sudo[162245]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:37:12 compute-0 sudo[162245]: pam_unix(sudo:session): session closed for user root
Oct 11 04:37:12 compute-0 sudo[162270]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -- raw list --format json
Oct 11 04:37:12 compute-0 sudo[162270]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.715 162171 DEBUG oslo.privsep.daemon [-] privsep: reply[551eea54-0736-49f5-9132-f53cabb4268a]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.718 161813 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=2ff6420e-86e1-487c-bef9-adac80b75ae0, column=external_ids, values=({'neutron:ovn-metadata-id': '3eded32d-ce0f-583c-ae17-227e6b958ddc'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.732 161813 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=2ff6420e-86e1-487c-bef9-adac80b75ae0, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.739 161813 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.740 161813 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.740 161813 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.740 161813 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.740 161813 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.740 161813 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.741 161813 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.741 161813 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.741 161813 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.741 161813 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.742 161813 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.742 161813 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.742 161813 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.742 161813 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.743 161813 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.743 161813 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.743 161813 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.744 161813 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.744 161813 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.744 161813 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.744 161813 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.745 161813 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.745 161813 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.746 161813 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.746 161813 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.746 161813 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.746 161813 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.747 161813 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.747 161813 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.747 161813 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.747 161813 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.748 161813 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.748 161813 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.748 161813 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.749 161813 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.749 161813 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.749 161813 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.750 161813 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.750 161813 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.750 161813 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.750 161813 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.750 161813 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.751 161813 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.751 161813 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.751 161813 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.751 161813 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.752 161813 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.752 161813 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.752 161813 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.752 161813 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.752 161813 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.753 161813 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.753 161813 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.753 161813 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.753 161813 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.753 161813 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.754 161813 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.754 161813 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.754 161813 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.754 161813 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.755 161813 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.755 161813 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.755 161813 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.755 161813 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.755 161813 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.756 161813 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.756 161813 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.756 161813 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.756 161813 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.757 161813 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.757 161813 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.757 161813 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.757 161813 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.757 161813 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.758 161813 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.758 161813 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.758 161813 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.758 161813 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.759 161813 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.759 161813 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.759 161813 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.759 161813 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.759 161813 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.760 161813 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.760 161813 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.760 161813 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.760 161813 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.761 161813 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.761 161813 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.761 161813 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.761 161813 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.761 161813 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.762 161813 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.762 161813 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.762 161813 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.762 161813 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.762 161813 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.763 161813 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.763 161813 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.763 161813 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.763 161813 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.763 161813 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.764 161813 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.764 161813 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.765 161813 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.765 161813 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.765 161813 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.765 161813 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.766 161813 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.766 161813 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.766 161813 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.766 161813 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.767 161813 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.767 161813 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.767 161813 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.767 161813 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.768 161813 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.768 161813 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.768 161813 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.768 161813 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.769 161813 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.769 161813 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.769 161813 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.769 161813 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.769 161813 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.770 161813 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.770 161813 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.770 161813 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.770 161813 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.771 161813 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.771 161813 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.771 161813 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.771 161813 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.772 161813 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.772 161813 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.772 161813 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.772 161813 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.773 161813 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.773 161813 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.773 161813 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.773 161813 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.773 161813 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.774 161813 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.774 161813 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.774 161813 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.774 161813 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.774 161813 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.775 161813 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.775 161813 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.775 161813 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.775 161813 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.776 161813 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.776 161813 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.776 161813 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.776 161813 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.776 161813 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.777 161813 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.777 161813 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.777 161813 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.777 161813 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.778 161813 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.778 161813 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.778 161813 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.778 161813 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.778 161813 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.779 161813 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.779 161813 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.779 161813 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.779 161813 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.779 161813 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.780 161813 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.780 161813 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.780 161813 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.780 161813 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.781 161813 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.781 161813 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.781 161813 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.781 161813 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.782 161813 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.782 161813 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.782 161813 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.782 161813 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.782 161813 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.783 161813 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.783 161813 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.783 161813 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.783 161813 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.784 161813 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.784 161813 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.784 161813 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.784 161813 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.784 161813 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.784 161813 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.785 161813 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.785 161813 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.785 161813 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.785 161813 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.785 161813 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.785 161813 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.785 161813 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.786 161813 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.786 161813 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.786 161813 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.786 161813 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.786 161813 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.786 161813 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.786 161813 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.787 161813 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.787 161813 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.787 161813 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.787 161813 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.787 161813 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.787 161813 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.787 161813 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.788 161813 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.788 161813 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.788 161813 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.788 161813 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.788 161813 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.788 161813 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.788 161813 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.788 161813 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.789 161813 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.789 161813 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.789 161813 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.789 161813 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.789 161813 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.789 161813 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.789 161813 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.790 161813 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.790 161813 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.790 161813 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.790 161813 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.790 161813 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.790 161813 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.790 161813 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.791 161813 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.791 161813 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.791 161813 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.791 161813 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.791 161813 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.791 161813 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.791 161813 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.792 161813 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.792 161813 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.792 161813 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.792 161813 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.792 161813 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.792 161813 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.792 161813 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.793 161813 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.793 161813 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.793 161813 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.793 161813 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.793 161813 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.793 161813 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.793 161813 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.794 161813 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.794 161813 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.794 161813 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.794 161813 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.794 161813 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.794 161813 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.794 161813 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.795 161813 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.795 161813 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.795 161813 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.795 161813 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.795 161813 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.795 161813 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.796 161813 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.796 161813 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.796 161813 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.796 161813 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.796 161813 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.796 161813 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.796 161813 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.797 161813 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.797 161813 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.797 161813 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.797 161813 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.797 161813 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.798 161813 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.798 161813 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.798 161813 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.798 161813 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.798 161813 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.798 161813 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.798 161813 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.799 161813 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.799 161813 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.799 161813 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.799 161813 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.799 161813 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.800 161813 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.800 161813 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.800 161813 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.800 161813 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.800 161813 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.800 161813 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.800 161813 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.801 161813 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.801 161813 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:37:12 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:37:12.801 161813 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
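The DEBUG lines above come from oslo.config's log_opt_values(), which the metadata agent calls at start-up to dump every registered option (group.name = value) and mask secrets such as transport_url. The following is a minimal illustrative sketch of how a service produces such a dump; the [ovn] option names and values mirror the log above, everything else (project name, defaults) is assumed for the example.

    import logging

    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)
    CONF = cfg.CONF

    # Register a couple of the [ovn] options seen in the dump above
    # (names taken from the log; defaults here are illustrative only).
    ovn_opts = [
        cfg.StrOpt('ovn_sb_connection', default='tcp:127.0.0.1:6642',
                   help='Southbound OVSDB connection string.'),
        cfg.IntOpt('ovsdb_connection_timeout', default=180,
                   help='Timeout in seconds for OVSDB transactions.'),
    ]
    CONF.register_opts(ovn_opts, group='ovn')

    if __name__ == '__main__':
        CONF([], project='demo')
        # Emits one DEBUG line per registered option, in the same
        # "ovn.ovn_sb_connection = ..." form as the journal entries above.
        CONF.log_opt_values(LOG, logging.DEBUG)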
Oct 11 04:37:12 compute-0 podman[162336]: 2025-10-11 04:37:12.916883645 +0000 UTC m=+0.041203688 container create 7937f9f8f0292253b4e1ccf24fb0e8eea8aa018bf5872f4412d59e3bc78fe538 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_pascal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:37:12 compute-0 systemd[1]: Started libpod-conmon-7937f9f8f0292253b4e1ccf24fb0e8eea8aa018bf5872f4412d59e3bc78fe538.scope.
Oct 11 04:37:12 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:37:12 compute-0 podman[162336]: 2025-10-11 04:37:12.991193088 +0000 UTC m=+0.115513131 container init 7937f9f8f0292253b4e1ccf24fb0e8eea8aa018bf5872f4412d59e3bc78fe538 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_pascal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:37:12 compute-0 podman[162336]: 2025-10-11 04:37:12.89721859 +0000 UTC m=+0.021538663 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:37:12 compute-0 podman[162336]: 2025-10-11 04:37:12.997246347 +0000 UTC m=+0.121566400 container start 7937f9f8f0292253b4e1ccf24fb0e8eea8aa018bf5872f4412d59e3bc78fe538 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:37:13 compute-0 podman[162336]: 2025-10-11 04:37:13.000425896 +0000 UTC m=+0.124745949 container attach 7937f9f8f0292253b4e1ccf24fb0e8eea8aa018bf5872f4412d59e3bc78fe538 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_pascal, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Oct 11 04:37:13 compute-0 zealous_pascal[162352]: 167 167
Oct 11 04:37:13 compute-0 systemd[1]: libpod-7937f9f8f0292253b4e1ccf24fb0e8eea8aa018bf5872f4412d59e3bc78fe538.scope: Deactivated successfully.
Oct 11 04:37:13 compute-0 podman[162336]: 2025-10-11 04:37:13.003486281 +0000 UTC m=+0.127806354 container died 7937f9f8f0292253b4e1ccf24fb0e8eea8aa018bf5872f4412d59e3bc78fe538 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 11 04:37:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-6c626be93ca11d97d4d0b5046b560f9ad1ca136287835e68dfbc66f4c06e4fff-merged.mount: Deactivated successfully.
Oct 11 04:37:13 compute-0 podman[162336]: 2025-10-11 04:37:13.038217298 +0000 UTC m=+0.162537341 container remove 7937f9f8f0292253b4e1ccf24fb0e8eea8aa018bf5872f4412d59e3bc78fe538 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:37:13 compute-0 systemd[1]: libpod-conmon-7937f9f8f0292253b4e1ccf24fb0e8eea8aa018bf5872f4412d59e3bc78fe538.scope: Deactivated successfully.
Oct 11 04:37:13 compute-0 podman[162376]: 2025-10-11 04:37:13.254514334 +0000 UTC m=+0.060476623 container create d1d0c760bc39568ad37b9cf27eb27539f622378b65e92c1f5fbb46c186f4d9a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 11 04:37:13 compute-0 systemd[1]: Started libpod-conmon-d1d0c760bc39568ad37b9cf27eb27539f622378b65e92c1f5fbb46c186f4d9a4.scope.
Oct 11 04:37:13 compute-0 podman[162376]: 2025-10-11 04:37:13.230888681 +0000 UTC m=+0.036851040 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:37:13 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:37:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd38ae35b2a17983f7355c03dbe3adfe54100735c9e1ab46a21beb3b9b1813f2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:37:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd38ae35b2a17983f7355c03dbe3adfe54100735c9e1ab46a21beb3b9b1813f2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:37:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd38ae35b2a17983f7355c03dbe3adfe54100735c9e1ab46a21beb3b9b1813f2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:37:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd38ae35b2a17983f7355c03dbe3adfe54100735c9e1ab46a21beb3b9b1813f2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:37:13 compute-0 podman[162376]: 2025-10-11 04:37:13.351274831 +0000 UTC m=+0.157237190 container init d1d0c760bc39568ad37b9cf27eb27539f622378b65e92c1f5fbb46c186f4d9a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 11 04:37:13 compute-0 podman[162376]: 2025-10-11 04:37:13.36301702 +0000 UTC m=+0.168979319 container start d1d0c760bc39568ad37b9cf27eb27539f622378b65e92c1f5fbb46c186f4d9a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 11 04:37:13 compute-0 podman[162376]: 2025-10-11 04:37:13.367578723 +0000 UTC m=+0.173541032 container attach d1d0c760bc39568ad37b9cf27eb27539f622378b65e92c1f5fbb46c186f4d9a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_volhard, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 11 04:37:13 compute-0 ceph-mon[74243]: pgmap v443: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:37:14 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v444: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:37:14 compute-0 flamboyant_volhard[162393]: {
Oct 11 04:37:14 compute-0 flamboyant_volhard[162393]:     "235849fc-4683-43e5-9b6a-a0d6f8d1cee8": {
Oct 11 04:37:14 compute-0 flamboyant_volhard[162393]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:37:14 compute-0 flamboyant_volhard[162393]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:37:14 compute-0 flamboyant_volhard[162393]:         "osd_id": 1,
Oct 11 04:37:14 compute-0 flamboyant_volhard[162393]:         "osd_uuid": "235849fc-4683-43e5-9b6a-a0d6f8d1cee8",
Oct 11 04:37:14 compute-0 flamboyant_volhard[162393]:         "type": "bluestore"
Oct 11 04:37:14 compute-0 flamboyant_volhard[162393]:     },
Oct 11 04:37:14 compute-0 flamboyant_volhard[162393]:     "29ed28f5-c2da-4c6f-bb64-dc7391248f4a": {
Oct 11 04:37:14 compute-0 flamboyant_volhard[162393]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:37:14 compute-0 flamboyant_volhard[162393]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:37:14 compute-0 flamboyant_volhard[162393]:         "osd_id": 0,
Oct 11 04:37:14 compute-0 flamboyant_volhard[162393]:         "osd_uuid": "29ed28f5-c2da-4c6f-bb64-dc7391248f4a",
Oct 11 04:37:14 compute-0 flamboyant_volhard[162393]:         "type": "bluestore"
Oct 11 04:37:14 compute-0 flamboyant_volhard[162393]:     },
Oct 11 04:37:14 compute-0 flamboyant_volhard[162393]:     "30486846-2cc7-4787-8e36-0fb42ef328c5": {
Oct 11 04:37:14 compute-0 flamboyant_volhard[162393]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:37:14 compute-0 flamboyant_volhard[162393]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:37:14 compute-0 flamboyant_volhard[162393]:         "osd_id": 2,
Oct 11 04:37:14 compute-0 flamboyant_volhard[162393]:         "osd_uuid": "30486846-2cc7-4787-8e36-0fb42ef328c5",
Oct 11 04:37:14 compute-0 flamboyant_volhard[162393]:         "type": "bluestore"
Oct 11 04:37:14 compute-0 flamboyant_volhard[162393]:     }
Oct 11 04:37:14 compute-0 flamboyant_volhard[162393]: }
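The short-lived container flamboyant_volhard prints a JSON inventory of the local BlueStore OSDs, keyed by OSD UUID, which is what cephadm's device refresh appears to be collecting here (the shape matches ceph-volume's raw listing). A small sketch of how such a document could be consumed, assuming it were saved to a hypothetical osd_inventory.json, follows.

    import json

    # Hypothetical file holding the JSON block printed by the container above.
    with open('osd_inventory.json') as fh:
        inventory = json.load(fh)

    # Map osd_id -> (device, ceph_fsid) for the BlueStore OSDs on this host.
    osds = {
        entry['osd_id']: (entry['device'], entry['ceph_fsid'])
        for entry in inventory.values()
        if entry.get('type') == 'bluestore'
    }

    for osd_id, (device, fsid) in sorted(osds.items()):
        print(f"osd.{osd_id}: {device} (cluster {fsid})")

Against the output above this would report osd.0, osd.1 and osd.2 on the ceph_vg*/ceph_lv* mapper devices, all belonging to cluster 166d0489-2ae7-59eb-961c-c1b5cda4b45a.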
Oct 11 04:37:14 compute-0 systemd[1]: libpod-d1d0c760bc39568ad37b9cf27eb27539f622378b65e92c1f5fbb46c186f4d9a4.scope: Deactivated successfully.
Oct 11 04:37:14 compute-0 systemd[1]: libpod-d1d0c760bc39568ad37b9cf27eb27539f622378b65e92c1f5fbb46c186f4d9a4.scope: Consumed 1.193s CPU time.
Oct 11 04:37:14 compute-0 podman[162376]: 2025-10-11 04:37:14.559134705 +0000 UTC m=+1.365096984 container died d1d0c760bc39568ad37b9cf27eb27539f622378b65e92c1f5fbb46c186f4d9a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_volhard, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:37:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-bd38ae35b2a17983f7355c03dbe3adfe54100735c9e1ab46a21beb3b9b1813f2-merged.mount: Deactivated successfully.
Oct 11 04:37:14 compute-0 podman[162376]: 2025-10-11 04:37:14.621631187 +0000 UTC m=+1.427593466 container remove d1d0c760bc39568ad37b9cf27eb27539f622378b65e92c1f5fbb46c186f4d9a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:37:14 compute-0 systemd[1]: libpod-conmon-d1d0c760bc39568ad37b9cf27eb27539f622378b65e92c1f5fbb46c186f4d9a4.scope: Deactivated successfully.
Oct 11 04:37:14 compute-0 sudo[162270]: pam_unix(sudo:session): session closed for user root
Oct 11 04:37:14 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:37:14 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:37:14 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:37:14 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:37:14 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev d3854245-0e33-47e6-83ce-e1d821ea7436 does not exist
Oct 11 04:37:14 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev fc2e3c22-e3e2-4c00-89f5-5e5d43bef3dd does not exist
Oct 11 04:37:14 compute-0 sudo[162437]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:37:14 compute-0 sudo[162437]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:37:14 compute-0 sudo[162437]: pam_unix(sudo:session): session closed for user root
Oct 11 04:37:14 compute-0 sudo[162462]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 04:37:14 compute-0 sudo[162462]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:37:14 compute-0 sudo[162462]: pam_unix(sudo:session): session closed for user root
Oct 11 04:37:14 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:37:14 compute-0 sshd-session[162483]: Accepted publickey for zuul from 192.168.122.30 port 60728 ssh2: ECDSA SHA256:fsXfQ2jtOYwhi5zevC2AykH2PVPp1BPcbFOVNPoZIaQ
Oct 11 04:37:14 compute-0 systemd-logind[801]: New session 50 of user zuul.
Oct 11 04:37:14 compute-0 systemd[1]: Started Session 50 of User zuul.
Oct 11 04:37:14 compute-0 sshd-session[162483]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 11 04:37:15 compute-0 ceph-mon[74243]: pgmap v444: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:37:15 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:37:15 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:37:16 compute-0 python3.9[162640]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 04:37:16 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v445: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:37:17 compute-0 sudo[162794]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atotiqozvcktemcddxkneqqqbgsaojbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157436.7772193-34-161538487537369/AnsiballZ_command.py'
Oct 11 04:37:17 compute-0 sudo[162794]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:37:17 compute-0 python3.9[162796]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:37:17 compute-0 ceph-mon[74243]: pgmap v445: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:37:17 compute-0 sudo[162794]: pam_unix(sudo:session): session closed for user root
Oct 11 04:37:18 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v446: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:37:18 compute-0 sudo[162959]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ncdlvibmzbzkfacnypekqpbepyjlclgr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157437.9332964-45-173684450234650/AnsiballZ_systemd_service.py'
Oct 11 04:37:18 compute-0 sudo[162959]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:37:19 compute-0 python3.9[162961]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 11 04:37:19 compute-0 systemd[1]: Reloading.
Oct 11 04:37:19 compute-0 systemd-rc-local-generator[162988]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:37:19 compute-0 systemd-sysv-generator[162991]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:37:19 compute-0 sudo[162959]: pam_unix(sudo:session): session closed for user root
Oct 11 04:37:19 compute-0 ceph-mon[74243]: pgmap v446: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:37:19 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:37:20 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v447: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:37:20 compute-0 python3.9[163147]: ansible-ansible.builtin.service_facts Invoked
Oct 11 04:37:20 compute-0 network[163164]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 11 04:37:20 compute-0 network[163165]: 'network-scripts' will be removed from distribution in near future.
Oct 11 04:37:20 compute-0 network[163166]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 11 04:37:21 compute-0 ceph-mon[74243]: pgmap v447: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:37:22 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v448: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:37:23 compute-0 ceph-mon[74243]: pgmap v448: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:37:24 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v449: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:37:24 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:37:25 compute-0 ceph-mon[74243]: pgmap v449: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:37:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:37:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:37:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:37:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:37:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:37:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:37:26 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v450: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:37:27 compute-0 ceph-mon[74243]: pgmap v450: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:37:28 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v451: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:37:28 compute-0 sudo[163429]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibqjzzguuarupqjznuezujfwyzsnveeg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157448.4965212-64-74254487911351/AnsiballZ_systemd_service.py'
Oct 11 04:37:28 compute-0 sudo[163429]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:37:29 compute-0 python3.9[163431]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:37:29 compute-0 sudo[163429]: pam_unix(sudo:session): session closed for user root
Oct 11 04:37:29 compute-0 ceph-mon[74243]: pgmap v451: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:37:29 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:37:29 compute-0 sudo[163582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbplhsoypjgucpazlyyxxmodcnvggkks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157449.504984-64-60780162314809/AnsiballZ_systemd_service.py'
Oct 11 04:37:29 compute-0 sudo[163582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:37:30 compute-0 python3.9[163584]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:37:30 compute-0 sudo[163582]: pam_unix(sudo:session): session closed for user root
Oct 11 04:37:30 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v452: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:37:30 compute-0 ceph-mon[74243]: pgmap v452: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:37:30 compute-0 sudo[163735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nstwokoyytlnnygyrcppgscfgvvhupfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157450.4328015-64-174692289958681/AnsiballZ_systemd_service.py'
Oct 11 04:37:30 compute-0 sudo[163735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:37:31 compute-0 python3.9[163737]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:37:31 compute-0 sudo[163735]: pam_unix(sudo:session): session closed for user root
Oct 11 04:37:31 compute-0 sudo[163888]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjzwxniazdoocfbrofnylfqnanmvaadm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157451.419817-64-153771246788295/AnsiballZ_systemd_service.py'
Oct 11 04:37:31 compute-0 sudo[163888]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:37:32 compute-0 python3.9[163890]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:37:32 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v453: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:37:32 compute-0 sudo[163888]: pam_unix(sudo:session): session closed for user root
Oct 11 04:37:32 compute-0 sudo[164054]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubugtsxynxhecjudvtkrrlwgpmvepooc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157452.4940925-64-126177206947119/AnsiballZ_systemd_service.py'
Oct 11 04:37:32 compute-0 sudo[164054]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:37:33 compute-0 podman[164015]: 2025-10-11 04:37:33.035926545 +0000 UTC m=+0.182987430 container health_status 086abfbe8dbe9e4742c5d0e8540a7653c1c0a5c00ecda3b6e948040a718938cb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251009)
Oct 11 04:37:33 compute-0 python3.9[164062]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:37:33 compute-0 sudo[164054]: pam_unix(sudo:session): session closed for user root
Oct 11 04:37:33 compute-0 ceph-mon[74243]: pgmap v453: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:37:33 compute-0 sudo[164221]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlgcpvmgarlotwdedyicjfkkuquyzdng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157453.4571865-64-116848212596532/AnsiballZ_systemd_service.py'
Oct 11 04:37:33 compute-0 sudo[164221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:37:34 compute-0 python3.9[164223]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:37:34 compute-0 sudo[164221]: pam_unix(sudo:session): session closed for user root
Oct 11 04:37:34 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v454: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:37:34 compute-0 sudo[164374]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpquuenhuzhngxumdnawcvpllmauawra ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157454.4072561-64-79293418032702/AnsiballZ_systemd_service.py'
Oct 11 04:37:34 compute-0 sudo[164374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:37:34 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:37:35 compute-0 python3.9[164376]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:37:35 compute-0 sudo[164374]: pam_unix(sudo:session): session closed for user root
Oct 11 04:37:35 compute-0 ceph-mon[74243]: pgmap v454: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:37:36 compute-0 sudo[164527]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gsxwiepzbzseovvnilwswisqkxsxlwof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157455.567099-116-178476601199383/AnsiballZ_file.py'
Oct 11 04:37:36 compute-0 sudo[164527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:37:36 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v455: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:37:36 compute-0 python3.9[164529]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:37:36 compute-0 sudo[164527]: pam_unix(sudo:session): session closed for user root
Oct 11 04:37:36 compute-0 sudo[164679]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpndyeeedxqfdwdniltzyllpwwxxxsky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157456.614862-116-188249864942906/AnsiballZ_file.py'
Oct 11 04:37:36 compute-0 sudo[164679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:37:37 compute-0 python3.9[164681]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:37:37 compute-0 sudo[164679]: pam_unix(sudo:session): session closed for user root
Oct 11 04:37:37 compute-0 ceph-mon[74243]: pgmap v455: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:37:37 compute-0 sudo[164831]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-riuhzcvuqxckzdqezgnowryvaqopkhib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157457.2907863-116-229098927281146/AnsiballZ_file.py'
Oct 11 04:37:37 compute-0 sudo[164831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:37:37 compute-0 python3.9[164833]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:37:37 compute-0 sudo[164831]: pam_unix(sudo:session): session closed for user root
Oct 11 04:37:38 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v456: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:37:38 compute-0 sudo[164994]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezwsowblwyodgnpmjklorvxgwoldaxea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157458.0482423-116-27142452396688/AnsiballZ_file.py'
Oct 11 04:37:38 compute-0 sudo[164994]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:37:38 compute-0 podman[164957]: 2025-10-11 04:37:38.555971479 +0000 UTC m=+0.070445781 container health_status 7d738271425699243ade5f883e5c9759d51b96fd0ce562173990a66d2fbaffdf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 11 04:37:38 compute-0 python3.9[165000]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:37:38 compute-0 sudo[164994]: pam_unix(sudo:session): session closed for user root
Oct 11 04:37:39 compute-0 sudo[165155]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqxsaydnpdmpmcrqlmaenspxwgfqcfao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157458.9007537-116-196562623761002/AnsiballZ_file.py'
Oct 11 04:37:39 compute-0 sudo[165155]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:37:39 compute-0 ceph-mon[74243]: pgmap v456: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:37:39 compute-0 python3.9[165157]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:37:39 compute-0 sudo[165155]: pam_unix(sudo:session): session closed for user root
Oct 11 04:37:39 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:37:40 compute-0 sudo[165307]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvwbmsfldcfvnczsydzsznhwtiejovti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157459.662848-116-232760661603206/AnsiballZ_file.py'
Oct 11 04:37:40 compute-0 sudo[165307]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:37:40 compute-0 python3.9[165309]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:37:40 compute-0 sudo[165307]: pam_unix(sudo:session): session closed for user root
Oct 11 04:37:40 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v457: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:37:40 compute-0 sudo[165459]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agrkbnktqttfqomdkuexuiewitpsatfm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157460.4292798-116-63385419120990/AnsiballZ_file.py'
Oct 11 04:37:40 compute-0 sudo[165459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:37:40 compute-0 python3.9[165461]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:37:40 compute-0 sudo[165459]: pam_unix(sudo:session): session closed for user root
Oct 11 04:37:41 compute-0 ceph-mon[74243]: pgmap v457: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:37:41 compute-0 sudo[165611]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knuuiicjzxxnkxshxxgsxdtlnvjitymm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157461.1934872-166-258956948501093/AnsiballZ_file.py'
Oct 11 04:37:41 compute-0 sudo[165611]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:37:41 compute-0 python3.9[165613]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:37:41 compute-0 sudo[165611]: pam_unix(sudo:session): session closed for user root
Oct 11 04:37:42 compute-0 sudo[165763]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsvpsmjeakdldfjtdgxjqufxpsdvgkna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157461.8628516-166-254305453566482/AnsiballZ_file.py'
Oct 11 04:37:42 compute-0 sudo[165763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:37:42 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v458: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:37:42 compute-0 python3.9[165765]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:37:42 compute-0 sudo[165763]: pam_unix(sudo:session): session closed for user root
Oct 11 04:37:42 compute-0 sudo[165915]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eubmaugkjwvpuygroljnevwyoiomsswx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157462.5762672-166-227328810447463/AnsiballZ_file.py'
Oct 11 04:37:42 compute-0 sudo[165915]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:37:43 compute-0 python3.9[165917]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:37:43 compute-0 sudo[165915]: pam_unix(sudo:session): session closed for user root
Oct 11 04:37:43 compute-0 ceph-mon[74243]: pgmap v458: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:37:43 compute-0 sudo[166067]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plpkyqcwhqfbkmwtelpvugentwdxibvz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157463.2288623-166-50481476854633/AnsiballZ_file.py'
Oct 11 04:37:43 compute-0 sudo[166067]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:37:43 compute-0 python3.9[166069]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:37:43 compute-0 sudo[166067]: pam_unix(sudo:session): session closed for user root
Oct 11 04:37:44 compute-0 sudo[166219]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxvawlngwhcptafvapqnqsywjxkphcjv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157463.8876996-166-85686877280629/AnsiballZ_file.py'
Oct 11 04:37:44 compute-0 sudo[166219]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:37:44 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v459: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:37:44 compute-0 python3.9[166221]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:37:44 compute-0 sudo[166219]: pam_unix(sudo:session): session closed for user root
Oct 11 04:37:44 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:37:44 compute-0 sudo[166371]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-veeuhatjwktjoiaovawgiuiijjxtvvqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157464.6332285-166-79230961000938/AnsiballZ_file.py'
Oct 11 04:37:44 compute-0 sudo[166371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:37:45 compute-0 python3.9[166373]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:37:45 compute-0 sudo[166371]: pam_unix(sudo:session): session closed for user root
Oct 11 04:37:45 compute-0 ceph-mon[74243]: pgmap v459: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:37:45 compute-0 sudo[166523]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbaaprmmdetumxczssbyectubiuelaie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157465.3514328-166-36696099422901/AnsiballZ_file.py'
Oct 11 04:37:45 compute-0 sudo[166523]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:37:45 compute-0 python3.9[166525]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:37:45 compute-0 sudo[166523]: pam_unix(sudo:session): session closed for user root
Oct 11 04:37:46 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v460: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:37:46 compute-0 sudo[166675]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spmqwjjjdtkvdstzephksglrcypfjuws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157466.1937354-217-261543213561109/AnsiballZ_command.py'
Oct 11 04:37:46 compute-0 sudo[166675]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:37:46 compute-0 python3.9[166677]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:37:46 compute-0 sudo[166675]: pam_unix(sudo:session): session closed for user root
Oct 11 04:37:47 compute-0 ceph-mon[74243]: pgmap v460: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:37:47 compute-0 python3.9[166829]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 11 04:37:48 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v461: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:37:48 compute-0 sudo[166979]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnskaqynyotxslwzzexaknqlhfiefhmy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157468.1420124-235-222805007762634/AnsiballZ_systemd_service.py'
Oct 11 04:37:48 compute-0 sudo[166979]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:37:48 compute-0 python3.9[166981]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 11 04:37:48 compute-0 systemd[1]: Reloading.
Oct 11 04:37:49 compute-0 systemd-rc-local-generator[167008]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:37:49 compute-0 systemd-sysv-generator[167012]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:37:49 compute-0 sudo[166979]: pam_unix(sudo:session): session closed for user root
Oct 11 04:37:49 compute-0 ceph-mon[74243]: pgmap v461: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:37:49 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:37:49 compute-0 sudo[167166]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rocytlyngkaulqgiwzekngoogkqbldii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157469.5275693-243-190290708625551/AnsiballZ_command.py'
Oct 11 04:37:49 compute-0 sudo[167166]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:37:50 compute-0 python3.9[167168]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:37:50 compute-0 sudo[167166]: pam_unix(sudo:session): session closed for user root
Oct 11 04:37:50 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v462: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:37:50 compute-0 sudo[167319]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlawoimguxkdrwffdfanjvvexnkbwzut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157470.3610344-243-188864738426556/AnsiballZ_command.py'
Oct 11 04:37:50 compute-0 sudo[167319]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:37:50 compute-0 python3.9[167321]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:37:51 compute-0 sudo[167319]: pam_unix(sudo:session): session closed for user root
Oct 11 04:37:51 compute-0 ceph-mon[74243]: pgmap v462: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:37:51 compute-0 sudo[167472]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfnuuatqihfcspyowyafhfrvxhlqhhje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157471.222218-243-210350064481865/AnsiballZ_command.py'
Oct 11 04:37:51 compute-0 sudo[167472]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:37:51 compute-0 python3.9[167474]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:37:51 compute-0 sudo[167472]: pam_unix(sudo:session): session closed for user root
Oct 11 04:37:52 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v463: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:37:52 compute-0 sudo[167625]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkdxnwqdagovpmwqwmdpotacotdjdjyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157472.0505016-243-241661253554458/AnsiballZ_command.py'
Oct 11 04:37:52 compute-0 sudo[167625]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:37:52 compute-0 python3.9[167627]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:37:52 compute-0 sudo[167625]: pam_unix(sudo:session): session closed for user root
Oct 11 04:37:53 compute-0 sudo[167778]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwfhkkdvkoishvuxgylnrzhbmzcvdfqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157472.8385928-243-77852174654916/AnsiballZ_command.py'
Oct 11 04:37:53 compute-0 sudo[167778]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:37:53 compute-0 python3.9[167780]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:37:53 compute-0 sudo[167778]: pam_unix(sudo:session): session closed for user root
Oct 11 04:37:53 compute-0 ceph-mon[74243]: pgmap v463: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:37:53 compute-0 sudo[167931]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snaqltbzssihjqqiuteabqoiefqawzfx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157473.6731675-243-180413256449610/AnsiballZ_command.py'
Oct 11 04:37:53 compute-0 sudo[167931]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:37:54 compute-0 python3.9[167933]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:37:54 compute-0 sudo[167931]: pam_unix(sudo:session): session closed for user root
Oct 11 04:37:54 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v464: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:37:54 compute-0 sudo[168084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ksmtklrouzhwutltjnpbfnsfwyomsoxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157474.3685343-243-231217724290594/AnsiballZ_command.py'
Oct 11 04:37:54 compute-0 sudo[168084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:37:54 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:37:54 compute-0 python3.9[168086]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:37:54 compute-0 sudo[168084]: pam_unix(sudo:session): session closed for user root
Oct 11 04:37:55 compute-0 ceph-mon[74243]: pgmap v464: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:37:56 compute-0 ceph-mgr[74542]: [balancer INFO root] Optimize plan auto_2025-10-11_04:37:56
Oct 11 04:37:56 compute-0 ceph-mgr[74542]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:37:56 compute-0 ceph-mgr[74542]: [balancer INFO root] do_upmap
Oct 11 04:37:56 compute-0 ceph-mgr[74542]: [balancer INFO root] pools ['volumes', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.control', 'images', '.rgw.root', 'vms', 'backups', '.mgr', 'default.rgw.log', 'cephfs.cephfs.data']
Oct 11 04:37:56 compute-0 ceph-mgr[74542]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:37:56 compute-0 sudo[168237]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yknifvqtsgggknszyrwkfiepblomhyuo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157475.4801893-297-8125707370872/AnsiballZ_getent.py'
Oct 11 04:37:56 compute-0 sudo[168237]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:37:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:37:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:37:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:37:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:37:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:37:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:37:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:37:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:37:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:37:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:37:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:37:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:37:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:37:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:37:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:37:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:37:56 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v465: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:37:56 compute-0 python3.9[168239]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Oct 11 04:37:56 compute-0 sudo[168237]: pam_unix(sudo:session): session closed for user root
Oct 11 04:37:57 compute-0 sudo[168390]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbyeyqecbevptcivzjjlwdxfixcvzkvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157476.621103-305-136911757932421/AnsiballZ_group.py'
Oct 11 04:37:57 compute-0 sudo[168390]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:37:57 compute-0 python3.9[168392]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 11 04:37:57 compute-0 groupadd[168393]: group added to /etc/group: name=libvirt, GID=42473
Oct 11 04:37:57 compute-0 groupadd[168393]: group added to /etc/gshadow: name=libvirt
Oct 11 04:37:57 compute-0 groupadd[168393]: new group: name=libvirt, GID=42473
Oct 11 04:37:57 compute-0 ceph-mon[74243]: pgmap v465: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:37:57 compute-0 sudo[168390]: pam_unix(sudo:session): session closed for user root
Oct 11 04:37:58 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v466: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 0 B/s wr, 30 op/s
Oct 11 04:37:58 compute-0 sudo[168548]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qammjpvxsztujgtsgwlhjtjlkudbxjpv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157477.849545-313-192585900324128/AnsiballZ_user.py'
Oct 11 04:37:58 compute-0 sudo[168548]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:37:58 compute-0 python3.9[168550]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct 11 04:37:58 compute-0 useradd[168552]: new user: name=libvirt, UID=42473, GID=42473, home=/home/libvirt, shell=/sbin/nologin, from=/dev/pts/0
Oct 11 04:37:58 compute-0 sudo[168548]: pam_unix(sudo:session): session closed for user root
Oct 11 04:37:59 compute-0 ceph-mon[74243]: pgmap v466: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 0 B/s wr, 30 op/s
Oct 11 04:37:59 compute-0 sudo[168708]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmrzcqzlkclhbokbctlcwahmcxhvnhre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157479.263888-324-269302505019528/AnsiballZ_setup.py'
Oct 11 04:37:59 compute-0 sudo[168708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:37:59 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:37:59 compute-0 python3.9[168710]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 11 04:38:00 compute-0 sudo[168708]: pam_unix(sudo:session): session closed for user root
Oct 11 04:38:00 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v467: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 0 B/s wr, 30 op/s
Oct 11 04:38:00 compute-0 ceph-mon[74243]: pgmap v467: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 0 B/s wr, 30 op/s
Oct 11 04:38:00 compute-0 sudo[168792]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-loegnglxsbdtruxjnzjjobdvesscvhso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157479.263888-324-269302505019528/AnsiballZ_dnf.py'
Oct 11 04:38:00 compute-0 sudo[168792]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:38:01 compute-0 python3.9[168794]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 11 04:38:02 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v468: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 11 04:38:03 compute-0 ceph-mon[74243]: pgmap v468: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 11 04:38:03 compute-0 podman[168802]: 2025-10-11 04:38:03.456464321 +0000 UTC m=+0.107996778 container health_status 086abfbe8dbe9e4742c5d0e8540a7653c1c0a5c00ecda3b6e948040a718938cb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 11 04:38:04 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v469: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 11 04:38:04 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:38:05 compute-0 ceph-mon[74243]: pgmap v469: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 11 04:38:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:38:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:38:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:38:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:38:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:38:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:38:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:38:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:38:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:38:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:38:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:38:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:38:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 11 04:38:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:38:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:38:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:38:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:38:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:38:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:38:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:38:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:38:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:38:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 04:38:06 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v470: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 11 04:38:07 compute-0 ceph-mon[74243]: pgmap v470: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 11 04:38:08 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v471: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 11 04:38:09 compute-0 podman[168876]: 2025-10-11 04:38:09.394410527 +0000 UTC m=+0.048343851 container health_status 7d738271425699243ade5f883e5c9759d51b96fd0ce562173990a66d2fbaffdf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 11 04:38:09 compute-0 ceph-mon[74243]: pgmap v471: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 11 04:38:09 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:38:10 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v472: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 0 B/s wr, 28 op/s
Oct 11 04:38:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:38:10.997 161813 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:38:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:38:10.998 161813 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:38:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:38:10.998 161813 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:38:11 compute-0 ceph-mon[74243]: pgmap v472: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 0 B/s wr, 28 op/s
Oct 11 04:38:12 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v473: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 0 B/s wr, 28 op/s
Oct 11 04:38:13 compute-0 ceph-mon[74243]: pgmap v473: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 0 B/s wr, 28 op/s
Oct 11 04:38:14 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v474: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:38:14 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:38:14 compute-0 sudo[169024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:38:14 compute-0 sudo[169024]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:38:14 compute-0 sudo[169024]: pam_unix(sudo:session): session closed for user root
Oct 11 04:38:15 compute-0 sudo[169049]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:38:15 compute-0 sudo[169049]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:38:15 compute-0 sudo[169049]: pam_unix(sudo:session): session closed for user root
Oct 11 04:38:15 compute-0 sudo[169074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:38:15 compute-0 sudo[169074]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:38:15 compute-0 sudo[169074]: pam_unix(sudo:session): session closed for user root
Oct 11 04:38:15 compute-0 sudo[169099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 04:38:15 compute-0 sudo[169099]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:38:15 compute-0 ceph-mon[74243]: pgmap v474: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:38:15 compute-0 sudo[169099]: pam_unix(sudo:session): session closed for user root
Oct 11 04:38:15 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 11 04:38:15 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 11 04:38:15 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:38:15 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:38:15 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:38:15 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:38:15 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:38:15 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:38:15 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev 593b784d-bf25-4d49-b436-e28f8ff5b826 does not exist
Oct 11 04:38:15 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev e2b9210a-47eb-47d0-99f5-76786a3d7f74 does not exist
Oct 11 04:38:15 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev 47f64c4d-1367-490c-aba7-951e7c9f1199 does not exist
Oct 11 04:38:15 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:38:15 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:38:15 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:38:15 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:38:15 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:38:15 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:38:15 compute-0 sudo[169156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:38:15 compute-0 sudo[169156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:38:15 compute-0 sudo[169156]: pam_unix(sudo:session): session closed for user root
Oct 11 04:38:15 compute-0 sudo[169181]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:38:16 compute-0 sudo[169181]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:38:16 compute-0 sudo[169181]: pam_unix(sudo:session): session closed for user root
Oct 11 04:38:16 compute-0 sudo[169206]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:38:16 compute-0 sudo[169206]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:38:16 compute-0 sudo[169206]: pam_unix(sudo:session): session closed for user root
Oct 11 04:38:16 compute-0 sudo[169231]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 04:38:16 compute-0 sudo[169231]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:38:16 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v475: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:38:16 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 11 04:38:16 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:38:16 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:38:16 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:38:16 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:38:16 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:38:16 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:38:16 compute-0 podman[169296]: 2025-10-11 04:38:16.556917848 +0000 UTC m=+0.060426967 container create 7c57e4ac60d5072508447746c928edca112187c3f74cc493b94c0b1ab2429922 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_solomon, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 11 04:38:16 compute-0 systemd[1]: Started libpod-conmon-7c57e4ac60d5072508447746c928edca112187c3f74cc493b94c0b1ab2429922.scope.
Oct 11 04:38:16 compute-0 podman[169296]: 2025-10-11 04:38:16.529700033 +0000 UTC m=+0.033209202 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:38:16 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:38:16 compute-0 podman[169296]: 2025-10-11 04:38:16.681632773 +0000 UTC m=+0.185141942 container init 7c57e4ac60d5072508447746c928edca112187c3f74cc493b94c0b1ab2429922 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_solomon, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:38:16 compute-0 podman[169296]: 2025-10-11 04:38:16.693958944 +0000 UTC m=+0.197468063 container start 7c57e4ac60d5072508447746c928edca112187c3f74cc493b94c0b1ab2429922 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_solomon, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 11 04:38:16 compute-0 podman[169296]: 2025-10-11 04:38:16.701746395 +0000 UTC m=+0.205255564 container attach 7c57e4ac60d5072508447746c928edca112187c3f74cc493b94c0b1ab2429922 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_solomon, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:38:16 compute-0 intelligent_solomon[169313]: 167 167
Oct 11 04:38:16 compute-0 systemd[1]: libpod-7c57e4ac60d5072508447746c928edca112187c3f74cc493b94c0b1ab2429922.scope: Deactivated successfully.
Oct 11 04:38:16 compute-0 podman[169296]: 2025-10-11 04:38:16.714036185 +0000 UTC m=+0.217545334 container died 7c57e4ac60d5072508447746c928edca112187c3f74cc493b94c0b1ab2429922 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_solomon, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 11 04:38:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-55ed09034336a1d0fea1f2d19a948e8ebdb807fe8ebfdf8a9c6c4fdfefc1b0c7-merged.mount: Deactivated successfully.
Oct 11 04:38:16 compute-0 podman[169296]: 2025-10-11 04:38:16.762304733 +0000 UTC m=+0.265813842 container remove 7c57e4ac60d5072508447746c928edca112187c3f74cc493b94c0b1ab2429922 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_solomon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:38:16 compute-0 systemd[1]: libpod-conmon-7c57e4ac60d5072508447746c928edca112187c3f74cc493b94c0b1ab2429922.scope: Deactivated successfully.
Oct 11 04:38:16 compute-0 podman[169336]: 2025-10-11 04:38:16.941765936 +0000 UTC m=+0.060131600 container create c89a837a4e232551873fa74ad68c16fd3a6486faf2bce7a70a858d23d99ee4b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_kowalevski, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 11 04:38:16 compute-0 systemd[1]: Started libpod-conmon-c89a837a4e232551873fa74ad68c16fd3a6486faf2bce7a70a858d23d99ee4b5.scope.
Oct 11 04:38:17 compute-0 podman[169336]: 2025-10-11 04:38:16.916788186 +0000 UTC m=+0.035153910 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:38:17 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:38:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/134893d9a0097cae2d1913aaa689603142b7db790adf3d17ba48e2bd418081e4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:38:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/134893d9a0097cae2d1913aaa689603142b7db790adf3d17ba48e2bd418081e4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:38:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/134893d9a0097cae2d1913aaa689603142b7db790adf3d17ba48e2bd418081e4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:38:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/134893d9a0097cae2d1913aaa689603142b7db790adf3d17ba48e2bd418081e4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:38:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/134893d9a0097cae2d1913aaa689603142b7db790adf3d17ba48e2bd418081e4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:38:17 compute-0 podman[169336]: 2025-10-11 04:38:17.058168178 +0000 UTC m=+0.176533902 container init c89a837a4e232551873fa74ad68c16fd3a6486faf2bce7a70a858d23d99ee4b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_kowalevski, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 11 04:38:17 compute-0 podman[169336]: 2025-10-11 04:38:17.072708213 +0000 UTC m=+0.191073837 container start c89a837a4e232551873fa74ad68c16fd3a6486faf2bce7a70a858d23d99ee4b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_kowalevski, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 11 04:38:17 compute-0 podman[169336]: 2025-10-11 04:38:17.078211098 +0000 UTC m=+0.196576782 container attach c89a837a4e232551873fa74ad68c16fd3a6486faf2bce7a70a858d23d99ee4b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_kowalevski, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 11 04:38:17 compute-0 ceph-mon[74243]: pgmap v475: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:38:18 compute-0 dazzling_kowalevski[169352]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:38:18 compute-0 dazzling_kowalevski[169352]: --> relative data size: 1.0
Oct 11 04:38:18 compute-0 dazzling_kowalevski[169352]: --> All data devices are unavailable
Oct 11 04:38:18 compute-0 systemd[1]: libpod-c89a837a4e232551873fa74ad68c16fd3a6486faf2bce7a70a858d23d99ee4b5.scope: Deactivated successfully.
Oct 11 04:38:18 compute-0 systemd[1]: libpod-c89a837a4e232551873fa74ad68c16fd3a6486faf2bce7a70a858d23d99ee4b5.scope: Consumed 1.053s CPU time.
Oct 11 04:38:18 compute-0 podman[169336]: 2025-10-11 04:38:18.21636563 +0000 UTC m=+1.334731354 container died c89a837a4e232551873fa74ad68c16fd3a6486faf2bce7a70a858d23d99ee4b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_kowalevski, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 11 04:38:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-134893d9a0097cae2d1913aaa689603142b7db790adf3d17ba48e2bd418081e4-merged.mount: Deactivated successfully.
Oct 11 04:38:18 compute-0 podman[169336]: 2025-10-11 04:38:18.292592232 +0000 UTC m=+1.410957876 container remove c89a837a4e232551873fa74ad68c16fd3a6486faf2bce7a70a858d23d99ee4b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_kowalevski, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 11 04:38:18 compute-0 systemd[1]: libpod-conmon-c89a837a4e232551873fa74ad68c16fd3a6486faf2bce7a70a858d23d99ee4b5.scope: Deactivated successfully.
Oct 11 04:38:18 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v476: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:38:18 compute-0 sudo[169231]: pam_unix(sudo:session): session closed for user root
Oct 11 04:38:18 compute-0 sudo[169399]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:38:18 compute-0 sudo[169399]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:38:18 compute-0 sudo[169399]: pam_unix(sudo:session): session closed for user root
Oct 11 04:38:18 compute-0 sudo[169424]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:38:18 compute-0 sudo[169424]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:38:18 compute-0 sudo[169424]: pam_unix(sudo:session): session closed for user root
Oct 11 04:38:18 compute-0 sudo[169449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:38:18 compute-0 sudo[169449]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:38:18 compute-0 sudo[169449]: pam_unix(sudo:session): session closed for user root
Oct 11 04:38:18 compute-0 sudo[169474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -- lvm list --format json
Oct 11 04:38:18 compute-0 sudo[169474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:38:19 compute-0 podman[169537]: 2025-10-11 04:38:19.060291187 +0000 UTC m=+0.070616085 container create b6290ab4557228ac7e7329146d52546d0110dd5a6fbf1a0e42002fc5d09c134c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_moore, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:38:19 compute-0 podman[169537]: 2025-10-11 04:38:19.027107367 +0000 UTC m=+0.037432305 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:38:19 compute-0 systemd[1]: Started libpod-conmon-b6290ab4557228ac7e7329146d52546d0110dd5a6fbf1a0e42002fc5d09c134c.scope.
Oct 11 04:38:19 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:38:19 compute-0 podman[169537]: 2025-10-11 04:38:19.172686192 +0000 UTC m=+0.183011150 container init b6290ab4557228ac7e7329146d52546d0110dd5a6fbf1a0e42002fc5d09c134c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_moore, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:38:19 compute-0 podman[169537]: 2025-10-11 04:38:19.184364127 +0000 UTC m=+0.194689025 container start b6290ab4557228ac7e7329146d52546d0110dd5a6fbf1a0e42002fc5d09c134c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_moore, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True)
Oct 11 04:38:19 compute-0 podman[169537]: 2025-10-11 04:38:19.188253412 +0000 UTC m=+0.198578370 container attach b6290ab4557228ac7e7329146d52546d0110dd5a6fbf1a0e42002fc5d09c134c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_moore, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 11 04:38:19 compute-0 amazing_moore[169554]: 167 167
Oct 11 04:38:19 compute-0 systemd[1]: libpod-b6290ab4557228ac7e7329146d52546d0110dd5a6fbf1a0e42002fc5d09c134c.scope: Deactivated successfully.
Oct 11 04:38:19 compute-0 podman[169537]: 2025-10-11 04:38:19.192136397 +0000 UTC m=+0.202461265 container died b6290ab4557228ac7e7329146d52546d0110dd5a6fbf1a0e42002fc5d09c134c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 11 04:38:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-f2bea970e2164e2d2cca2f1a9f4cb729b929c4ae03510583c3a831c5f115e81a-merged.mount: Deactivated successfully.
Oct 11 04:38:19 compute-0 podman[169537]: 2025-10-11 04:38:19.238275784 +0000 UTC m=+0.248600652 container remove b6290ab4557228ac7e7329146d52546d0110dd5a6fbf1a0e42002fc5d09c134c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_moore, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 11 04:38:19 compute-0 systemd[1]: libpod-conmon-b6290ab4557228ac7e7329146d52546d0110dd5a6fbf1a0e42002fc5d09c134c.scope: Deactivated successfully.
Oct 11 04:38:19 compute-0 ceph-mon[74243]: pgmap v476: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:38:19 compute-0 podman[169578]: 2025-10-11 04:38:19.501502481 +0000 UTC m=+0.074437168 container create 6b12498f3ae6fd1b8eaff12bb21849eb2997e29d07324f4ade35457b18a63acb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_noyce, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True)
Oct 11 04:38:19 compute-0 systemd[1]: Started libpod-conmon-6b12498f3ae6fd1b8eaff12bb21849eb2997e29d07324f4ade35457b18a63acb.scope.
Oct 11 04:38:19 compute-0 podman[169578]: 2025-10-11 04:38:19.472416161 +0000 UTC m=+0.045350908 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:38:19 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:38:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c465e4427e55219b645b602708e31c985136ca07631d83e9dfdf9d70aace8749/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:38:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c465e4427e55219b645b602708e31c985136ca07631d83e9dfdf9d70aace8749/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:38:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c465e4427e55219b645b602708e31c985136ca07631d83e9dfdf9d70aace8749/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:38:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c465e4427e55219b645b602708e31c985136ca07631d83e9dfdf9d70aace8749/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:38:19 compute-0 podman[169578]: 2025-10-11 04:38:19.651076133 +0000 UTC m=+0.224010810 container init 6b12498f3ae6fd1b8eaff12bb21849eb2997e29d07324f4ade35457b18a63acb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_noyce, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 11 04:38:19 compute-0 podman[169578]: 2025-10-11 04:38:19.662322678 +0000 UTC m=+0.235257355 container start 6b12498f3ae6fd1b8eaff12bb21849eb2997e29d07324f4ade35457b18a63acb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_noyce, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 11 04:38:19 compute-0 podman[169578]: 2025-10-11 04:38:19.666368717 +0000 UTC m=+0.239303414 container attach 6b12498f3ae6fd1b8eaff12bb21849eb2997e29d07324f4ade35457b18a63acb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_noyce, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:38:19 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:38:20 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v477: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:38:20 compute-0 distracted_noyce[169595]: {
Oct 11 04:38:20 compute-0 distracted_noyce[169595]:     "0": [
Oct 11 04:38:20 compute-0 distracted_noyce[169595]:         {
Oct 11 04:38:20 compute-0 distracted_noyce[169595]:             "devices": [
Oct 11 04:38:20 compute-0 distracted_noyce[169595]:                 "/dev/loop3"
Oct 11 04:38:20 compute-0 distracted_noyce[169595]:             ],
Oct 11 04:38:20 compute-0 distracted_noyce[169595]:             "lv_name": "ceph_lv0",
Oct 11 04:38:20 compute-0 distracted_noyce[169595]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:38:20 compute-0 distracted_noyce[169595]:             "lv_size": "21470642176",
Oct 11 04:38:20 compute-0 distracted_noyce[169595]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29ed28f5-c2da-4c6f-bb64-dc7391248f4a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:38:20 compute-0 distracted_noyce[169595]:             "lv_uuid": "SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf",
Oct 11 04:38:20 compute-0 distracted_noyce[169595]:             "name": "ceph_lv0",
Oct 11 04:38:20 compute-0 distracted_noyce[169595]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:38:20 compute-0 distracted_noyce[169595]:             "tags": {
Oct 11 04:38:20 compute-0 distracted_noyce[169595]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:38:20 compute-0 distracted_noyce[169595]:                 "ceph.block_uuid": "SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf",
Oct 11 04:38:20 compute-0 distracted_noyce[169595]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:38:20 compute-0 distracted_noyce[169595]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:38:20 compute-0 distracted_noyce[169595]:                 "ceph.cluster_name": "ceph",
Oct 11 04:38:20 compute-0 distracted_noyce[169595]:                 "ceph.crush_device_class": "",
Oct 11 04:38:20 compute-0 distracted_noyce[169595]:                 "ceph.encrypted": "0",
Oct 11 04:38:20 compute-0 distracted_noyce[169595]:                 "ceph.osd_fsid": "29ed28f5-c2da-4c6f-bb64-dc7391248f4a",
Oct 11 04:38:20 compute-0 distracted_noyce[169595]:                 "ceph.osd_id": "0",
Oct 11 04:38:20 compute-0 distracted_noyce[169595]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:38:20 compute-0 distracted_noyce[169595]:                 "ceph.type": "block",
Oct 11 04:38:20 compute-0 distracted_noyce[169595]:                 "ceph.vdo": "0"
Oct 11 04:38:20 compute-0 distracted_noyce[169595]:             },
Oct 11 04:38:20 compute-0 distracted_noyce[169595]:             "type": "block",
Oct 11 04:38:20 compute-0 distracted_noyce[169595]:             "vg_name": "ceph_vg0"
Oct 11 04:38:20 compute-0 distracted_noyce[169595]:         }
Oct 11 04:38:20 compute-0 distracted_noyce[169595]:     ],
Oct 11 04:38:20 compute-0 distracted_noyce[169595]:     "1": [
Oct 11 04:38:20 compute-0 distracted_noyce[169595]:         {
Oct 11 04:38:20 compute-0 distracted_noyce[169595]:             "devices": [
Oct 11 04:38:20 compute-0 distracted_noyce[169595]:                 "/dev/loop4"
Oct 11 04:38:20 compute-0 distracted_noyce[169595]:             ],
Oct 11 04:38:20 compute-0 distracted_noyce[169595]:             "lv_name": "ceph_lv1",
Oct 11 04:38:20 compute-0 distracted_noyce[169595]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:38:20 compute-0 distracted_noyce[169595]:             "lv_size": "21470642176",
Oct 11 04:38:20 compute-0 distracted_noyce[169595]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=235849fc-4683-43e5-9b6a-a0d6f8d1cee8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:38:20 compute-0 distracted_noyce[169595]:             "lv_uuid": "N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol",
Oct 11 04:38:20 compute-0 distracted_noyce[169595]:             "name": "ceph_lv1",
Oct 11 04:38:20 compute-0 distracted_noyce[169595]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:38:20 compute-0 distracted_noyce[169595]:             "tags": {
Oct 11 04:38:20 compute-0 distracted_noyce[169595]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:38:20 compute-0 distracted_noyce[169595]:                 "ceph.block_uuid": "N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol",
Oct 11 04:38:20 compute-0 distracted_noyce[169595]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:38:20 compute-0 distracted_noyce[169595]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:38:20 compute-0 distracted_noyce[169595]:                 "ceph.cluster_name": "ceph",
Oct 11 04:38:20 compute-0 distracted_noyce[169595]:                 "ceph.crush_device_class": "",
Oct 11 04:38:20 compute-0 distracted_noyce[169595]:                 "ceph.encrypted": "0",
Oct 11 04:38:20 compute-0 distracted_noyce[169595]:                 "ceph.osd_fsid": "235849fc-4683-43e5-9b6a-a0d6f8d1cee8",
Oct 11 04:38:20 compute-0 distracted_noyce[169595]:                 "ceph.osd_id": "1",
Oct 11 04:38:20 compute-0 distracted_noyce[169595]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:38:20 compute-0 distracted_noyce[169595]:                 "ceph.type": "block",
Oct 11 04:38:20 compute-0 distracted_noyce[169595]:                 "ceph.vdo": "0"
Oct 11 04:38:20 compute-0 distracted_noyce[169595]:             },
Oct 11 04:38:20 compute-0 distracted_noyce[169595]:             "type": "block",
Oct 11 04:38:20 compute-0 distracted_noyce[169595]:             "vg_name": "ceph_vg1"
Oct 11 04:38:20 compute-0 distracted_noyce[169595]:         }
Oct 11 04:38:20 compute-0 distracted_noyce[169595]:     ],
Oct 11 04:38:20 compute-0 distracted_noyce[169595]:     "2": [
Oct 11 04:38:20 compute-0 distracted_noyce[169595]:         {
Oct 11 04:38:20 compute-0 distracted_noyce[169595]:             "devices": [
Oct 11 04:38:20 compute-0 distracted_noyce[169595]:                 "/dev/loop5"
Oct 11 04:38:20 compute-0 distracted_noyce[169595]:             ],
Oct 11 04:38:20 compute-0 distracted_noyce[169595]:             "lv_name": "ceph_lv2",
Oct 11 04:38:20 compute-0 distracted_noyce[169595]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:38:20 compute-0 distracted_noyce[169595]:             "lv_size": "21470642176",
Oct 11 04:38:20 compute-0 distracted_noyce[169595]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=30486846-2cc7-4787-8e36-0fb42ef328c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:38:20 compute-0 distracted_noyce[169595]:             "lv_uuid": "HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a",
Oct 11 04:38:20 compute-0 distracted_noyce[169595]:             "name": "ceph_lv2",
Oct 11 04:38:20 compute-0 distracted_noyce[169595]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:38:20 compute-0 distracted_noyce[169595]:             "tags": {
Oct 11 04:38:20 compute-0 distracted_noyce[169595]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:38:20 compute-0 distracted_noyce[169595]:                 "ceph.block_uuid": "HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a",
Oct 11 04:38:20 compute-0 distracted_noyce[169595]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:38:20 compute-0 distracted_noyce[169595]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:38:20 compute-0 distracted_noyce[169595]:                 "ceph.cluster_name": "ceph",
Oct 11 04:38:20 compute-0 distracted_noyce[169595]:                 "ceph.crush_device_class": "",
Oct 11 04:38:20 compute-0 distracted_noyce[169595]:                 "ceph.encrypted": "0",
Oct 11 04:38:20 compute-0 distracted_noyce[169595]:                 "ceph.osd_fsid": "30486846-2cc7-4787-8e36-0fb42ef328c5",
Oct 11 04:38:20 compute-0 distracted_noyce[169595]:                 "ceph.osd_id": "2",
Oct 11 04:38:20 compute-0 distracted_noyce[169595]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:38:20 compute-0 distracted_noyce[169595]:                 "ceph.type": "block",
Oct 11 04:38:20 compute-0 distracted_noyce[169595]:                 "ceph.vdo": "0"
Oct 11 04:38:20 compute-0 distracted_noyce[169595]:             },
Oct 11 04:38:20 compute-0 distracted_noyce[169595]:             "type": "block",
Oct 11 04:38:20 compute-0 distracted_noyce[169595]:             "vg_name": "ceph_vg2"
Oct 11 04:38:20 compute-0 distracted_noyce[169595]:         }
Oct 11 04:38:20 compute-0 distracted_noyce[169595]:     ]
Oct 11 04:38:20 compute-0 distracted_noyce[169595]: }
Oct 11 04:38:20 compute-0 systemd[1]: libpod-6b12498f3ae6fd1b8eaff12bb21849eb2997e29d07324f4ade35457b18a63acb.scope: Deactivated successfully.
Oct 11 04:38:20 compute-0 podman[169578]: 2025-10-11 04:38:20.448324072 +0000 UTC m=+1.021258749 container died 6b12498f3ae6fd1b8eaff12bb21849eb2997e29d07324f4ade35457b18a63acb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_noyce, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Oct 11 04:38:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-c465e4427e55219b645b602708e31c985136ca07631d83e9dfdf9d70aace8749-merged.mount: Deactivated successfully.
Oct 11 04:38:20 compute-0 podman[169578]: 2025-10-11 04:38:20.527764282 +0000 UTC m=+1.100698949 container remove 6b12498f3ae6fd1b8eaff12bb21849eb2997e29d07324f4ade35457b18a63acb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_noyce, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:38:20 compute-0 systemd[1]: libpod-conmon-6b12498f3ae6fd1b8eaff12bb21849eb2997e29d07324f4ade35457b18a63acb.scope: Deactivated successfully.
Oct 11 04:38:20 compute-0 sudo[169474]: pam_unix(sudo:session): session closed for user root
Oct 11 04:38:20 compute-0 sudo[169619]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:38:20 compute-0 sudo[169619]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:38:20 compute-0 sudo[169619]: pam_unix(sudo:session): session closed for user root
Oct 11 04:38:20 compute-0 sudo[169644]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:38:20 compute-0 sudo[169644]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:38:20 compute-0 sudo[169644]: pam_unix(sudo:session): session closed for user root
Oct 11 04:38:20 compute-0 sudo[169669]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:38:20 compute-0 sudo[169669]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:38:20 compute-0 sudo[169669]: pam_unix(sudo:session): session closed for user root
Oct 11 04:38:20 compute-0 sudo[169694]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -- raw list --format json
Oct 11 04:38:20 compute-0 sudo[169694]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:38:21 compute-0 podman[169760]: 2025-10-11 04:38:21.354259894 +0000 UTC m=+0.068646447 container create 947ec327fd33de64b0f4a7a35f9a6932196f02076bde8b9c9b2d4e7ac6cb22bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_roentgen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 11 04:38:21 compute-0 systemd[1]: Started libpod-conmon-947ec327fd33de64b0f4a7a35f9a6932196f02076bde8b9c9b2d4e7ac6cb22bb.scope.
Oct 11 04:38:21 compute-0 podman[169760]: 2025-10-11 04:38:21.326575998 +0000 UTC m=+0.040962611 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:38:21 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:38:21 compute-0 podman[169760]: 2025-10-11 04:38:21.449242584 +0000 UTC m=+0.163629127 container init 947ec327fd33de64b0f4a7a35f9a6932196f02076bde8b9c9b2d4e7ac6cb22bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_roentgen, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 11 04:38:21 compute-0 podman[169760]: 2025-10-11 04:38:21.461045322 +0000 UTC m=+0.175431875 container start 947ec327fd33de64b0f4a7a35f9a6932196f02076bde8b9c9b2d4e7ac6cb22bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_roentgen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:38:21 compute-0 podman[169760]: 2025-10-11 04:38:21.464989958 +0000 UTC m=+0.179376571 container attach 947ec327fd33de64b0f4a7a35f9a6932196f02076bde8b9c9b2d4e7ac6cb22bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_roentgen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:38:21 compute-0 ceph-mon[74243]: pgmap v477: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:38:21 compute-0 amazing_roentgen[169777]: 167 167
Oct 11 04:38:21 compute-0 systemd[1]: libpod-947ec327fd33de64b0f4a7a35f9a6932196f02076bde8b9c9b2d4e7ac6cb22bb.scope: Deactivated successfully.
Oct 11 04:38:21 compute-0 podman[169760]: 2025-10-11 04:38:21.472586464 +0000 UTC m=+0.186973017 container died 947ec327fd33de64b0f4a7a35f9a6932196f02076bde8b9c9b2d4e7ac6cb22bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_roentgen, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:38:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-b76daca6d5d92d71bd31eb4fd2ab9d033e99bb179e07345246f774ac5d5af96d-merged.mount: Deactivated successfully.
Oct 11 04:38:21 compute-0 podman[169760]: 2025-10-11 04:38:21.537816506 +0000 UTC m=+0.252203049 container remove 947ec327fd33de64b0f4a7a35f9a6932196f02076bde8b9c9b2d4e7ac6cb22bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_roentgen, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:38:21 compute-0 systemd[1]: libpod-conmon-947ec327fd33de64b0f4a7a35f9a6932196f02076bde8b9c9b2d4e7ac6cb22bb.scope: Deactivated successfully.
Oct 11 04:38:21 compute-0 podman[169801]: 2025-10-11 04:38:21.787514284 +0000 UTC m=+0.068484003 container create 1bdd805e557dd0c67133d5ac8e61938a0742663248640867d3c45dc23f3c805c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 11 04:38:21 compute-0 systemd[1]: Started libpod-conmon-1bdd805e557dd0c67133d5ac8e61938a0742663248640867d3c45dc23f3c805c.scope.
Oct 11 04:38:21 compute-0 podman[169801]: 2025-10-11 04:38:21.758088605 +0000 UTC m=+0.039058384 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:38:21 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:38:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f9e0e2152331df03c9aa9cdb95fcbc5b6c87cf326556b451eef0c54d511d75d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:38:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f9e0e2152331df03c9aa9cdb95fcbc5b6c87cf326556b451eef0c54d511d75d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:38:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f9e0e2152331df03c9aa9cdb95fcbc5b6c87cf326556b451eef0c54d511d75d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:38:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f9e0e2152331df03c9aa9cdb95fcbc5b6c87cf326556b451eef0c54d511d75d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:38:21 compute-0 podman[169801]: 2025-10-11 04:38:21.900234646 +0000 UTC m=+0.181204395 container init 1bdd805e557dd0c67133d5ac8e61938a0742663248640867d3c45dc23f3c805c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_thompson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:38:21 compute-0 podman[169801]: 2025-10-11 04:38:21.913395678 +0000 UTC m=+0.194365397 container start 1bdd805e557dd0c67133d5ac8e61938a0742663248640867d3c45dc23f3c805c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_thompson, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:38:21 compute-0 podman[169801]: 2025-10-11 04:38:21.916914874 +0000 UTC m=+0.197884603 container attach 1bdd805e557dd0c67133d5ac8e61938a0742663248640867d3c45dc23f3c805c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_thompson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:38:22 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v478: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:38:22 compute-0 eager_thompson[169818]: {
Oct 11 04:38:22 compute-0 eager_thompson[169818]:     "235849fc-4683-43e5-9b6a-a0d6f8d1cee8": {
Oct 11 04:38:22 compute-0 eager_thompson[169818]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:38:22 compute-0 eager_thompson[169818]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:38:22 compute-0 eager_thompson[169818]:         "osd_id": 1,
Oct 11 04:38:22 compute-0 eager_thompson[169818]:         "osd_uuid": "235849fc-4683-43e5-9b6a-a0d6f8d1cee8",
Oct 11 04:38:22 compute-0 eager_thompson[169818]:         "type": "bluestore"
Oct 11 04:38:22 compute-0 eager_thompson[169818]:     },
Oct 11 04:38:22 compute-0 eager_thompson[169818]:     "29ed28f5-c2da-4c6f-bb64-dc7391248f4a": {
Oct 11 04:38:22 compute-0 eager_thompson[169818]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:38:22 compute-0 eager_thompson[169818]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:38:22 compute-0 eager_thompson[169818]:         "osd_id": 0,
Oct 11 04:38:22 compute-0 eager_thompson[169818]:         "osd_uuid": "29ed28f5-c2da-4c6f-bb64-dc7391248f4a",
Oct 11 04:38:22 compute-0 eager_thompson[169818]:         "type": "bluestore"
Oct 11 04:38:22 compute-0 eager_thompson[169818]:     },
Oct 11 04:38:22 compute-0 eager_thompson[169818]:     "30486846-2cc7-4787-8e36-0fb42ef328c5": {
Oct 11 04:38:22 compute-0 eager_thompson[169818]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:38:22 compute-0 eager_thompson[169818]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:38:22 compute-0 eager_thompson[169818]:         "osd_id": 2,
Oct 11 04:38:22 compute-0 eager_thompson[169818]:         "osd_uuid": "30486846-2cc7-4787-8e36-0fb42ef328c5",
Oct 11 04:38:22 compute-0 eager_thompson[169818]:         "type": "bluestore"
Oct 11 04:38:22 compute-0 eager_thompson[169818]:     }
Oct 11 04:38:22 compute-0 eager_thompson[169818]: }
Oct 11 04:38:23 compute-0 systemd[1]: libpod-1bdd805e557dd0c67133d5ac8e61938a0742663248640867d3c45dc23f3c805c.scope: Deactivated successfully.
Oct 11 04:38:23 compute-0 podman[169801]: 2025-10-11 04:38:23.024264153 +0000 UTC m=+1.305233852 container died 1bdd805e557dd0c67133d5ac8e61938a0742663248640867d3c45dc23f3c805c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:38:23 compute-0 systemd[1]: libpod-1bdd805e557dd0c67133d5ac8e61938a0742663248640867d3c45dc23f3c805c.scope: Consumed 1.108s CPU time.
Oct 11 04:38:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-6f9e0e2152331df03c9aa9cdb95fcbc5b6c87cf326556b451eef0c54d511d75d-merged.mount: Deactivated successfully.
Oct 11 04:38:23 compute-0 podman[169801]: 2025-10-11 04:38:23.104574184 +0000 UTC m=+1.385543913 container remove 1bdd805e557dd0c67133d5ac8e61938a0742663248640867d3c45dc23f3c805c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 11 04:38:23 compute-0 systemd[1]: libpod-conmon-1bdd805e557dd0c67133d5ac8e61938a0742663248640867d3c45dc23f3c805c.scope: Deactivated successfully.
Oct 11 04:38:23 compute-0 sudo[169694]: pam_unix(sudo:session): session closed for user root
Oct 11 04:38:23 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:38:23 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:38:23 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:38:23 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:38:23 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev 4813bd24-4aaa-48fe-adda-9273725415a9 does not exist
Oct 11 04:38:23 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev fb6d6c6b-0eb0-4997-85fa-9d32b63bde86 does not exist
Oct 11 04:38:23 compute-0 sudo[169864]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:38:23 compute-0 sudo[169864]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:38:23 compute-0 sudo[169864]: pam_unix(sudo:session): session closed for user root
Oct 11 04:38:23 compute-0 sudo[169889]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 04:38:23 compute-0 sudo[169889]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:38:23 compute-0 sudo[169889]: pam_unix(sudo:session): session closed for user root
Oct 11 04:38:23 compute-0 ceph-mon[74243]: pgmap v478: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:38:23 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:38:23 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:38:24 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v479: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:38:24 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:38:25 compute-0 ceph-mon[74243]: pgmap v479: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:38:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:38:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:38:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:38:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:38:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:38:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:38:26 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v480: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:38:27 compute-0 ceph-mon[74243]: pgmap v480: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:38:28 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v481: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:38:28 compute-0 kernel: SELinux:  Converting 2765 SID table entries...
Oct 11 04:38:28 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Oct 11 04:38:28 compute-0 kernel: SELinux:  policy capability open_perms=1
Oct 11 04:38:28 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Oct 11 04:38:28 compute-0 kernel: SELinux:  policy capability always_check_network=0
Oct 11 04:38:28 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 11 04:38:28 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 11 04:38:28 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 11 04:38:29 compute-0 ceph-mon[74243]: pgmap v481: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:38:29 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:38:30 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v482: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:38:30 compute-0 ceph-mon[74243]: pgmap v482: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:38:32 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v483: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:38:33 compute-0 ceph-mon[74243]: pgmap v483: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:38:34 compute-0 dbus-broker-launch[785]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Oct 11 04:38:34 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v484: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:38:34 compute-0 podman[169922]: 2025-10-11 04:38:34.522965657 +0000 UTC m=+0.162970190 container health_status 086abfbe8dbe9e4742c5d0e8540a7653c1c0a5c00ecda3b6e948040a718938cb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 11 04:38:34 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:38:35 compute-0 ceph-mon[74243]: pgmap v484: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:38:36 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v485: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:38:37 compute-0 ceph-mon[74243]: pgmap v485: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:38:37 compute-0 kernel: SELinux:  Converting 2765 SID table entries...
Oct 11 04:38:37 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Oct 11 04:38:37 compute-0 kernel: SELinux:  policy capability open_perms=1
Oct 11 04:38:37 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Oct 11 04:38:37 compute-0 kernel: SELinux:  policy capability always_check_network=0
Oct 11 04:38:37 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 11 04:38:37 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 11 04:38:37 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 11 04:38:38 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v486: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:38:39 compute-0 ceph-mon[74243]: pgmap v486: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:38:39 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:38:40 compute-0 dbus-broker-launch[785]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Oct 11 04:38:40 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v487: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:38:40 compute-0 podman[169957]: 2025-10-11 04:38:40.45464282 +0000 UTC m=+0.094356875 container health_status 7d738271425699243ade5f883e5c9759d51b96fd0ce562173990a66d2fbaffdf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 11 04:38:41 compute-0 ceph-mon[74243]: pgmap v487: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:38:42 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v488: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:38:43 compute-0 ceph-mon[74243]: pgmap v488: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:38:44 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v489: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:38:44 compute-0 ceph-mon[74243]: pgmap v489: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:38:44 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:38:46 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v490: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:38:47 compute-0 ceph-mon[74243]: pgmap v490: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:38:48 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v491: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:38:49 compute-0 ceph-mon[74243]: pgmap v491: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:38:49 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:38:50 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v492: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:38:51 compute-0 ceph-mon[74243]: pgmap v492: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:38:52 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v493: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:38:53 compute-0 ceph-mon[74243]: pgmap v493: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:38:54 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v494: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:38:54 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:38:55 compute-0 ceph-mon[74243]: pgmap v494: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:38:56 compute-0 ceph-mgr[74542]: [balancer INFO root] Optimize plan auto_2025-10-11_04:38:56
Oct 11 04:38:56 compute-0 ceph-mgr[74542]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:38:56 compute-0 ceph-mgr[74542]: [balancer INFO root] do_upmap
Oct 11 04:38:56 compute-0 ceph-mgr[74542]: [balancer INFO root] pools ['default.rgw.log', 'vms', 'images', 'default.rgw.control', 'volumes', 'cephfs.cephfs.meta', 'backups', 'default.rgw.meta', '.rgw.root', '.mgr', 'cephfs.cephfs.data']
Oct 11 04:38:56 compute-0 ceph-mgr[74542]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:38:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:38:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:38:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:38:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:38:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:38:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:38:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:38:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:38:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:38:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:38:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:38:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:38:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:38:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:38:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:38:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:38:56 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v495: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:38:57 compute-0 ceph-mon[74243]: pgmap v495: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:38:58 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v496: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:38:59 compute-0 ceph-mon[74243]: pgmap v496: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:38:59 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:39:00 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v497: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:39:00 compute-0 ceph-mon[74243]: pgmap v497: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:39:02 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v498: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:39:03 compute-0 ceph-mon[74243]: pgmap v498: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:39:04 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v499: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:39:04 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:39:05 compute-0 ceph-mon[74243]: pgmap v499: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:39:05 compute-0 podman[177963]: 2025-10-11 04:39:05.420404053 +0000 UTC m=+0.073019874 container health_status 086abfbe8dbe9e4742c5d0e8540a7653c1c0a5c00ecda3b6e948040a718938cb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251009)
Oct 11 04:39:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:39:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:39:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:39:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:39:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:39:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:39:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:39:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:39:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:39:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:39:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:39:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:39:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 11 04:39:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:39:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:39:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:39:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:39:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:39:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:39:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:39:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:39:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:39:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 04:39:06 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v500: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:39:07 compute-0 ceph-mon[74243]: pgmap v500: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:39:08 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v501: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:39:09 compute-0 ceph-mon[74243]: pgmap v501: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:39:09 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:39:10 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v502: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:39:10 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:39:10.998 161813 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:39:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:39:10.999 161813 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:39:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:39:10.999 161813 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:39:11 compute-0 podman[180780]: 2025-10-11 04:39:11.424359541 +0000 UTC m=+0.078867147 container health_status 7d738271425699243ade5f883e5c9759d51b96fd0ce562173990a66d2fbaffdf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct 11 04:39:11 compute-0 ceph-mon[74243]: pgmap v502: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:39:12 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v503: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:39:13 compute-0 ceph-mon[74243]: pgmap v503: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:39:14 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v504: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:39:14 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:39:15 compute-0 ceph-mon[74243]: pgmap v504: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:39:16 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v505: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:39:17 compute-0 ceph-mon[74243]: pgmap v505: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:39:18 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v506: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:39:19 compute-0 ceph-mon[74243]: pgmap v506: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:39:19 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:39:20 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v507: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:39:20 compute-0 ceph-mon[74243]: pgmap v507: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:39:22 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v508: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:39:23 compute-0 sudo[186165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:39:23 compute-0 sudo[186165]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:39:23 compute-0 sudo[186165]: pam_unix(sudo:session): session closed for user root
Oct 11 04:39:23 compute-0 ceph-mon[74243]: pgmap v508: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:39:23 compute-0 sudo[186229]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:39:23 compute-0 sudo[186229]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:39:23 compute-0 sudo[186229]: pam_unix(sudo:session): session closed for user root
Oct 11 04:39:23 compute-0 sudo[186295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:39:23 compute-0 sudo[186295]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:39:23 compute-0 sudo[186295]: pam_unix(sudo:session): session closed for user root
Oct 11 04:39:23 compute-0 sudo[186360]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 11 04:39:23 compute-0 sudo[186360]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:39:24 compute-0 podman[186758]: 2025-10-11 04:39:24.102679184 +0000 UTC m=+0.057432803 container exec 9e6c9a4e99dcfe74f2acde1b3251aae6bd833ab8c3a16ed08813556f7d4dbff5 (image=quay.io/ceph/ceph:v18, name=ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 11 04:39:24 compute-0 podman[186758]: 2025-10-11 04:39:24.189657638 +0000 UTC m=+0.144411247 container exec_died 9e6c9a4e99dcfe74f2acde1b3251aae6bd833ab8c3a16ed08813556f7d4dbff5 (image=quay.io/ceph/ceph:v18, name=ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Oct 11 04:39:24 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v509: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:39:24 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:39:25 compute-0 sudo[186360]: pam_unix(sudo:session): session closed for user root
Oct 11 04:39:25 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:39:25 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:39:25 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:39:25 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:39:25 compute-0 sudo[187094]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:39:25 compute-0 sudo[187094]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:39:25 compute-0 sudo[187094]: pam_unix(sudo:session): session closed for user root
Oct 11 04:39:25 compute-0 sudo[187119]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:39:25 compute-0 sudo[187119]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:39:25 compute-0 sudo[187119]: pam_unix(sudo:session): session closed for user root
Oct 11 04:39:25 compute-0 sudo[187144]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:39:25 compute-0 sudo[187144]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:39:25 compute-0 sudo[187144]: pam_unix(sudo:session): session closed for user root
Oct 11 04:39:25 compute-0 ceph-mon[74243]: pgmap v509: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:39:25 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:39:25 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:39:25 compute-0 sudo[187169]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 04:39:25 compute-0 sudo[187169]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:39:26 compute-0 sudo[187169]: pam_unix(sudo:session): session closed for user root
Oct 11 04:39:26 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:39:26 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:39:26 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:39:26 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:39:26 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:39:26 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:39:26 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev 2bf8d9c0-de9f-428a-a10c-dba0bcfbb29d does not exist
Oct 11 04:39:26 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev e1e5ac37-3fdc-4d76-8055-778427da2dae does not exist
Oct 11 04:39:26 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev 4f56cdd6-172f-456d-bf74-b8c75649e7ae does not exist
Oct 11 04:39:26 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:39:26 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:39:26 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:39:26 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:39:26 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:39:26 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:39:26 compute-0 sudo[187226]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:39:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:39:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:39:26 compute-0 sudo[187226]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:39:26 compute-0 sudo[187226]: pam_unix(sudo:session): session closed for user root
Oct 11 04:39:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:39:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:39:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:39:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:39:26 compute-0 sudo[187251]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:39:26 compute-0 sudo[187251]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:39:26 compute-0 sudo[187251]: pam_unix(sudo:session): session closed for user root
Oct 11 04:39:26 compute-0 sudo[187276]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:39:26 compute-0 sudo[187276]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:39:26 compute-0 sudo[187276]: pam_unix(sudo:session): session closed for user root
Oct 11 04:39:26 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v510: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:39:26 compute-0 sudo[187304]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 04:39:26 compute-0 sudo[187304]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:39:26 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:39:26 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:39:26 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:39:26 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:39:26 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:39:26 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:39:26 compute-0 podman[187368]: 2025-10-11 04:39:26.842551013 +0000 UTC m=+0.065437789 container create 84c8e15ef4d4060ee99fbb46af252b312a765bfa9d21d89e1f6140ef53093e3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:39:26 compute-0 systemd[1]: Started libpod-conmon-84c8e15ef4d4060ee99fbb46af252b312a765bfa9d21d89e1f6140ef53093e3e.scope.
Oct 11 04:39:26 compute-0 podman[187368]: 2025-10-11 04:39:26.814811656 +0000 UTC m=+0.037698492 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:39:26 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:39:26 compute-0 podman[187368]: 2025-10-11 04:39:26.950428267 +0000 UTC m=+0.173315013 container init 84c8e15ef4d4060ee99fbb46af252b312a765bfa9d21d89e1f6140ef53093e3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_darwin, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:39:26 compute-0 podman[187368]: 2025-10-11 04:39:26.960983135 +0000 UTC m=+0.183869871 container start 84c8e15ef4d4060ee99fbb46af252b312a765bfa9d21d89e1f6140ef53093e3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_darwin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 11 04:39:26 compute-0 podman[187368]: 2025-10-11 04:39:26.967951795 +0000 UTC m=+0.190838561 container attach 84c8e15ef4d4060ee99fbb46af252b312a765bfa9d21d89e1f6140ef53093e3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_darwin, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 11 04:39:26 compute-0 fervent_darwin[187390]: 167 167
Oct 11 04:39:26 compute-0 systemd[1]: libpod-84c8e15ef4d4060ee99fbb46af252b312a765bfa9d21d89e1f6140ef53093e3e.scope: Deactivated successfully.
Oct 11 04:39:26 compute-0 podman[187368]: 2025-10-11 04:39:26.970841695 +0000 UTC m=+0.193728481 container died 84c8e15ef4d4060ee99fbb46af252b312a765bfa9d21d89e1f6140ef53093e3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_darwin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 11 04:39:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-203bcecd36441de349c33b34b8fc8aad547dc0a9092a8e12b8b4c84884d7bacb-merged.mount: Deactivated successfully.
Oct 11 04:39:27 compute-0 podman[187368]: 2025-10-11 04:39:27.025851709 +0000 UTC m=+0.248738455 container remove 84c8e15ef4d4060ee99fbb46af252b312a765bfa9d21d89e1f6140ef53093e3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_darwin, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 11 04:39:27 compute-0 systemd[1]: libpod-conmon-84c8e15ef4d4060ee99fbb46af252b312a765bfa9d21d89e1f6140ef53093e3e.scope: Deactivated successfully.
Oct 11 04:39:27 compute-0 podman[187414]: 2025-10-11 04:39:27.244395784 +0000 UTC m=+0.059351840 container create 8bceeee1467670af1d3e2e65dd66ede9960e181b84dd61acc094cc209cc66ff7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_bell, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:39:27 compute-0 systemd[1]: Started libpod-conmon-8bceeee1467670af1d3e2e65dd66ede9960e181b84dd61acc094cc209cc66ff7.scope.
Oct 11 04:39:27 compute-0 podman[187414]: 2025-10-11 04:39:27.220972222 +0000 UTC m=+0.035928298 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:39:27 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:39:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/276a1d4f75e5e8149c9c1aae32c57b88fc77b96b5783dcac7471b8abf64c8842/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:39:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/276a1d4f75e5e8149c9c1aae32c57b88fc77b96b5783dcac7471b8abf64c8842/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:39:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/276a1d4f75e5e8149c9c1aae32c57b88fc77b96b5783dcac7471b8abf64c8842/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:39:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/276a1d4f75e5e8149c9c1aae32c57b88fc77b96b5783dcac7471b8abf64c8842/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:39:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/276a1d4f75e5e8149c9c1aae32c57b88fc77b96b5783dcac7471b8abf64c8842/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:39:27 compute-0 podman[187414]: 2025-10-11 04:39:27.365506361 +0000 UTC m=+0.180462437 container init 8bceeee1467670af1d3e2e65dd66ede9960e181b84dd61acc094cc209cc66ff7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_bell, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 11 04:39:27 compute-0 podman[187414]: 2025-10-11 04:39:27.377035043 +0000 UTC m=+0.191991109 container start 8bceeee1467670af1d3e2e65dd66ede9960e181b84dd61acc094cc209cc66ff7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_bell, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:39:27 compute-0 podman[187414]: 2025-10-11 04:39:27.383403548 +0000 UTC m=+0.198359694 container attach 8bceeee1467670af1d3e2e65dd66ede9960e181b84dd61acc094cc209cc66ff7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_bell, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 11 04:39:27 compute-0 ceph-mon[74243]: pgmap v510: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:39:28 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v511: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:39:28 compute-0 magical_bell[187431]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:39:28 compute-0 magical_bell[187431]: --> relative data size: 1.0
Oct 11 04:39:28 compute-0 magical_bell[187431]: --> All data devices are unavailable
Oct 11 04:39:28 compute-0 systemd[1]: libpod-8bceeee1467670af1d3e2e65dd66ede9960e181b84dd61acc094cc209cc66ff7.scope: Deactivated successfully.
Oct 11 04:39:28 compute-0 systemd[1]: libpod-8bceeee1467670af1d3e2e65dd66ede9960e181b84dd61acc094cc209cc66ff7.scope: Consumed 1.033s CPU time.
Oct 11 04:39:28 compute-0 podman[187414]: 2025-10-11 04:39:28.469732443 +0000 UTC m=+1.284688539 container died 8bceeee1467670af1d3e2e65dd66ede9960e181b84dd61acc094cc209cc66ff7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_bell, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Oct 11 04:39:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-276a1d4f75e5e8149c9c1aae32c57b88fc77b96b5783dcac7471b8abf64c8842-merged.mount: Deactivated successfully.
Oct 11 04:39:28 compute-0 podman[187414]: 2025-10-11 04:39:28.549838459 +0000 UTC m=+1.364794555 container remove 8bceeee1467670af1d3e2e65dd66ede9960e181b84dd61acc094cc209cc66ff7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_bell, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:39:28 compute-0 systemd[1]: libpod-conmon-8bceeee1467670af1d3e2e65dd66ede9960e181b84dd61acc094cc209cc66ff7.scope: Deactivated successfully.
Oct 11 04:39:28 compute-0 sudo[187304]: pam_unix(sudo:session): session closed for user root
Oct 11 04:39:28 compute-0 sudo[187473]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:39:28 compute-0 sudo[187473]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:39:28 compute-0 sudo[187473]: pam_unix(sudo:session): session closed for user root
Oct 11 04:39:28 compute-0 sudo[187498]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:39:28 compute-0 sudo[187498]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:39:28 compute-0 sudo[187498]: pam_unix(sudo:session): session closed for user root
Oct 11 04:39:28 compute-0 sudo[187523]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:39:28 compute-0 sudo[187523]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:39:28 compute-0 sudo[187523]: pam_unix(sudo:session): session closed for user root
Oct 11 04:39:28 compute-0 sudo[187548]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -- lvm list --format json
Oct 11 04:39:28 compute-0 sudo[187548]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:39:29 compute-0 podman[187610]: 2025-10-11 04:39:29.350181441 +0000 UTC m=+0.052722278 container create dd0c59254a49c8d6c7b761dab9c4812ff7439538cffc3073dabf313a334f7fcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_chatterjee, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 11 04:39:29 compute-0 systemd[1]: Started libpod-conmon-dd0c59254a49c8d6c7b761dab9c4812ff7439538cffc3073dabf313a334f7fcf.scope.
Oct 11 04:39:29 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:39:29 compute-0 podman[187610]: 2025-10-11 04:39:29.327259772 +0000 UTC m=+0.029800639 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:39:29 compute-0 podman[187610]: 2025-10-11 04:39:29.436339585 +0000 UTC m=+0.138880442 container init dd0c59254a49c8d6c7b761dab9c4812ff7439538cffc3073dabf313a334f7fcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_chatterjee, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 11 04:39:29 compute-0 podman[187610]: 2025-10-11 04:39:29.4439284 +0000 UTC m=+0.146469267 container start dd0c59254a49c8d6c7b761dab9c4812ff7439538cffc3073dabf313a334f7fcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_chatterjee, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 11 04:39:29 compute-0 ceph-mon[74243]: pgmap v511: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:39:29 compute-0 systemd[1]: libpod-dd0c59254a49c8d6c7b761dab9c4812ff7439538cffc3073dabf313a334f7fcf.scope: Deactivated successfully.
Oct 11 04:39:29 compute-0 podman[187610]: 2025-10-11 04:39:29.451171357 +0000 UTC m=+0.153712274 container attach dd0c59254a49c8d6c7b761dab9c4812ff7439538cffc3073dabf313a334f7fcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_chatterjee, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 11 04:39:29 compute-0 interesting_chatterjee[187626]: 167 167
Oct 11 04:39:29 compute-0 podman[187610]: 2025-10-11 04:39:29.452848268 +0000 UTC m=+0.155389175 container died dd0c59254a49c8d6c7b761dab9c4812ff7439538cffc3073dabf313a334f7fcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_chatterjee, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 04:39:29 compute-0 conmon[187626]: conmon dd0c59254a49c8d6c7b7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-dd0c59254a49c8d6c7b761dab9c4812ff7439538cffc3073dabf313a334f7fcf.scope/container/memory.events
Oct 11 04:39:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-18fa2753e813c876df84fb1d3f5859840adff12b90ebfbaed86ddbf3ac3cf17b-merged.mount: Deactivated successfully.
Oct 11 04:39:29 compute-0 podman[187610]: 2025-10-11 04:39:29.509151543 +0000 UTC m=+0.211692380 container remove dd0c59254a49c8d6c7b761dab9c4812ff7439538cffc3073dabf313a334f7fcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_chatterjee, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Oct 11 04:39:29 compute-0 systemd[1]: libpod-conmon-dd0c59254a49c8d6c7b761dab9c4812ff7439538cffc3073dabf313a334f7fcf.scope: Deactivated successfully.
Oct 11 04:39:29 compute-0 podman[187650]: 2025-10-11 04:39:29.735683734 +0000 UTC m=+0.062999019 container create 934b1c9607ddc4dc4a22302a23da91677901ec61d44faa935f2f372a06845cdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_wing, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:39:29 compute-0 systemd[1]: Started libpod-conmon-934b1c9607ddc4dc4a22302a23da91677901ec61d44faa935f2f372a06845cdf.scope.
Oct 11 04:39:29 compute-0 podman[187650]: 2025-10-11 04:39:29.712849317 +0000 UTC m=+0.040164642 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:39:29 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:39:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b287f7c63e6b18d702c4c83b136d8c0b7c9dfef9344669450fd202d40088cf2e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:39:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b287f7c63e6b18d702c4c83b136d8c0b7c9dfef9344669450fd202d40088cf2e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:39:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b287f7c63e6b18d702c4c83b136d8c0b7c9dfef9344669450fd202d40088cf2e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:39:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b287f7c63e6b18d702c4c83b136d8c0b7c9dfef9344669450fd202d40088cf2e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:39:29 compute-0 podman[187650]: 2025-10-11 04:39:29.839164521 +0000 UTC m=+0.166479826 container init 934b1c9607ddc4dc4a22302a23da91677901ec61d44faa935f2f372a06845cdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_wing, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:39:29 compute-0 podman[187650]: 2025-10-11 04:39:29.856668568 +0000 UTC m=+0.183983853 container start 934b1c9607ddc4dc4a22302a23da91677901ec61d44faa935f2f372a06845cdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_wing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 11 04:39:29 compute-0 podman[187650]: 2025-10-11 04:39:29.860682586 +0000 UTC m=+0.187997861 container attach 934b1c9607ddc4dc4a22302a23da91677901ec61d44faa935f2f372a06845cdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_wing, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 11 04:39:29 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:39:30 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v512: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:39:30 compute-0 sleepy_wing[187667]: {
Oct 11 04:39:30 compute-0 sleepy_wing[187667]:     "0": [
Oct 11 04:39:30 compute-0 sleepy_wing[187667]:         {
Oct 11 04:39:30 compute-0 sleepy_wing[187667]:             "devices": [
Oct 11 04:39:30 compute-0 sleepy_wing[187667]:                 "/dev/loop3"
Oct 11 04:39:30 compute-0 sleepy_wing[187667]:             ],
Oct 11 04:39:30 compute-0 sleepy_wing[187667]:             "lv_name": "ceph_lv0",
Oct 11 04:39:30 compute-0 sleepy_wing[187667]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:39:30 compute-0 sleepy_wing[187667]:             "lv_size": "21470642176",
Oct 11 04:39:30 compute-0 sleepy_wing[187667]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29ed28f5-c2da-4c6f-bb64-dc7391248f4a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:39:30 compute-0 sleepy_wing[187667]:             "lv_uuid": "SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf",
Oct 11 04:39:30 compute-0 sleepy_wing[187667]:             "name": "ceph_lv0",
Oct 11 04:39:30 compute-0 sleepy_wing[187667]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:39:30 compute-0 sleepy_wing[187667]:             "tags": {
Oct 11 04:39:30 compute-0 sleepy_wing[187667]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:39:30 compute-0 sleepy_wing[187667]:                 "ceph.block_uuid": "SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf",
Oct 11 04:39:30 compute-0 sleepy_wing[187667]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:39:30 compute-0 sleepy_wing[187667]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:39:30 compute-0 sleepy_wing[187667]:                 "ceph.cluster_name": "ceph",
Oct 11 04:39:30 compute-0 sleepy_wing[187667]:                 "ceph.crush_device_class": "",
Oct 11 04:39:30 compute-0 sleepy_wing[187667]:                 "ceph.encrypted": "0",
Oct 11 04:39:30 compute-0 sleepy_wing[187667]:                 "ceph.osd_fsid": "29ed28f5-c2da-4c6f-bb64-dc7391248f4a",
Oct 11 04:39:30 compute-0 sleepy_wing[187667]:                 "ceph.osd_id": "0",
Oct 11 04:39:30 compute-0 sleepy_wing[187667]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:39:30 compute-0 sleepy_wing[187667]:                 "ceph.type": "block",
Oct 11 04:39:30 compute-0 sleepy_wing[187667]:                 "ceph.vdo": "0"
Oct 11 04:39:30 compute-0 sleepy_wing[187667]:             },
Oct 11 04:39:30 compute-0 sleepy_wing[187667]:             "type": "block",
Oct 11 04:39:30 compute-0 sleepy_wing[187667]:             "vg_name": "ceph_vg0"
Oct 11 04:39:30 compute-0 sleepy_wing[187667]:         }
Oct 11 04:39:30 compute-0 sleepy_wing[187667]:     ],
Oct 11 04:39:30 compute-0 sleepy_wing[187667]:     "1": [
Oct 11 04:39:30 compute-0 sleepy_wing[187667]:         {
Oct 11 04:39:30 compute-0 sleepy_wing[187667]:             "devices": [
Oct 11 04:39:30 compute-0 sleepy_wing[187667]:                 "/dev/loop4"
Oct 11 04:39:30 compute-0 sleepy_wing[187667]:             ],
Oct 11 04:39:30 compute-0 sleepy_wing[187667]:             "lv_name": "ceph_lv1",
Oct 11 04:39:30 compute-0 sleepy_wing[187667]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:39:30 compute-0 sleepy_wing[187667]:             "lv_size": "21470642176",
Oct 11 04:39:30 compute-0 sleepy_wing[187667]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=235849fc-4683-43e5-9b6a-a0d6f8d1cee8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:39:30 compute-0 sleepy_wing[187667]:             "lv_uuid": "N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol",
Oct 11 04:39:30 compute-0 sleepy_wing[187667]:             "name": "ceph_lv1",
Oct 11 04:39:30 compute-0 sleepy_wing[187667]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:39:30 compute-0 sleepy_wing[187667]:             "tags": {
Oct 11 04:39:30 compute-0 sleepy_wing[187667]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:39:30 compute-0 sleepy_wing[187667]:                 "ceph.block_uuid": "N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol",
Oct 11 04:39:30 compute-0 sleepy_wing[187667]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:39:30 compute-0 sleepy_wing[187667]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:39:30 compute-0 sleepy_wing[187667]:                 "ceph.cluster_name": "ceph",
Oct 11 04:39:30 compute-0 sleepy_wing[187667]:                 "ceph.crush_device_class": "",
Oct 11 04:39:30 compute-0 sleepy_wing[187667]:                 "ceph.encrypted": "0",
Oct 11 04:39:30 compute-0 sleepy_wing[187667]:                 "ceph.osd_fsid": "235849fc-4683-43e5-9b6a-a0d6f8d1cee8",
Oct 11 04:39:30 compute-0 sleepy_wing[187667]:                 "ceph.osd_id": "1",
Oct 11 04:39:30 compute-0 sleepy_wing[187667]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:39:30 compute-0 sleepy_wing[187667]:                 "ceph.type": "block",
Oct 11 04:39:30 compute-0 sleepy_wing[187667]:                 "ceph.vdo": "0"
Oct 11 04:39:30 compute-0 sleepy_wing[187667]:             },
Oct 11 04:39:30 compute-0 sleepy_wing[187667]:             "type": "block",
Oct 11 04:39:30 compute-0 sleepy_wing[187667]:             "vg_name": "ceph_vg1"
Oct 11 04:39:30 compute-0 sleepy_wing[187667]:         }
Oct 11 04:39:30 compute-0 sleepy_wing[187667]:     ],
Oct 11 04:39:30 compute-0 sleepy_wing[187667]:     "2": [
Oct 11 04:39:30 compute-0 sleepy_wing[187667]:         {
Oct 11 04:39:30 compute-0 sleepy_wing[187667]:             "devices": [
Oct 11 04:39:30 compute-0 sleepy_wing[187667]:                 "/dev/loop5"
Oct 11 04:39:30 compute-0 sleepy_wing[187667]:             ],
Oct 11 04:39:30 compute-0 sleepy_wing[187667]:             "lv_name": "ceph_lv2",
Oct 11 04:39:30 compute-0 sleepy_wing[187667]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:39:30 compute-0 sleepy_wing[187667]:             "lv_size": "21470642176",
Oct 11 04:39:30 compute-0 sleepy_wing[187667]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=30486846-2cc7-4787-8e36-0fb42ef328c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:39:30 compute-0 sleepy_wing[187667]:             "lv_uuid": "HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a",
Oct 11 04:39:30 compute-0 sleepy_wing[187667]:             "name": "ceph_lv2",
Oct 11 04:39:30 compute-0 sleepy_wing[187667]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:39:30 compute-0 sleepy_wing[187667]:             "tags": {
Oct 11 04:39:30 compute-0 sleepy_wing[187667]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:39:30 compute-0 sleepy_wing[187667]:                 "ceph.block_uuid": "HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a",
Oct 11 04:39:30 compute-0 sleepy_wing[187667]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:39:30 compute-0 sleepy_wing[187667]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:39:30 compute-0 sleepy_wing[187667]:                 "ceph.cluster_name": "ceph",
Oct 11 04:39:30 compute-0 sleepy_wing[187667]:                 "ceph.crush_device_class": "",
Oct 11 04:39:30 compute-0 sleepy_wing[187667]:                 "ceph.encrypted": "0",
Oct 11 04:39:30 compute-0 sleepy_wing[187667]:                 "ceph.osd_fsid": "30486846-2cc7-4787-8e36-0fb42ef328c5",
Oct 11 04:39:30 compute-0 sleepy_wing[187667]:                 "ceph.osd_id": "2",
Oct 11 04:39:30 compute-0 sleepy_wing[187667]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:39:30 compute-0 sleepy_wing[187667]:                 "ceph.type": "block",
Oct 11 04:39:30 compute-0 sleepy_wing[187667]:                 "ceph.vdo": "0"
Oct 11 04:39:30 compute-0 sleepy_wing[187667]:             },
Oct 11 04:39:30 compute-0 sleepy_wing[187667]:             "type": "block",
Oct 11 04:39:30 compute-0 sleepy_wing[187667]:             "vg_name": "ceph_vg2"
Oct 11 04:39:30 compute-0 sleepy_wing[187667]:         }
Oct 11 04:39:30 compute-0 sleepy_wing[187667]:     ]
Oct 11 04:39:30 compute-0 sleepy_wing[187667]: }
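The JSON emitted by the sleepy_wing container above is keyed by OSD id and carries one logical-volume record per OSD, which matches the shape of `ceph-volume lvm list --format json` output. A minimal sketch (not part of the captured log; it assumes the `sleepy_wing[187667]:` journal prefixes have already been stripped so only the bare JSON remains on stdin) that summarizes such a payload per OSD:

    #!/usr/bin/env python3
    # Summarize "ceph-volume lvm list --format json"-style output: the top-level
    # keys are OSD ids, each mapping to a list of LV records with a "tags" dict.
    import json
    import sys

    def summarize_lvm_list(payload: str) -> None:
        data = json.loads(payload)
        for osd_id, lvs in sorted(data.items(), key=lambda kv: int(kv[0])):
            for lv in lvs:
                tags = lv.get("tags", {})
                print(f"osd.{osd_id}: lv={lv.get('lv_path')} "
                      f"devices={','.join(lv.get('devices', []))} "
                      f"osd_fsid={tags.get('ceph.osd_fsid', '?')} "
                      f"encrypted={tags.get('ceph.encrypted', '?')}")

    if __name__ == "__main__":
        summarize_lvm_list(sys.stdin.read())

Run against the block above, this would report osd.0 on /dev/ceph_vg0/ceph_lv0 (/dev/loop3), osd.1 on /dev/ceph_vg1/ceph_lv1 (/dev/loop4) and osd.2 on /dev/ceph_vg2/ceph_lv2 (/dev/loop5), all unencrypted.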
Oct 11 04:39:30 compute-0 systemd[1]: libpod-934b1c9607ddc4dc4a22302a23da91677901ec61d44faa935f2f372a06845cdf.scope: Deactivated successfully.
Oct 11 04:39:30 compute-0 podman[187650]: 2025-10-11 04:39:30.635641557 +0000 UTC m=+0.962956892 container died 934b1c9607ddc4dc4a22302a23da91677901ec61d44faa935f2f372a06845cdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_wing, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 11 04:39:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-b287f7c63e6b18d702c4c83b136d8c0b7c9dfef9344669450fd202d40088cf2e-merged.mount: Deactivated successfully.
Oct 11 04:39:30 compute-0 podman[187650]: 2025-10-11 04:39:30.824721144 +0000 UTC m=+1.152036459 container remove 934b1c9607ddc4dc4a22302a23da91677901ec61d44faa935f2f372a06845cdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_wing, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:39:30 compute-0 systemd[1]: libpod-conmon-934b1c9607ddc4dc4a22302a23da91677901ec61d44faa935f2f372a06845cdf.scope: Deactivated successfully.
Oct 11 04:39:30 compute-0 sudo[187548]: pam_unix(sudo:session): session closed for user root
Oct 11 04:39:30 compute-0 sudo[187688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:39:30 compute-0 sudo[187688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:39:30 compute-0 sudo[187688]: pam_unix(sudo:session): session closed for user root
Oct 11 04:39:31 compute-0 sudo[187713]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:39:31 compute-0 sudo[187713]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:39:31 compute-0 sudo[187713]: pam_unix(sudo:session): session closed for user root
Oct 11 04:39:31 compute-0 sudo[187738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:39:31 compute-0 sudo[187738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:39:31 compute-0 sudo[187738]: pam_unix(sudo:session): session closed for user root
Oct 11 04:39:31 compute-0 sudo[187763]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -- raw list --format json
Oct 11 04:39:31 compute-0 sudo[187763]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:39:31 compute-0 ceph-mon[74243]: pgmap v512: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:39:31 compute-0 podman[187827]: 2025-10-11 04:39:31.685476491 +0000 UTC m=+0.064941776 container create 62ac2aa2ededb1408f7bcd65df9301aa9712623c17a069e21fa6330ba9f9fb5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 11 04:39:31 compute-0 systemd[1]: Started libpod-conmon-62ac2aa2ededb1408f7bcd65df9301aa9712623c17a069e21fa6330ba9f9fb5c.scope.
Oct 11 04:39:31 compute-0 podman[187827]: 2025-10-11 04:39:31.658872562 +0000 UTC m=+0.038337907 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:39:31 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:39:31 compute-0 podman[187827]: 2025-10-11 04:39:31.785277788 +0000 UTC m=+0.164743123 container init 62ac2aa2ededb1408f7bcd65df9301aa9712623c17a069e21fa6330ba9f9fb5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_joliot, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 11 04:39:31 compute-0 podman[187827]: 2025-10-11 04:39:31.799885105 +0000 UTC m=+0.179350390 container start 62ac2aa2ededb1408f7bcd65df9301aa9712623c17a069e21fa6330ba9f9fb5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_joliot, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 11 04:39:31 compute-0 podman[187827]: 2025-10-11 04:39:31.804571389 +0000 UTC m=+0.184036724 container attach 62ac2aa2ededb1408f7bcd65df9301aa9712623c17a069e21fa6330ba9f9fb5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_joliot, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:39:31 compute-0 eloquent_joliot[187844]: 167 167
Oct 11 04:39:31 compute-0 systemd[1]: libpod-62ac2aa2ededb1408f7bcd65df9301aa9712623c17a069e21fa6330ba9f9fb5c.scope: Deactivated successfully.
Oct 11 04:39:31 compute-0 podman[187827]: 2025-10-11 04:39:31.808203028 +0000 UTC m=+0.187668343 container died 62ac2aa2ededb1408f7bcd65df9301aa9712623c17a069e21fa6330ba9f9fb5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_joliot, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 11 04:39:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-179a87aa8266b424dbf580ddd468fe8a2106d3ec9e9fd19bd084bb1fdabfa6e2-merged.mount: Deactivated successfully.
Oct 11 04:39:31 compute-0 podman[187827]: 2025-10-11 04:39:31.870638753 +0000 UTC m=+0.250104008 container remove 62ac2aa2ededb1408f7bcd65df9301aa9712623c17a069e21fa6330ba9f9fb5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_joliot, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default)
Oct 11 04:39:31 compute-0 systemd[1]: libpod-conmon-62ac2aa2ededb1408f7bcd65df9301aa9712623c17a069e21fa6330ba9f9fb5c.scope: Deactivated successfully.
Oct 11 04:39:32 compute-0 podman[187869]: 2025-10-11 04:39:32.086515574 +0000 UTC m=+0.053094398 container create 726afeb49ca4f96c78d4165d2346018399456fad9071ba4c21d185677a53e8cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_mclaren, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:39:32 compute-0 systemd[1]: Started libpod-conmon-726afeb49ca4f96c78d4165d2346018399456fad9071ba4c21d185677a53e8cf.scope.
Oct 11 04:39:32 compute-0 podman[187869]: 2025-10-11 04:39:32.059664248 +0000 UTC m=+0.026243092 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:39:32 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:39:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/528733a192d30ddfd89326549d35f3c4b9ef43dfffbfad505b5cfb2ca2518b98/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:39:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/528733a192d30ddfd89326549d35f3c4b9ef43dfffbfad505b5cfb2ca2518b98/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:39:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/528733a192d30ddfd89326549d35f3c4b9ef43dfffbfad505b5cfb2ca2518b98/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:39:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/528733a192d30ddfd89326549d35f3c4b9ef43dfffbfad505b5cfb2ca2518b98/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:39:32 compute-0 podman[187869]: 2025-10-11 04:39:32.186770192 +0000 UTC m=+0.153349046 container init 726afeb49ca4f96c78d4165d2346018399456fad9071ba4c21d185677a53e8cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_mclaren, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:39:32 compute-0 podman[187869]: 2025-10-11 04:39:32.198762254 +0000 UTC m=+0.165341058 container start 726afeb49ca4f96c78d4165d2346018399456fad9071ba4c21d185677a53e8cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 11 04:39:32 compute-0 podman[187869]: 2025-10-11 04:39:32.203460119 +0000 UTC m=+0.170038943 container attach 726afeb49ca4f96c78d4165d2346018399456fad9071ba4c21d185677a53e8cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_mclaren, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 11 04:39:32 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v513: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:39:33 compute-0 vigilant_mclaren[187886]: {
Oct 11 04:39:33 compute-0 vigilant_mclaren[187886]:     "235849fc-4683-43e5-9b6a-a0d6f8d1cee8": {
Oct 11 04:39:33 compute-0 vigilant_mclaren[187886]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:39:33 compute-0 vigilant_mclaren[187886]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:39:33 compute-0 vigilant_mclaren[187886]:         "osd_id": 1,
Oct 11 04:39:33 compute-0 vigilant_mclaren[187886]:         "osd_uuid": "235849fc-4683-43e5-9b6a-a0d6f8d1cee8",
Oct 11 04:39:33 compute-0 vigilant_mclaren[187886]:         "type": "bluestore"
Oct 11 04:39:33 compute-0 vigilant_mclaren[187886]:     },
Oct 11 04:39:33 compute-0 vigilant_mclaren[187886]:     "29ed28f5-c2da-4c6f-bb64-dc7391248f4a": {
Oct 11 04:39:33 compute-0 vigilant_mclaren[187886]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:39:33 compute-0 vigilant_mclaren[187886]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:39:33 compute-0 vigilant_mclaren[187886]:         "osd_id": 0,
Oct 11 04:39:33 compute-0 vigilant_mclaren[187886]:         "osd_uuid": "29ed28f5-c2da-4c6f-bb64-dc7391248f4a",
Oct 11 04:39:33 compute-0 vigilant_mclaren[187886]:         "type": "bluestore"
Oct 11 04:39:33 compute-0 vigilant_mclaren[187886]:     },
Oct 11 04:39:33 compute-0 vigilant_mclaren[187886]:     "30486846-2cc7-4787-8e36-0fb42ef328c5": {
Oct 11 04:39:33 compute-0 vigilant_mclaren[187886]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:39:33 compute-0 vigilant_mclaren[187886]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:39:33 compute-0 vigilant_mclaren[187886]:         "osd_id": 2,
Oct 11 04:39:33 compute-0 vigilant_mclaren[187886]:         "osd_uuid": "30486846-2cc7-4787-8e36-0fb42ef328c5",
Oct 11 04:39:33 compute-0 vigilant_mclaren[187886]:         "type": "bluestore"
Oct 11 04:39:33 compute-0 vigilant_mclaren[187886]:     }
Oct 11 04:39:33 compute-0 vigilant_mclaren[187886]: }
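The vigilant_mclaren output above comes from the `ceph-volume ... raw list --format json` invocation logged at 04:39:31 and is keyed by osd_uuid rather than by OSD id. A minimal sketch (not part of the captured log; the file names are assumptions, and both files are expected to hold the bare JSON with the journal prefixes removed) that cross-checks it against the earlier lvm list output:

    #!/usr/bin/env python3
    # Cross-check "raw list" (keyed by osd_uuid) against "lvm list" (keyed by OSD id):
    # every osd_uuid reported by raw list should appear as a ceph.osd_fsid LV tag.
    import json

    with open("raw_list.json") as f:
        raw = json.load(f)
    with open("lvm_list.json") as f:
        lvm = json.load(f)

    lvm_fsids = {lv["tags"]["ceph.osd_fsid"] for lvs in lvm.values() for lv in lvs}

    for osd_uuid, info in sorted(raw.items(), key=lambda kv: kv[1]["osd_id"]):
        status = "ok" if osd_uuid in lvm_fsids else "missing from lvm list"
        print(f"osd.{info['osd_id']} ({info['type']}) on {info['device']}: {status}")

In this capture all three bluestore OSDs line up: each osd_uuid reported by raw list matches a ceph.osd_fsid tag from the lvm list block earlier.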
Oct 11 04:39:33 compute-0 systemd[1]: libpod-726afeb49ca4f96c78d4165d2346018399456fad9071ba4c21d185677a53e8cf.scope: Deactivated successfully.
Oct 11 04:39:33 compute-0 podman[187869]: 2025-10-11 04:39:33.317015859 +0000 UTC m=+1.283594713 container died 726afeb49ca4f96c78d4165d2346018399456fad9071ba4c21d185677a53e8cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_mclaren, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 11 04:39:33 compute-0 systemd[1]: libpod-726afeb49ca4f96c78d4165d2346018399456fad9071ba4c21d185677a53e8cf.scope: Consumed 1.129s CPU time.
Oct 11 04:39:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-528733a192d30ddfd89326549d35f3c4b9ef43dfffbfad505b5cfb2ca2518b98-merged.mount: Deactivated successfully.
Oct 11 04:39:33 compute-0 podman[187869]: 2025-10-11 04:39:33.393104647 +0000 UTC m=+1.359683441 container remove 726afeb49ca4f96c78d4165d2346018399456fad9071ba4c21d185677a53e8cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_mclaren, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507)
Oct 11 04:39:33 compute-0 systemd[1]: libpod-conmon-726afeb49ca4f96c78d4165d2346018399456fad9071ba4c21d185677a53e8cf.scope: Deactivated successfully.
Oct 11 04:39:33 compute-0 sudo[187763]: pam_unix(sudo:session): session closed for user root
Oct 11 04:39:33 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:39:33 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:39:33 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:39:33 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:39:33 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev 99e557af-97ce-4d82-9c69-4a6f86811413 does not exist
Oct 11 04:39:33 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev 3a25f4b0-2d70-4504-9e45-9714a1b374f1 does not exist
Oct 11 04:39:33 compute-0 ceph-mon[74243]: pgmap v513: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:39:33 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:39:33 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:39:33 compute-0 sudo[187933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:39:33 compute-0 sudo[187933]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:39:33 compute-0 sudo[187933]: pam_unix(sudo:session): session closed for user root
Oct 11 04:39:33 compute-0 sudo[187958]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 04:39:33 compute-0 sudo[187958]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:39:33 compute-0 sudo[187958]: pam_unix(sudo:session): session closed for user root
Oct 11 04:39:34 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v514: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:39:34 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:39:35 compute-0 ceph-mon[74243]: pgmap v514: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:39:36 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v515: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:39:36 compute-0 podman[187987]: 2025-10-11 04:39:36.465172706 +0000 UTC m=+0.114353473 container health_status 086abfbe8dbe9e4742c5d0e8540a7653c1c0a5c00ecda3b6e948040a718938cb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251009, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 11 04:39:36 compute-0 kernel: SELinux:  Converting 2766 SID table entries...
Oct 11 04:39:36 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Oct 11 04:39:36 compute-0 kernel: SELinux:  policy capability open_perms=1
Oct 11 04:39:36 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Oct 11 04:39:36 compute-0 kernel: SELinux:  policy capability always_check_network=0
Oct 11 04:39:36 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 11 04:39:36 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 11 04:39:36 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 11 04:39:37 compute-0 ceph-mon[74243]: pgmap v515: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:39:37 compute-0 groupadd[188021]: group added to /etc/group: name=dnsmasq, GID=991
Oct 11 04:39:37 compute-0 groupadd[188021]: group added to /etc/gshadow: name=dnsmasq
Oct 11 04:39:37 compute-0 groupadd[188021]: new group: name=dnsmasq, GID=991
Oct 11 04:39:37 compute-0 useradd[188028]: new user: name=dnsmasq, UID=991, GID=991, home=/var/lib/dnsmasq, shell=/usr/sbin/nologin, from=none
Oct 11 04:39:37 compute-0 dbus-broker-launch[777]: Noticed file-system modification, trigger reload.
Oct 11 04:39:37 compute-0 dbus-broker-launch[785]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Oct 11 04:39:37 compute-0 dbus-broker-launch[777]: Noticed file-system modification, trigger reload.
Oct 11 04:39:38 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v516: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:39:38 compute-0 groupadd[188041]: group added to /etc/group: name=clevis, GID=990
Oct 11 04:39:38 compute-0 groupadd[188041]: group added to /etc/gshadow: name=clevis
Oct 11 04:39:38 compute-0 groupadd[188041]: new group: name=clevis, GID=990
Oct 11 04:39:38 compute-0 useradd[188048]: new user: name=clevis, UID=990, GID=990, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none
Oct 11 04:39:39 compute-0 usermod[188058]: add 'clevis' to group 'tss'
Oct 11 04:39:39 compute-0 usermod[188058]: add 'clevis' to shadow group 'tss'
Oct 11 04:39:39 compute-0 ceph-mon[74243]: pgmap v516: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:39:39 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:39:40 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v517: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:39:41 compute-0 polkitd[6176]: Reloading rules
Oct 11 04:39:41 compute-0 polkitd[6176]: Collecting garbage unconditionally...
Oct 11 04:39:41 compute-0 polkitd[6176]: Loading rules from directory /etc/polkit-1/rules.d
Oct 11 04:39:41 compute-0 polkitd[6176]: Loading rules from directory /usr/share/polkit-1/rules.d
Oct 11 04:39:41 compute-0 polkitd[6176]: Finished loading, compiling and executing 4 rules
Oct 11 04:39:41 compute-0 polkitd[6176]: Reloading rules
Oct 11 04:39:41 compute-0 polkitd[6176]: Collecting garbage unconditionally...
Oct 11 04:39:41 compute-0 polkitd[6176]: Loading rules from directory /etc/polkit-1/rules.d
Oct 11 04:39:41 compute-0 polkitd[6176]: Loading rules from directory /usr/share/polkit-1/rules.d
Oct 11 04:39:41 compute-0 polkitd[6176]: Finished loading, compiling and executing 4 rules
Oct 11 04:39:41 compute-0 ceph-mon[74243]: pgmap v517: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:39:42 compute-0 podman[188152]: 2025-10-11 04:39:42.004660579 +0000 UTC m=+0.122104542 container health_status 7d738271425699243ade5f883e5c9759d51b96fd0ce562173990a66d2fbaffdf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 11 04:39:42 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v518: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:39:43 compute-0 groupadd[188265]: group added to /etc/group: name=ceph, GID=167
Oct 11 04:39:43 compute-0 groupadd[188265]: group added to /etc/gshadow: name=ceph
Oct 11 04:39:43 compute-0 groupadd[188265]: new group: name=ceph, GID=167
Oct 11 04:39:43 compute-0 useradd[188271]: new user: name=ceph, UID=167, GID=167, home=/var/lib/ceph, shell=/sbin/nologin, from=none
Oct 11 04:39:43 compute-0 ceph-mon[74243]: pgmap v518: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:39:44 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v519: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:39:44 compute-0 ceph-mon[74243]: pgmap v519: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:39:44 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:39:45 compute-0 systemd[1]: Stopping OpenSSH server daemon...
Oct 11 04:39:45 compute-0 sshd[1006]: Received signal 15; terminating.
Oct 11 04:39:45 compute-0 systemd[1]: sshd.service: Deactivated successfully.
Oct 11 04:39:46 compute-0 systemd[1]: Stopped OpenSSH server daemon.
Oct 11 04:39:46 compute-0 systemd[1]: sshd.service: Consumed 2.587s CPU time, no IO.
Oct 11 04:39:46 compute-0 systemd[1]: Stopped target sshd-keygen.target.
Oct 11 04:39:46 compute-0 systemd[1]: Stopping sshd-keygen.target...
Oct 11 04:39:46 compute-0 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 11 04:39:46 compute-0 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 11 04:39:46 compute-0 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 11 04:39:46 compute-0 systemd[1]: Reached target sshd-keygen.target.
Oct 11 04:39:46 compute-0 systemd[1]: Starting OpenSSH server daemon...
Oct 11 04:39:46 compute-0 sshd[188896]: Server listening on 0.0.0.0 port 22.
Oct 11 04:39:46 compute-0 sshd[188896]: Server listening on :: port 22.
Oct 11 04:39:46 compute-0 systemd[1]: Started OpenSSH server daemon.
Oct 11 04:39:46 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v520: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:39:47 compute-0 ceph-mon[74243]: pgmap v520: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:39:47 compute-0 ceph-mon[74243]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Oct 11 04:39:47 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:39:47.422153) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 04:39:47 compute-0 ceph-mon[74243]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Oct 11 04:39:47 compute-0 ceph-mon[74243]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760157587422199, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2045, "num_deletes": 251, "total_data_size": 3533719, "memory_usage": 3585744, "flush_reason": "Manual Compaction"}
Oct 11 04:39:47 compute-0 ceph-mon[74243]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Oct 11 04:39:47 compute-0 ceph-mon[74243]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760157587446481, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 3447884, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9657, "largest_seqno": 11701, "table_properties": {"data_size": 3438583, "index_size": 5923, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 17897, "raw_average_key_size": 19, "raw_value_size": 3420154, "raw_average_value_size": 3721, "num_data_blocks": 269, "num_entries": 919, "num_filter_entries": 919, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760157356, "oldest_key_time": 1760157356, "file_creation_time": 1760157587, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7a520806-b9ee-4391-a2e1-17ca2b78e946", "db_session_id": "LQX577T3ABHDCRGGY4EM", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:39:47 compute-0 ceph-mon[74243]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 24361 microseconds, and 6434 cpu microseconds.
Oct 11 04:39:47 compute-0 ceph-mon[74243]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 04:39:47 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:39:47.446518) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 3447884 bytes OK
Oct 11 04:39:47 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:39:47.446536) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Oct 11 04:39:47 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:39:47.448467) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Oct 11 04:39:47 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:39:47.448484) EVENT_LOG_v1 {"time_micros": 1760157587448478, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 04:39:47 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:39:47.448502) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 04:39:47 compute-0 ceph-mon[74243]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 3525177, prev total WAL file size 3525177, number of live WAL files 2.
Oct 11 04:39:47 compute-0 ceph-mon[74243]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:39:47 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:39:47.449706) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Oct 11 04:39:47 compute-0 ceph-mon[74243]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 04:39:47 compute-0 ceph-mon[74243]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(3367KB)], [26(5936KB)]
Oct 11 04:39:47 compute-0 ceph-mon[74243]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760157587449772, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 9526579, "oldest_snapshot_seqno": -1}
Oct 11 04:39:47 compute-0 ceph-mon[74243]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 3677 keys, 7976650 bytes, temperature: kUnknown
Oct 11 04:39:47 compute-0 ceph-mon[74243]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760157587491574, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 7976650, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7948312, "index_size": 18052, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9221, "raw_key_size": 88299, "raw_average_key_size": 24, "raw_value_size": 7878200, "raw_average_value_size": 2142, "num_data_blocks": 783, "num_entries": 3677, "num_filter_entries": 3677, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760156706, "oldest_key_time": 0, "file_creation_time": 1760157587, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7a520806-b9ee-4391-a2e1-17ca2b78e946", "db_session_id": "LQX577T3ABHDCRGGY4EM", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:39:47 compute-0 ceph-mon[74243]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 04:39:47 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:39:47.491766) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 7976650 bytes
Oct 11 04:39:47 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:39:47.493147) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 227.6 rd, 190.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 5.8 +0.0 blob) out(7.6 +0.0 blob), read-write-amplify(5.1) write-amplify(2.3) OK, records in: 4191, records dropped: 514 output_compression: NoCompression
Oct 11 04:39:47 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:39:47.493162) EVENT_LOG_v1 {"time_micros": 1760157587493154, "job": 10, "event": "compaction_finished", "compaction_time_micros": 41859, "compaction_time_cpu_micros": 16396, "output_level": 6, "num_output_files": 1, "total_output_size": 7976650, "num_input_records": 4191, "num_output_records": 3677, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 04:39:47 compute-0 ceph-mon[74243]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:39:47 compute-0 ceph-mon[74243]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760157587493745, "job": 10, "event": "table_file_deletion", "file_number": 28}
Oct 11 04:39:47 compute-0 ceph-mon[74243]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:39:47 compute-0 ceph-mon[74243]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760157587494804, "job": 10, "event": "table_file_deletion", "file_number": 26}
Oct 11 04:39:47 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:39:47.449564) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:39:47 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:39:47.494892) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:39:47 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:39:47.494900) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:39:47 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:39:47.494904) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:39:47 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:39:47.494907) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:39:47 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:39:47.494909) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
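The amplification figures RocksDB prints for JOB 10 above follow directly from the table sizes in its EVENT_LOG lines: the flushed L0 table #28 is 3447884 bytes, the total compaction input is 9526579 bytes, and the new L6 table #29 is 7976650 bytes. A minimal sketch (not part of the captured log, assuming RocksDB's usual definitions of these ratios) that reproduces the reported 2.3 and 5.1:

    #!/usr/bin/env python3
    # Recompute the amplification ratios ceph-mon's RocksDB logs for compaction JOB 10.
    l0_input = 3_447_884      # table #28, the freshly flushed L0 input
    total_input = 9_526_579   # "input_data_size": L0 table #28 plus L6 table #26
    output = 7_976_650        # table #29 written back to L6

    write_amplify = output / l0_input                        # logged as write-amplify(2.3)
    read_write_amplify = (total_input + output) / l0_input   # logged as read-write-amplify(5.1)
    print(f"write-amplify      ~ {write_amplify:.1f}")
    print(f"read-write-amplify ~ {read_write_amplify:.1f}")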
Oct 11 04:39:48 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v521: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:39:48 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 11 04:39:48 compute-0 systemd[1]: Starting man-db-cache-update.service...
Oct 11 04:39:48 compute-0 systemd[1]: Reloading.
Oct 11 04:39:48 compute-0 systemd-rc-local-generator[189152]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:39:48 compute-0 systemd-sysv-generator[189156]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:39:49 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 11 04:39:49 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:39:50 compute-0 ceph-mon[74243]: pgmap v521: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:39:50 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v522: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:39:51 compute-0 systemd[1]: Starting PackageKit Daemon...
Oct 11 04:39:51 compute-0 PackageKit[191266]: daemon start
Oct 11 04:39:51 compute-0 ceph-mon[74243]: pgmap v522: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:39:51 compute-0 systemd[1]: Started PackageKit Daemon.
Oct 11 04:39:51 compute-0 sudo[168792]: pam_unix(sudo:session): session closed for user root
Oct 11 04:39:52 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v523: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:39:52 compute-0 sudo[192805]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ngrsuvsusljjtihvdkfzdozemocknast ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157591.7431223-336-44777429993306/AnsiballZ_systemd.py'
Oct 11 04:39:52 compute-0 sudo[192805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:39:52 compute-0 python3.9[192828]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 11 04:39:52 compute-0 systemd[1]: Reloading.
Oct 11 04:39:52 compute-0 systemd-rc-local-generator[193283]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:39:52 compute-0 systemd-sysv-generator[193288]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:39:53 compute-0 sudo[192805]: pam_unix(sudo:session): session closed for user root
Oct 11 04:39:53 compute-0 ceph-mon[74243]: pgmap v523: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:39:53 compute-0 sudo[194060]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jazbkldejckaecpzzdidlehxnenchspd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157593.3401852-336-143180368261217/AnsiballZ_systemd.py'
Oct 11 04:39:53 compute-0 sudo[194060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:39:53 compute-0 python3.9[194085]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 11 04:39:54 compute-0 systemd[1]: Reloading.
Oct 11 04:39:54 compute-0 systemd-rc-local-generator[194497]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:39:54 compute-0 systemd-sysv-generator[194500]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:39:54 compute-0 sudo[194060]: pam_unix(sudo:session): session closed for user root
Oct 11 04:39:54 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v524: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:39:54 compute-0 sudo[195260]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjakqovfhflvvecpnednrguplcixykmv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157594.513738-336-277754283654040/AnsiballZ_systemd.py'
Oct 11 04:39:54 compute-0 sudo[195260]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:39:54 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:39:55 compute-0 python3.9[195285]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 11 04:39:55 compute-0 ceph-mon[74243]: pgmap v524: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:39:56 compute-0 ceph-mgr[74542]: [balancer INFO root] Optimize plan auto_2025-10-11_04:39:56
Oct 11 04:39:56 compute-0 ceph-mgr[74542]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:39:56 compute-0 ceph-mgr[74542]: [balancer INFO root] do_upmap
Oct 11 04:39:56 compute-0 ceph-mgr[74542]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.control', 'volumes', 'cephfs.cephfs.data', 'vms', 'default.rgw.meta', '.rgw.root', 'images', 'backups', 'cephfs.cephfs.meta', '.mgr']
Oct 11 04:39:56 compute-0 ceph-mgr[74542]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:39:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:39:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:39:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:39:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:39:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:39:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:39:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:39:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:39:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:39:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:39:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:39:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:39:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:39:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:39:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:39:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:39:56 compute-0 systemd[1]: Reloading.
Oct 11 04:39:56 compute-0 systemd-rc-local-generator[196572]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:39:56 compute-0 systemd-sysv-generator[196578]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:39:56 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v525: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:39:56 compute-0 sudo[195260]: pam_unix(sudo:session): session closed for user root
Oct 11 04:39:57 compute-0 sudo[197409]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgqlafliwfclregallarxdpluzxhahwf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157596.7473195-336-31068056428112/AnsiballZ_systemd.py'
Oct 11 04:39:57 compute-0 sudo[197409]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:39:57 compute-0 python3.9[197426]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 11 04:39:57 compute-0 ceph-mon[74243]: pgmap v525: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:39:57 compute-0 systemd[1]: Reloading.
Oct 11 04:39:57 compute-0 systemd-sysv-generator[197873]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:39:57 compute-0 systemd-rc-local-generator[197866]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:39:57 compute-0 sudo[197409]: pam_unix(sudo:session): session closed for user root
Oct 11 04:39:58 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 11 04:39:58 compute-0 systemd[1]: Finished man-db-cache-update.service.
Oct 11 04:39:58 compute-0 systemd[1]: man-db-cache-update.service: Consumed 11.965s CPU time.
Oct 11 04:39:58 compute-0 systemd[1]: run-r901e86ef097f40e4862d72ad8b31fd30.service: Deactivated successfully.
Oct 11 04:39:58 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v526: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:39:58 compute-0 sudo[198313]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgtbntujvsqigsglqquuibewvzejtsil ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157598.0637226-365-17224730285392/AnsiballZ_systemd.py'
Oct 11 04:39:58 compute-0 sudo[198313]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:39:58 compute-0 python3.9[198315]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 11 04:39:59 compute-0 ceph-mon[74243]: pgmap v526: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:39:59 compute-0 systemd[1]: Reloading.
Oct 11 04:39:59 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:39:59 compute-0 systemd-sysv-generator[198350]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:39:59 compute-0 systemd-rc-local-generator[198347]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:40:00 compute-0 auditd[704]: Audit daemon rotating log files
Oct 11 04:40:00 compute-0 sudo[198313]: pam_unix(sudo:session): session closed for user root
Oct 11 04:40:00 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v527: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:40:00 compute-0 sudo[198504]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwzfsvtngwxskvzaqebquvrzmubbeshb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157600.4199097-365-106550710597573/AnsiballZ_systemd.py'
Oct 11 04:40:00 compute-0 sudo[198504]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:40:01 compute-0 python3.9[198506]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 11 04:40:01 compute-0 systemd[1]: Reloading.
Oct 11 04:40:01 compute-0 systemd-sysv-generator[198543]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:40:01 compute-0 systemd-rc-local-generator[198536]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:40:01 compute-0 ceph-mon[74243]: pgmap v527: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:40:01 compute-0 sudo[198504]: pam_unix(sudo:session): session closed for user root
Oct 11 04:40:02 compute-0 sudo[198694]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmrxpcvjmojdcawvmsosjrwpjdmczzdk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157601.713259-365-100906457906813/AnsiballZ_systemd.py'
Oct 11 04:40:02 compute-0 sudo[198694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:40:02 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v528: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:40:02 compute-0 python3.9[198696]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 11 04:40:02 compute-0 systemd[1]: Reloading.
Oct 11 04:40:02 compute-0 systemd-sysv-generator[198731]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:40:02 compute-0 systemd-rc-local-generator[198727]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:40:03 compute-0 sudo[198694]: pam_unix(sudo:session): session closed for user root
Oct 11 04:40:03 compute-0 ceph-mon[74243]: pgmap v528: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:40:03 compute-0 sudo[198884]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbopafyzmsuiaahljjohimjywrkqvalt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157603.2198498-365-128363548603711/AnsiballZ_systemd.py'
Oct 11 04:40:03 compute-0 sudo[198884]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:40:03 compute-0 python3.9[198886]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 11 04:40:04 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v529: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:40:04 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:40:04 compute-0 sudo[198884]: pam_unix(sudo:session): session closed for user root
Oct 11 04:40:05 compute-0 ceph-mon[74243]: pgmap v529: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:40:05 compute-0 sudo[199039]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgpjtlvechuhxrveamghgheutqqjqfxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157605.128445-365-157702084884714/AnsiballZ_systemd.py'
Oct 11 04:40:05 compute-0 sudo[199039]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:40:05 compute-0 python3.9[199041]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 11 04:40:05 compute-0 systemd[1]: Reloading.
Oct 11 04:40:06 compute-0 systemd-rc-local-generator[199071]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:40:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:40:06 compute-0 systemd-sysv-generator[199076]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:40:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:40:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:40:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:40:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:40:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:40:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:40:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:40:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:40:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:40:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:40:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:40:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 11 04:40:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:40:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:40:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:40:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:40:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:40:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:40:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:40:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:40:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:40:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 04:40:06 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v530: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:40:06 compute-0 sudo[199039]: pam_unix(sudo:session): session closed for user root
Oct 11 04:40:06 compute-0 sudo[199242]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sytvcreksnvbldxvqkvozicabrxvhggr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157606.580992-401-195124634805799/AnsiballZ_systemd.py'
Oct 11 04:40:06 compute-0 sudo[199242]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:40:06 compute-0 podman[199203]: 2025-10-11 04:40:06.986662021 +0000 UTC m=+0.109269182 container health_status 086abfbe8dbe9e4742c5d0e8540a7653c1c0a5c00ecda3b6e948040a718938cb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct 11 04:40:07 compute-0 python3.9[199248]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 11 04:40:07 compute-0 ceph-mon[74243]: pgmap v530: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:40:08 compute-0 systemd[1]: Reloading.
Oct 11 04:40:08 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v531: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:40:08 compute-0 systemd-rc-local-generator[199286]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:40:08 compute-0 systemd-sysv-generator[199291]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:40:08 compute-0 systemd[1]: Listening on libvirt proxy daemon socket.
Oct 11 04:40:08 compute-0 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Oct 11 04:40:08 compute-0 sudo[199242]: pam_unix(sudo:session): session closed for user root
Oct 11 04:40:09 compute-0 sudo[199448]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gglkqaryjehgjxhakthniaqzuddcnepz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157609.0553203-409-76506802865589/AnsiballZ_systemd.py'
Oct 11 04:40:09 compute-0 sudo[199448]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:40:09 compute-0 ceph-mon[74243]: pgmap v531: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:40:09 compute-0 python3.9[199450]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 11 04:40:09 compute-0 sudo[199448]: pam_unix(sudo:session): session closed for user root
Oct 11 04:40:09 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:40:10 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v532: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:40:10 compute-0 sudo[199603]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzdpnxyzcmzjrpgstsphdisjoyzdwzdf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157609.9841478-409-269463317087802/AnsiballZ_systemd.py'
Oct 11 04:40:10 compute-0 sudo[199603]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:40:10 compute-0 python3.9[199605]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 11 04:40:10 compute-0 sudo[199603]: pam_unix(sudo:session): session closed for user root
Oct 11 04:40:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:40:11.000 161813 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:40:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:40:11.002 161813 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:40:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:40:11.002 161813 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:40:11 compute-0 sudo[199758]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbbiaewdbodoijdbfnmyqzciqgetskth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157610.9512756-409-21837051767070/AnsiballZ_systemd.py'
Oct 11 04:40:11 compute-0 sudo[199758]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:40:11 compute-0 ceph-mon[74243]: pgmap v532: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:40:11 compute-0 python3.9[199760]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 11 04:40:11 compute-0 sudo[199758]: pam_unix(sudo:session): session closed for user root
Oct 11 04:40:12 compute-0 sudo[199930]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpagmboseqqymweumtenpfacuewqzzdw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157611.7930925-409-11589899782732/AnsiballZ_systemd.py'
Oct 11 04:40:12 compute-0 podman[199887]: 2025-10-11 04:40:12.189902962 +0000 UTC m=+0.067844901 container health_status 7d738271425699243ade5f883e5c9759d51b96fd0ce562173990a66d2fbaffdf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true)
Oct 11 04:40:12 compute-0 sudo[199930]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:40:12 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v533: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:40:12 compute-0 python3.9[199934]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 11 04:40:13 compute-0 ceph-mon[74243]: pgmap v533: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:40:13 compute-0 sudo[199930]: pam_unix(sudo:session): session closed for user root
Oct 11 04:40:14 compute-0 sudo[200088]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbalkxzjdmeuzqingqkurndwirzfcrgz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157613.8003724-409-187251719149852/AnsiballZ_systemd.py'
Oct 11 04:40:14 compute-0 sudo[200088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:40:14 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v534: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:40:14 compute-0 python3.9[200090]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 11 04:40:14 compute-0 sudo[200088]: pam_unix(sudo:session): session closed for user root
Oct 11 04:40:14 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:40:15 compute-0 sudo[200243]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gffyopnvjngvdcivkxqzxpktmmvhsonl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157614.8388898-409-157282340872611/AnsiballZ_systemd.py'
Oct 11 04:40:15 compute-0 sudo[200243]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:40:15 compute-0 python3.9[200245]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 11 04:40:15 compute-0 ceph-mon[74243]: pgmap v534: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:40:15 compute-0 sudo[200243]: pam_unix(sudo:session): session closed for user root
Oct 11 04:40:16 compute-0 sudo[200398]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cquaxxlqkvzdqycidnrjdzvfveddgbzt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157615.8042622-409-225673719894672/AnsiballZ_systemd.py'
Oct 11 04:40:16 compute-0 sudo[200398]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:40:16 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v535: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:40:16 compute-0 python3.9[200400]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 11 04:40:16 compute-0 sudo[200398]: pam_unix(sudo:session): session closed for user root
Oct 11 04:40:17 compute-0 sudo[200553]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkednilgzolzraruhxabpsmjvkjifnia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157616.743858-409-124832433317838/AnsiballZ_systemd.py'
Oct 11 04:40:17 compute-0 sudo[200553]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:40:17 compute-0 python3.9[200555]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 11 04:40:17 compute-0 sudo[200553]: pam_unix(sudo:session): session closed for user root
Oct 11 04:40:17 compute-0 ceph-mon[74243]: pgmap v535: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:40:18 compute-0 sudo[200708]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ycywweehfqmmypcluvtxnapbqvrsfiut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157617.665881-409-161431694124471/AnsiballZ_systemd.py'
Oct 11 04:40:18 compute-0 sudo[200708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:40:18 compute-0 python3.9[200710]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 11 04:40:18 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v536: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:40:18 compute-0 sudo[200708]: pam_unix(sudo:session): session closed for user root
Oct 11 04:40:18 compute-0 sudo[200863]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrfxwnczflunqzqabhstwlaphkwskggq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157618.5631826-409-145785614472558/AnsiballZ_systemd.py'
Oct 11 04:40:18 compute-0 sudo[200863]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:40:19 compute-0 python3.9[200865]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 11 04:40:19 compute-0 ceph-mon[74243]: pgmap v536: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:40:19 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:40:20 compute-0 sudo[200863]: pam_unix(sudo:session): session closed for user root
Oct 11 04:40:20 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v537: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:40:20 compute-0 sudo[201018]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bieqsyaegbqyowmelucpzegumgtaauko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157620.6349862-409-187284113467858/AnsiballZ_systemd.py'
Oct 11 04:40:20 compute-0 sudo[201018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:40:21 compute-0 python3.9[201020]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 11 04:40:21 compute-0 sudo[201018]: pam_unix(sudo:session): session closed for user root
Oct 11 04:40:21 compute-0 ceph-mon[74243]: pgmap v537: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:40:21 compute-0 sudo[201173]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbxamjnvakmmmlpmorhjnppqqsnegazw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157621.5260582-409-90655552982343/AnsiballZ_systemd.py'
Oct 11 04:40:21 compute-0 sudo[201173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:40:22 compute-0 python3.9[201175]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 11 04:40:22 compute-0 sudo[201173]: pam_unix(sudo:session): session closed for user root
Oct 11 04:40:22 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v538: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:40:22 compute-0 sudo[201328]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kygjkymoheifrfsdrkdfqkywesvdblnh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157622.4744542-409-84519479224264/AnsiballZ_systemd.py'
Oct 11 04:40:22 compute-0 sudo[201328]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:40:23 compute-0 python3.9[201330]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 11 04:40:23 compute-0 sudo[201328]: pam_unix(sudo:session): session closed for user root
Oct 11 04:40:23 compute-0 ceph-mon[74243]: pgmap v538: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:40:23 compute-0 sudo[201483]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dumjyadjmamehpyfkngedsjqehpqjmrx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157623.4243748-409-188175263675422/AnsiballZ_systemd.py'
Oct 11 04:40:23 compute-0 sudo[201483]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:40:24 compute-0 python3.9[201485]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 11 04:40:24 compute-0 sudo[201483]: pam_unix(sudo:session): session closed for user root
Oct 11 04:40:24 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v539: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:40:24 compute-0 ceph-mon[74243]: pgmap v539: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:40:24 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:40:25 compute-0 sudo[201638]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aaysgdcdojiibckykrcxyujzycjwfayf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157624.7610245-511-114831017824123/AnsiballZ_file.py'
Oct 11 04:40:25 compute-0 sudo[201638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:40:25 compute-0 python3.9[201640]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:40:25 compute-0 sudo[201638]: pam_unix(sudo:session): session closed for user root
Oct 11 04:40:25 compute-0 sudo[201790]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pymngwfohfuzitaazwzoiefgofaczcbe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157625.5805001-511-231623425243517/AnsiballZ_file.py'
Oct 11 04:40:25 compute-0 sudo[201790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:40:26 compute-0 python3.9[201792]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:40:26 compute-0 sudo[201790]: pam_unix(sudo:session): session closed for user root
Oct 11 04:40:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:40:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:40:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:40:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:40:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:40:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:40:26 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v540: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:40:26 compute-0 sudo[201942]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wcdjeoaazsjkuksjikteskvebwhucipw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157626.284911-511-266247605054974/AnsiballZ_file.py'
Oct 11 04:40:26 compute-0 sudo[201942]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:40:26 compute-0 python3.9[201944]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:40:26 compute-0 sudo[201942]: pam_unix(sudo:session): session closed for user root
Oct 11 04:40:27 compute-0 ceph-mon[74243]: pgmap v540: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:40:27 compute-0 sudo[202094]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dakieqvbzuqbjrntvhttqfxuuxjyemvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157627.1093225-511-22736266962487/AnsiballZ_file.py'
Oct 11 04:40:27 compute-0 sudo[202094]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:40:27 compute-0 python3.9[202096]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:40:27 compute-0 sudo[202094]: pam_unix(sudo:session): session closed for user root
Oct 11 04:40:28 compute-0 sudo[202246]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnofcwppgxvyojwmbxmqwfketxcicewv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157627.8890893-511-111556394238101/AnsiballZ_file.py'
Oct 11 04:40:28 compute-0 sudo[202246]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:40:28 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v541: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:40:28 compute-0 python3.9[202248]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:40:28 compute-0 sudo[202246]: pam_unix(sudo:session): session closed for user root
Oct 11 04:40:29 compute-0 sudo[202398]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kopkqwqiapxgmkephgsjnqwmhxaiubyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157628.7062435-511-252781596178691/AnsiballZ_file.py'
Oct 11 04:40:29 compute-0 sudo[202398]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:40:29 compute-0 python3.9[202400]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:40:29 compute-0 sudo[202398]: pam_unix(sudo:session): session closed for user root
Oct 11 04:40:29 compute-0 ceph-mon[74243]: pgmap v541: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:40:29 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:40:30 compute-0 sudo[202550]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckauornxxcxtvbubwqqqzqxqvfigprlr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157629.5411658-554-34229986599380/AnsiballZ_stat.py'
Oct 11 04:40:30 compute-0 sudo[202550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:40:30 compute-0 python3.9[202552]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:40:30 compute-0 sudo[202550]: pam_unix(sudo:session): session closed for user root
Oct 11 04:40:30 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v542: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:40:30 compute-0 sudo[202675]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmvjdjjqfemdfxqpkjhilqhyyxhgvhrc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157629.5411658-554-34229986599380/AnsiballZ_copy.py'
Oct 11 04:40:30 compute-0 sudo[202675]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:40:31 compute-0 python3.9[202677]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760157629.5411658-554-34229986599380/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:40:31 compute-0 sudo[202675]: pam_unix(sudo:session): session closed for user root
Oct 11 04:40:31 compute-0 ceph-mon[74243]: pgmap v542: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:40:31 compute-0 sudo[202827]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqutxyvqajouwzzsvylmojetyjswstqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157631.3875391-554-192121362943561/AnsiballZ_stat.py'
Oct 11 04:40:31 compute-0 sudo[202827]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:40:32 compute-0 python3.9[202829]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:40:32 compute-0 sudo[202827]: pam_unix(sudo:session): session closed for user root
Oct 11 04:40:32 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v543: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:40:32 compute-0 sudo[202952]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzclgqyxzkeajioaxdpgxffsxigyyhmu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157631.3875391-554-192121362943561/AnsiballZ_copy.py'
Oct 11 04:40:32 compute-0 sudo[202952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:40:32 compute-0 python3.9[202954]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760157631.3875391-554-192121362943561/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:40:32 compute-0 sudo[202952]: pam_unix(sudo:session): session closed for user root
Oct 11 04:40:33 compute-0 sudo[203104]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crbqfbcxmfxhzqxcfcvtnjcknzkvdztl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157633.0052342-554-226825809150906/AnsiballZ_stat.py'
Oct 11 04:40:33 compute-0 sudo[203104]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:40:33 compute-0 ceph-mon[74243]: pgmap v543: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:40:33 compute-0 python3.9[203106]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:40:33 compute-0 sudo[203104]: pam_unix(sudo:session): session closed for user root
Oct 11 04:40:33 compute-0 sudo[203108]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:40:33 compute-0 sudo[203108]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:40:33 compute-0 sudo[203108]: pam_unix(sudo:session): session closed for user root
Oct 11 04:40:33 compute-0 sudo[203134]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:40:33 compute-0 sudo[203134]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:40:33 compute-0 sudo[203134]: pam_unix(sudo:session): session closed for user root
Oct 11 04:40:33 compute-0 sudo[203182]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:40:33 compute-0 sudo[203182]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:40:33 compute-0 sudo[203182]: pam_unix(sudo:session): session closed for user root
Oct 11 04:40:33 compute-0 sudo[203231]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 04:40:33 compute-0 sudo[203231]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:40:34 compute-0 sudo[203341]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tggdfvbqntwocxakqfokwftehzljvdqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157633.0052342-554-226825809150906/AnsiballZ_copy.py'
Oct 11 04:40:34 compute-0 sudo[203341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:40:34 compute-0 python3.9[203345]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760157633.0052342-554-226825809150906/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:40:34 compute-0 sudo[203341]: pam_unix(sudo:session): session closed for user root
Oct 11 04:40:34 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v544: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:40:34 compute-0 sudo[203231]: pam_unix(sudo:session): session closed for user root
Oct 11 04:40:34 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:40:34 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:40:34 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:40:34 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:40:34 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:40:34 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:40:34 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev 935a9959-ba67-4e41-9615-b17d67dc5baf does not exist
Oct 11 04:40:34 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev 0ee7c39e-d28b-4da9-9f13-53812e6dba11 does not exist
Oct 11 04:40:34 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev 09fffd10-25e8-48e3-b938-01abd569c14a does not exist
Oct 11 04:40:34 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:40:34 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:40:34 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:40:34 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:40:34 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:40:34 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:40:34 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:40:34 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:40:34 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:40:34 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:40:34 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:40:34 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:40:34 compute-0 sudo[203410]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:40:34 compute-0 sudo[203410]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:40:34 compute-0 sudo[203410]: pam_unix(sudo:session): session closed for user root
Oct 11 04:40:34 compute-0 sudo[203461]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:40:34 compute-0 sudo[203461]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:40:34 compute-0 sudo[203461]: pam_unix(sudo:session): session closed for user root
Oct 11 04:40:34 compute-0 sudo[203510]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:40:34 compute-0 sudo[203510]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:40:34 compute-0 sudo[203510]: pam_unix(sudo:session): session closed for user root
Oct 11 04:40:34 compute-0 sudo[203537]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 04:40:34 compute-0 sudo[203537]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:40:34 compute-0 sudo[203612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvrnkvqmfglyfwqbpcfnnwrgzpavpgme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157634.467363-554-138962008718320/AnsiballZ_stat.py'
Oct 11 04:40:34 compute-0 sudo[203612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:40:34 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:40:35 compute-0 podman[203655]: 2025-10-11 04:40:35.152438088 +0000 UTC m=+0.050191801 container create b0579cdc11d715782ec036f4882e57e43b9df2ed9d982cc290a2e78e71cb39c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_lovelace, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:40:35 compute-0 python3.9[203614]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:40:35 compute-0 systemd[1]: Started libpod-conmon-b0579cdc11d715782ec036f4882e57e43b9df2ed9d982cc290a2e78e71cb39c4.scope.
Oct 11 04:40:35 compute-0 sudo[203612]: pam_unix(sudo:session): session closed for user root
Oct 11 04:40:35 compute-0 podman[203655]: 2025-10-11 04:40:35.126148453 +0000 UTC m=+0.023902216 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:40:35 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:40:35 compute-0 podman[203655]: 2025-10-11 04:40:35.255809872 +0000 UTC m=+0.153563595 container init b0579cdc11d715782ec036f4882e57e43b9df2ed9d982cc290a2e78e71cb39c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:40:35 compute-0 podman[203655]: 2025-10-11 04:40:35.267631736 +0000 UTC m=+0.165385409 container start b0579cdc11d715782ec036f4882e57e43b9df2ed9d982cc290a2e78e71cb39c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_lovelace, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 11 04:40:35 compute-0 podman[203655]: 2025-10-11 04:40:35.271454271 +0000 UTC m=+0.169207954 container attach b0579cdc11d715782ec036f4882e57e43b9df2ed9d982cc290a2e78e71cb39c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_lovelace, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:40:35 compute-0 thirsty_lovelace[203673]: 167 167
Oct 11 04:40:35 compute-0 systemd[1]: libpod-b0579cdc11d715782ec036f4882e57e43b9df2ed9d982cc290a2e78e71cb39c4.scope: Deactivated successfully.
Oct 11 04:40:35 compute-0 podman[203655]: 2025-10-11 04:40:35.277681206 +0000 UTC m=+0.175434939 container died b0579cdc11d715782ec036f4882e57e43b9df2ed9d982cc290a2e78e71cb39c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 11 04:40:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-f64eece073d0a8e2a48e029fbfc8241534baf4191815f908ffef766d1af88b5b-merged.mount: Deactivated successfully.
Oct 11 04:40:35 compute-0 podman[203655]: 2025-10-11 04:40:35.335017864 +0000 UTC m=+0.232771537 container remove b0579cdc11d715782ec036f4882e57e43b9df2ed9d982cc290a2e78e71cb39c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_lovelace, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 11 04:40:35 compute-0 systemd[1]: libpod-conmon-b0579cdc11d715782ec036f4882e57e43b9df2ed9d982cc290a2e78e71cb39c4.scope: Deactivated successfully.
Oct 11 04:40:35 compute-0 ceph-mon[74243]: pgmap v544: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:40:35 compute-0 podman[203750]: 2025-10-11 04:40:35.514148775 +0000 UTC m=+0.058392085 container create 3965c2b8fd7ea7fa54773679e87c3c268ba0e0196db7910688d6218d331791fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_knuth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 11 04:40:35 compute-0 systemd[1]: Started libpod-conmon-3965c2b8fd7ea7fa54773679e87c3c268ba0e0196db7910688d6218d331791fa.scope.
Oct 11 04:40:35 compute-0 podman[203750]: 2025-10-11 04:40:35.480740603 +0000 UTC m=+0.024983973 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:40:35 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:40:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/555039e21eda7c7ebbcc06e366dda020db7ec25cd27039719aabca45ddce8723/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:40:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/555039e21eda7c7ebbcc06e366dda020db7ec25cd27039719aabca45ddce8723/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:40:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/555039e21eda7c7ebbcc06e366dda020db7ec25cd27039719aabca45ddce8723/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:40:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/555039e21eda7c7ebbcc06e366dda020db7ec25cd27039719aabca45ddce8723/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:40:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/555039e21eda7c7ebbcc06e366dda020db7ec25cd27039719aabca45ddce8723/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:40:35 compute-0 podman[203750]: 2025-10-11 04:40:35.624966484 +0000 UTC m=+0.169209834 container init 3965c2b8fd7ea7fa54773679e87c3c268ba0e0196db7910688d6218d331791fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_knuth, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 04:40:35 compute-0 podman[203750]: 2025-10-11 04:40:35.640961472 +0000 UTC m=+0.185204822 container start 3965c2b8fd7ea7fa54773679e87c3c268ba0e0196db7910688d6218d331791fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_knuth, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 11 04:40:35 compute-0 podman[203750]: 2025-10-11 04:40:35.646278835 +0000 UTC m=+0.190522205 container attach 3965c2b8fd7ea7fa54773679e87c3c268ba0e0196db7910688d6218d331791fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_knuth, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:40:35 compute-0 sudo[203838]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqeyjmlthsoavabpuplnotdflgtrgnqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157634.467363-554-138962008718320/AnsiballZ_copy.py'
Oct 11 04:40:35 compute-0 sudo[203838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:40:35 compute-0 python3.9[203840]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760157634.467363-554-138962008718320/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:40:35 compute-0 sudo[203838]: pam_unix(sudo:session): session closed for user root
Oct 11 04:40:36 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v545: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:40:36 compute-0 sudo[204004]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glvnhvrjttjrivbouevmdsdarlwhlgpn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157636.0980616-554-69700648210185/AnsiballZ_stat.py'
Oct 11 04:40:36 compute-0 sudo[204004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:40:36 compute-0 laughing_knuth[203802]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:40:36 compute-0 laughing_knuth[203802]: --> relative data size: 1.0
Oct 11 04:40:36 compute-0 laughing_knuth[203802]: --> All data devices are unavailable
Oct 11 04:40:36 compute-0 systemd[1]: libpod-3965c2b8fd7ea7fa54773679e87c3c268ba0e0196db7910688d6218d331791fa.scope: Deactivated successfully.
Oct 11 04:40:36 compute-0 python3.9[204006]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:40:36 compute-0 podman[204017]: 2025-10-11 04:40:36.718531914 +0000 UTC m=+0.024369918 container died 3965c2b8fd7ea7fa54773679e87c3c268ba0e0196db7910688d6218d331791fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_knuth, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:40:36 compute-0 sudo[204004]: pam_unix(sudo:session): session closed for user root
Oct 11 04:40:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-555039e21eda7c7ebbcc06e366dda020db7ec25cd27039719aabca45ddce8723-merged.mount: Deactivated successfully.
Oct 11 04:40:36 compute-0 podman[204017]: 2025-10-11 04:40:36.770012616 +0000 UTC m=+0.075850590 container remove 3965c2b8fd7ea7fa54773679e87c3c268ba0e0196db7910688d6218d331791fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_knuth, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 11 04:40:36 compute-0 systemd[1]: libpod-conmon-3965c2b8fd7ea7fa54773679e87c3c268ba0e0196db7910688d6218d331791fa.scope: Deactivated successfully.
Oct 11 04:40:36 compute-0 sudo[203537]: pam_unix(sudo:session): session closed for user root
Oct 11 04:40:36 compute-0 sudo[204034]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:40:36 compute-0 sudo[204034]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:40:36 compute-0 sudo[204034]: pam_unix(sudo:session): session closed for user root
Oct 11 04:40:36 compute-0 sudo[204059]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:40:36 compute-0 sudo[204059]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:40:36 compute-0 sudo[204059]: pam_unix(sudo:session): session closed for user root
Oct 11 04:40:36 compute-0 sudo[204107]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:40:36 compute-0 sudo[204107]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:40:36 compute-0 sudo[204107]: pam_unix(sudo:session): session closed for user root
Oct 11 04:40:37 compute-0 sudo[204158]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -- lvm list --format json
Oct 11 04:40:37 compute-0 sudo[204158]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:40:37 compute-0 sudo[204274]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xczfaxhejqjjvpqazegwvvselpmdlkoq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157636.0980616-554-69700648210185/AnsiballZ_copy.py'
Oct 11 04:40:37 compute-0 sudo[204274]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:40:37 compute-0 podman[204204]: 2025-10-11 04:40:37.177769599 +0000 UTC m=+0.094451173 container health_status 086abfbe8dbe9e4742c5d0e8540a7653c1c0a5c00ecda3b6e948040a718938cb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, managed_by=edpm_ansible)
Oct 11 04:40:37 compute-0 python3.9[204282]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760157636.0980616-554-69700648210185/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:40:37 compute-0 sudo[204274]: pam_unix(sudo:session): session closed for user root
Oct 11 04:40:37 compute-0 podman[204325]: 2025-10-11 04:40:37.453935096 +0000 UTC m=+0.051087453 container create e97e3d5ba50cbd9608ce3907ffce3601cf75fc5486cfb380a8f6f289c033ea71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_einstein, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 11 04:40:37 compute-0 ceph-mon[74243]: pgmap v545: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:40:37 compute-0 systemd[1]: Started libpod-conmon-e97e3d5ba50cbd9608ce3907ffce3601cf75fc5486cfb380a8f6f289c033ea71.scope.
Oct 11 04:40:37 compute-0 podman[204325]: 2025-10-11 04:40:37.427319253 +0000 UTC m=+0.024471660 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:40:37 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:40:37 compute-0 podman[204325]: 2025-10-11 04:40:37.543123717 +0000 UTC m=+0.140276104 container init e97e3d5ba50cbd9608ce3907ffce3601cf75fc5486cfb380a8f6f289c033ea71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 11 04:40:37 compute-0 podman[204325]: 2025-10-11 04:40:37.55451987 +0000 UTC m=+0.151672277 container start e97e3d5ba50cbd9608ce3907ffce3601cf75fc5486cfb380a8f6f289c033ea71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:40:37 compute-0 podman[204325]: 2025-10-11 04:40:37.558554471 +0000 UTC m=+0.155706838 container attach e97e3d5ba50cbd9608ce3907ffce3601cf75fc5486cfb380a8f6f289c033ea71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_einstein, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:40:37 compute-0 vigilant_einstein[204367]: 167 167
Oct 11 04:40:37 compute-0 systemd[1]: libpod-e97e3d5ba50cbd9608ce3907ffce3601cf75fc5486cfb380a8f6f289c033ea71.scope: Deactivated successfully.
Oct 11 04:40:37 compute-0 podman[204397]: 2025-10-11 04:40:37.618267918 +0000 UTC m=+0.039012663 container died e97e3d5ba50cbd9608ce3907ffce3601cf75fc5486cfb380a8f6f289c033ea71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_einstein, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:40:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-3801103506c78774e7ae9f03249079d3116944efa084eb5e531083ecaa440617-merged.mount: Deactivated successfully.
Oct 11 04:40:37 compute-0 podman[204397]: 2025-10-11 04:40:37.653939116 +0000 UTC m=+0.074683851 container remove e97e3d5ba50cbd9608ce3907ffce3601cf75fc5486cfb380a8f6f289c033ea71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_einstein, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:40:37 compute-0 systemd[1]: libpod-conmon-e97e3d5ba50cbd9608ce3907ffce3601cf75fc5486cfb380a8f6f289c033ea71.scope: Deactivated successfully.
Oct 11 04:40:37 compute-0 podman[204484]: 2025-10-11 04:40:37.844119612 +0000 UTC m=+0.052346135 container create 0dca87829712465d4b402c745e130a2cb04062366de40b3794467c0291844558 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_snyder, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Oct 11 04:40:37 compute-0 systemd[1]: Started libpod-conmon-0dca87829712465d4b402c745e130a2cb04062366de40b3794467c0291844558.scope.
Oct 11 04:40:37 compute-0 podman[204484]: 2025-10-11 04:40:37.821028337 +0000 UTC m=+0.029254950 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:40:37 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:40:37 compute-0 sudo[204533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpzvprwuuuaiyvjzwhegkxjtueltpnpi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157637.5427372-554-203240237155636/AnsiballZ_stat.py'
Oct 11 04:40:37 compute-0 sudo[204533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:40:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baf3d8f2e2352ef5079804f1bad65111ad784bf4013cafdda4df151564e00a43/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:40:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baf3d8f2e2352ef5079804f1bad65111ad784bf4013cafdda4df151564e00a43/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:40:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baf3d8f2e2352ef5079804f1bad65111ad784bf4013cafdda4df151564e00a43/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:40:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baf3d8f2e2352ef5079804f1bad65111ad784bf4013cafdda4df151564e00a43/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:40:37 compute-0 podman[204484]: 2025-10-11 04:40:37.948835009 +0000 UTC m=+0.157061612 container init 0dca87829712465d4b402c745e130a2cb04062366de40b3794467c0291844558 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_snyder, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 11 04:40:37 compute-0 podman[204484]: 2025-10-11 04:40:37.957113485 +0000 UTC m=+0.165340008 container start 0dca87829712465d4b402c745e130a2cb04062366de40b3794467c0291844558 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_snyder, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 11 04:40:37 compute-0 podman[204484]: 2025-10-11 04:40:37.960682314 +0000 UTC m=+0.168908937 container attach 0dca87829712465d4b402c745e130a2cb04062366de40b3794467c0291844558 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_snyder, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 11 04:40:38 compute-0 python3.9[204538]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:40:38 compute-0 sudo[204533]: pam_unix(sudo:session): session closed for user root
Oct 11 04:40:38 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v546: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:40:38 compute-0 sudo[204663]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrswnitemflvrfcpmjxguxvqlqgmcyft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157637.5427372-554-203240237155636/AnsiballZ_copy.py'
Oct 11 04:40:38 compute-0 sudo[204663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:40:38 compute-0 happy_snyder[204534]: {
Oct 11 04:40:38 compute-0 happy_snyder[204534]:     "0": [
Oct 11 04:40:38 compute-0 happy_snyder[204534]:         {
Oct 11 04:40:38 compute-0 happy_snyder[204534]:             "devices": [
Oct 11 04:40:38 compute-0 happy_snyder[204534]:                 "/dev/loop3"
Oct 11 04:40:38 compute-0 happy_snyder[204534]:             ],
Oct 11 04:40:38 compute-0 happy_snyder[204534]:             "lv_name": "ceph_lv0",
Oct 11 04:40:38 compute-0 happy_snyder[204534]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:40:38 compute-0 happy_snyder[204534]:             "lv_size": "21470642176",
Oct 11 04:40:38 compute-0 happy_snyder[204534]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29ed28f5-c2da-4c6f-bb64-dc7391248f4a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:40:38 compute-0 happy_snyder[204534]:             "lv_uuid": "SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf",
Oct 11 04:40:38 compute-0 happy_snyder[204534]:             "name": "ceph_lv0",
Oct 11 04:40:38 compute-0 happy_snyder[204534]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:40:38 compute-0 happy_snyder[204534]:             "tags": {
Oct 11 04:40:38 compute-0 happy_snyder[204534]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:40:38 compute-0 happy_snyder[204534]:                 "ceph.block_uuid": "SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf",
Oct 11 04:40:38 compute-0 happy_snyder[204534]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:40:38 compute-0 happy_snyder[204534]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:40:38 compute-0 happy_snyder[204534]:                 "ceph.cluster_name": "ceph",
Oct 11 04:40:38 compute-0 happy_snyder[204534]:                 "ceph.crush_device_class": "",
Oct 11 04:40:38 compute-0 happy_snyder[204534]:                 "ceph.encrypted": "0",
Oct 11 04:40:38 compute-0 happy_snyder[204534]:                 "ceph.osd_fsid": "29ed28f5-c2da-4c6f-bb64-dc7391248f4a",
Oct 11 04:40:38 compute-0 happy_snyder[204534]:                 "ceph.osd_id": "0",
Oct 11 04:40:38 compute-0 happy_snyder[204534]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:40:38 compute-0 happy_snyder[204534]:                 "ceph.type": "block",
Oct 11 04:40:38 compute-0 happy_snyder[204534]:                 "ceph.vdo": "0"
Oct 11 04:40:38 compute-0 happy_snyder[204534]:             },
Oct 11 04:40:38 compute-0 happy_snyder[204534]:             "type": "block",
Oct 11 04:40:38 compute-0 happy_snyder[204534]:             "vg_name": "ceph_vg0"
Oct 11 04:40:38 compute-0 happy_snyder[204534]:         }
Oct 11 04:40:38 compute-0 happy_snyder[204534]:     ],
Oct 11 04:40:38 compute-0 happy_snyder[204534]:     "1": [
Oct 11 04:40:38 compute-0 happy_snyder[204534]:         {
Oct 11 04:40:38 compute-0 happy_snyder[204534]:             "devices": [
Oct 11 04:40:38 compute-0 happy_snyder[204534]:                 "/dev/loop4"
Oct 11 04:40:38 compute-0 happy_snyder[204534]:             ],
Oct 11 04:40:38 compute-0 happy_snyder[204534]:             "lv_name": "ceph_lv1",
Oct 11 04:40:38 compute-0 happy_snyder[204534]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:40:38 compute-0 happy_snyder[204534]:             "lv_size": "21470642176",
Oct 11 04:40:38 compute-0 happy_snyder[204534]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=235849fc-4683-43e5-9b6a-a0d6f8d1cee8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:40:38 compute-0 happy_snyder[204534]:             "lv_uuid": "N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol",
Oct 11 04:40:38 compute-0 happy_snyder[204534]:             "name": "ceph_lv1",
Oct 11 04:40:38 compute-0 happy_snyder[204534]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:40:38 compute-0 happy_snyder[204534]:             "tags": {
Oct 11 04:40:38 compute-0 happy_snyder[204534]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:40:38 compute-0 happy_snyder[204534]:                 "ceph.block_uuid": "N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol",
Oct 11 04:40:38 compute-0 happy_snyder[204534]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:40:38 compute-0 happy_snyder[204534]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:40:38 compute-0 happy_snyder[204534]:                 "ceph.cluster_name": "ceph",
Oct 11 04:40:38 compute-0 happy_snyder[204534]:                 "ceph.crush_device_class": "",
Oct 11 04:40:38 compute-0 happy_snyder[204534]:                 "ceph.encrypted": "0",
Oct 11 04:40:38 compute-0 happy_snyder[204534]:                 "ceph.osd_fsid": "235849fc-4683-43e5-9b6a-a0d6f8d1cee8",
Oct 11 04:40:38 compute-0 happy_snyder[204534]:                 "ceph.osd_id": "1",
Oct 11 04:40:38 compute-0 happy_snyder[204534]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:40:38 compute-0 happy_snyder[204534]:                 "ceph.type": "block",
Oct 11 04:40:38 compute-0 happy_snyder[204534]:                 "ceph.vdo": "0"
Oct 11 04:40:38 compute-0 happy_snyder[204534]:             },
Oct 11 04:40:38 compute-0 happy_snyder[204534]:             "type": "block",
Oct 11 04:40:38 compute-0 happy_snyder[204534]:             "vg_name": "ceph_vg1"
Oct 11 04:40:38 compute-0 happy_snyder[204534]:         }
Oct 11 04:40:38 compute-0 happy_snyder[204534]:     ],
Oct 11 04:40:38 compute-0 happy_snyder[204534]:     "2": [
Oct 11 04:40:38 compute-0 happy_snyder[204534]:         {
Oct 11 04:40:38 compute-0 happy_snyder[204534]:             "devices": [
Oct 11 04:40:38 compute-0 happy_snyder[204534]:                 "/dev/loop5"
Oct 11 04:40:38 compute-0 happy_snyder[204534]:             ],
Oct 11 04:40:38 compute-0 happy_snyder[204534]:             "lv_name": "ceph_lv2",
Oct 11 04:40:38 compute-0 happy_snyder[204534]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:40:38 compute-0 happy_snyder[204534]:             "lv_size": "21470642176",
Oct 11 04:40:38 compute-0 happy_snyder[204534]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=30486846-2cc7-4787-8e36-0fb42ef328c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:40:38 compute-0 happy_snyder[204534]:             "lv_uuid": "HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a",
Oct 11 04:40:38 compute-0 happy_snyder[204534]:             "name": "ceph_lv2",
Oct 11 04:40:38 compute-0 happy_snyder[204534]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:40:38 compute-0 happy_snyder[204534]:             "tags": {
Oct 11 04:40:38 compute-0 happy_snyder[204534]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:40:38 compute-0 happy_snyder[204534]:                 "ceph.block_uuid": "HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a",
Oct 11 04:40:38 compute-0 happy_snyder[204534]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:40:38 compute-0 happy_snyder[204534]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:40:38 compute-0 happy_snyder[204534]:                 "ceph.cluster_name": "ceph",
Oct 11 04:40:38 compute-0 happy_snyder[204534]:                 "ceph.crush_device_class": "",
Oct 11 04:40:38 compute-0 happy_snyder[204534]:                 "ceph.encrypted": "0",
Oct 11 04:40:38 compute-0 happy_snyder[204534]:                 "ceph.osd_fsid": "30486846-2cc7-4787-8e36-0fb42ef328c5",
Oct 11 04:40:38 compute-0 happy_snyder[204534]:                 "ceph.osd_id": "2",
Oct 11 04:40:38 compute-0 happy_snyder[204534]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:40:38 compute-0 happy_snyder[204534]:                 "ceph.type": "block",
Oct 11 04:40:38 compute-0 happy_snyder[204534]:                 "ceph.vdo": "0"
Oct 11 04:40:38 compute-0 happy_snyder[204534]:             },
Oct 11 04:40:38 compute-0 happy_snyder[204534]:             "type": "block",
Oct 11 04:40:38 compute-0 happy_snyder[204534]:             "vg_name": "ceph_vg2"
Oct 11 04:40:38 compute-0 happy_snyder[204534]:         }
Oct 11 04:40:38 compute-0 happy_snyder[204534]:     ]
Oct 11 04:40:38 compute-0 happy_snyder[204534]: }
Oct 11 04:40:38 compute-0 systemd[1]: libpod-0dca87829712465d4b402c745e130a2cb04062366de40b3794467c0291844558.scope: Deactivated successfully.
Oct 11 04:40:38 compute-0 podman[204484]: 2025-10-11 04:40:38.666445747 +0000 UTC m=+0.874672340 container died 0dca87829712465d4b402c745e130a2cb04062366de40b3794467c0291844558 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_snyder, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:40:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-baf3d8f2e2352ef5079804f1bad65111ad784bf4013cafdda4df151564e00a43-merged.mount: Deactivated successfully.
Oct 11 04:40:38 compute-0 python3.9[204665]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760157637.5427372-554-203240237155636/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:40:38 compute-0 podman[204484]: 2025-10-11 04:40:38.727174879 +0000 UTC m=+0.935401402 container remove 0dca87829712465d4b402c745e130a2cb04062366de40b3794467c0291844558 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_snyder, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 11 04:40:38 compute-0 systemd[1]: libpod-conmon-0dca87829712465d4b402c745e130a2cb04062366de40b3794467c0291844558.scope: Deactivated successfully.
Oct 11 04:40:38 compute-0 sudo[204663]: pam_unix(sudo:session): session closed for user root
Oct 11 04:40:38 compute-0 sudo[204158]: pam_unix(sudo:session): session closed for user root
Oct 11 04:40:38 compute-0 sudo[204684]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:40:38 compute-0 sudo[204684]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:40:38 compute-0 sudo[204684]: pam_unix(sudo:session): session closed for user root
Oct 11 04:40:38 compute-0 sudo[204732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:40:38 compute-0 sudo[204732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:40:38 compute-0 sudo[204732]: pam_unix(sudo:session): session closed for user root
Oct 11 04:40:38 compute-0 sudo[204780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:40:38 compute-0 sudo[204780]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:40:38 compute-0 sudo[204780]: pam_unix(sudo:session): session closed for user root
Oct 11 04:40:39 compute-0 sudo[204828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -- raw list --format json
Oct 11 04:40:39 compute-0 sudo[204828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:40:39 compute-0 sudo[204959]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifpooygxfzrpjgzrzxenupiftcycnqub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157638.9416656-554-238638732804049/AnsiballZ_stat.py'
Oct 11 04:40:39 compute-0 sudo[204959]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:40:39 compute-0 podman[204976]: 2025-10-11 04:40:39.40994276 +0000 UTC m=+0.040964251 container create 4bb81ba0387349b7bbf44790eb0ac9fd7b3ac810b3fc8785596116350120dce6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_ardinghelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef)
Oct 11 04:40:39 compute-0 systemd[1]: Started libpod-conmon-4bb81ba0387349b7bbf44790eb0ac9fd7b3ac810b3fc8785596116350120dce6.scope.
Oct 11 04:40:39 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:40:39 compute-0 podman[204976]: 2025-10-11 04:40:39.488397134 +0000 UTC m=+0.119418655 container init 4bb81ba0387349b7bbf44790eb0ac9fd7b3ac810b3fc8785596116350120dce6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_ardinghelli, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 11 04:40:39 compute-0 podman[204976]: 2025-10-11 04:40:39.395179022 +0000 UTC m=+0.026200553 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:40:39 compute-0 podman[204976]: 2025-10-11 04:40:39.501442108 +0000 UTC m=+0.132463639 container start 4bb81ba0387349b7bbf44790eb0ac9fd7b3ac810b3fc8785596116350120dce6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_ardinghelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 11 04:40:39 compute-0 python3.9[204966]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:40:39 compute-0 elated_ardinghelli[204993]: 167 167
Oct 11 04:40:39 compute-0 systemd[1]: libpod-4bb81ba0387349b7bbf44790eb0ac9fd7b3ac810b3fc8785596116350120dce6.scope: Deactivated successfully.
Oct 11 04:40:39 compute-0 podman[204976]: 2025-10-11 04:40:39.507402087 +0000 UTC m=+0.138423588 container attach 4bb81ba0387349b7bbf44790eb0ac9fd7b3ac810b3fc8785596116350120dce6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_ardinghelli, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True)
Oct 11 04:40:39 compute-0 podman[204976]: 2025-10-11 04:40:39.507705934 +0000 UTC m=+0.138727435 container died 4bb81ba0387349b7bbf44790eb0ac9fd7b3ac810b3fc8785596116350120dce6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_ardinghelli, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 11 04:40:39 compute-0 ceph-mon[74243]: pgmap v546: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:40:39 compute-0 sudo[204959]: pam_unix(sudo:session): session closed for user root
Oct 11 04:40:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-4f65e2e91b4acda2cff80a520250633c1ba32a6c3846861ed4a06252b8e78d67-merged.mount: Deactivated successfully.
Oct 11 04:40:39 compute-0 podman[204976]: 2025-10-11 04:40:39.548606203 +0000 UTC m=+0.179627714 container remove 4bb81ba0387349b7bbf44790eb0ac9fd7b3ac810b3fc8785596116350120dce6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_ardinghelli, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 11 04:40:39 compute-0 systemd[1]: libpod-conmon-4bb81ba0387349b7bbf44790eb0ac9fd7b3ac810b3fc8785596116350120dce6.scope: Deactivated successfully.
Oct 11 04:40:39 compute-0 podman[205062]: 2025-10-11 04:40:39.712882913 +0000 UTC m=+0.039340110 container create 75acb599a84bef08a86de3ded5d0bfd6f8efe348f91f44d0bbce6df0f06358e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_tharp, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 11 04:40:39 compute-0 systemd[1]: Started libpod-conmon-75acb599a84bef08a86de3ded5d0bfd6f8efe348f91f44d0bbce6df0f06358e0.scope.
Oct 11 04:40:39 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:40:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2df12497dc79abc3fab5ae46e58ffab5bc232975cf230ee559aab932b88d654/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:40:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2df12497dc79abc3fab5ae46e58ffab5bc232975cf230ee559aab932b88d654/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:40:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2df12497dc79abc3fab5ae46e58ffab5bc232975cf230ee559aab932b88d654/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:40:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2df12497dc79abc3fab5ae46e58ffab5bc232975cf230ee559aab932b88d654/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:40:39 compute-0 podman[205062]: 2025-10-11 04:40:39.696538386 +0000 UTC m=+0.022995603 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:40:39 compute-0 podman[205062]: 2025-10-11 04:40:39.793009488 +0000 UTC m=+0.119466705 container init 75acb599a84bef08a86de3ded5d0bfd6f8efe348f91f44d0bbce6df0f06358e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_tharp, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 11 04:40:39 compute-0 podman[205062]: 2025-10-11 04:40:39.802808872 +0000 UTC m=+0.129266069 container start 75acb599a84bef08a86de3ded5d0bfd6f8efe348f91f44d0bbce6df0f06358e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_tharp, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:40:39 compute-0 podman[205062]: 2025-10-11 04:40:39.806820012 +0000 UTC m=+0.133277239 container attach 75acb599a84bef08a86de3ded5d0bfd6f8efe348f91f44d0bbce6df0f06358e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_tharp, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 11 04:40:39 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:40:39 compute-0 sudo[205158]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ftnwgvyjptrezkqtjwdydemubjhflryx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157638.9416656-554-238638732804049/AnsiballZ_copy.py'
Oct 11 04:40:39 compute-0 sudo[205158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:40:40 compute-0 python3.9[205160]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760157638.9416656-554-238638732804049/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:40:40 compute-0 sudo[205158]: pam_unix(sudo:session): session closed for user root
Oct 11 04:40:40 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v547: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:40:40 compute-0 sudo[205327]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sddcojncsffrgwnecolyneeseaxgvzmu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157640.340164-554-130087630706080/AnsiballZ_stat.py'
Oct 11 04:40:40 compute-0 sudo[205327]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:40:40 compute-0 interesting_tharp[205103]: {
Oct 11 04:40:40 compute-0 interesting_tharp[205103]:     "235849fc-4683-43e5-9b6a-a0d6f8d1cee8": {
Oct 11 04:40:40 compute-0 interesting_tharp[205103]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:40:40 compute-0 interesting_tharp[205103]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:40:40 compute-0 interesting_tharp[205103]:         "osd_id": 1,
Oct 11 04:40:40 compute-0 interesting_tharp[205103]:         "osd_uuid": "235849fc-4683-43e5-9b6a-a0d6f8d1cee8",
Oct 11 04:40:40 compute-0 interesting_tharp[205103]:         "type": "bluestore"
Oct 11 04:40:40 compute-0 interesting_tharp[205103]:     },
Oct 11 04:40:40 compute-0 interesting_tharp[205103]:     "29ed28f5-c2da-4c6f-bb64-dc7391248f4a": {
Oct 11 04:40:40 compute-0 interesting_tharp[205103]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:40:40 compute-0 interesting_tharp[205103]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:40:40 compute-0 interesting_tharp[205103]:         "osd_id": 0,
Oct 11 04:40:40 compute-0 interesting_tharp[205103]:         "osd_uuid": "29ed28f5-c2da-4c6f-bb64-dc7391248f4a",
Oct 11 04:40:40 compute-0 interesting_tharp[205103]:         "type": "bluestore"
Oct 11 04:40:40 compute-0 interesting_tharp[205103]:     },
Oct 11 04:40:40 compute-0 interesting_tharp[205103]:     "30486846-2cc7-4787-8e36-0fb42ef328c5": {
Oct 11 04:40:40 compute-0 interesting_tharp[205103]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:40:40 compute-0 interesting_tharp[205103]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:40:40 compute-0 interesting_tharp[205103]:         "osd_id": 2,
Oct 11 04:40:40 compute-0 interesting_tharp[205103]:         "osd_uuid": "30486846-2cc7-4787-8e36-0fb42ef328c5",
Oct 11 04:40:40 compute-0 interesting_tharp[205103]:         "type": "bluestore"
Oct 11 04:40:40 compute-0 interesting_tharp[205103]:     }
Oct 11 04:40:40 compute-0 interesting_tharp[205103]: }
Oct 11 04:40:40 compute-0 systemd[1]: libpod-75acb599a84bef08a86de3ded5d0bfd6f8efe348f91f44d0bbce6df0f06358e0.scope: Deactivated successfully.
Oct 11 04:40:40 compute-0 podman[205062]: 2025-10-11 04:40:40.806062804 +0000 UTC m=+1.132520031 container died 75acb599a84bef08a86de3ded5d0bfd6f8efe348f91f44d0bbce6df0f06358e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:40:40 compute-0 systemd[1]: libpod-75acb599a84bef08a86de3ded5d0bfd6f8efe348f91f44d0bbce6df0f06358e0.scope: Consumed 1.004s CPU time.
Oct 11 04:40:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-b2df12497dc79abc3fab5ae46e58ffab5bc232975cf230ee559aab932b88d654-merged.mount: Deactivated successfully.
Oct 11 04:40:40 compute-0 python3.9[205329]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:40:40 compute-0 podman[205062]: 2025-10-11 04:40:40.85811097 +0000 UTC m=+1.184568167 container remove 75acb599a84bef08a86de3ded5d0bfd6f8efe348f91f44d0bbce6df0f06358e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_tharp, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 11 04:40:40 compute-0 systemd[1]: libpod-conmon-75acb599a84bef08a86de3ded5d0bfd6f8efe348f91f44d0bbce6df0f06358e0.scope: Deactivated successfully.
Oct 11 04:40:40 compute-0 sudo[205327]: pam_unix(sudo:session): session closed for user root
Oct 11 04:40:40 compute-0 sudo[204828]: pam_unix(sudo:session): session closed for user root
Oct 11 04:40:40 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:40:40 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:40:40 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:40:40 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:40:40 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev e2267983-b977-42e3-bcf4-f88b9a6fcdc4 does not exist
Oct 11 04:40:40 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev 7e98b2e0-e06f-4883-80fd-f7085145b963 does not exist
Oct 11 04:40:40 compute-0 sudo[205354]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:40:40 compute-0 sudo[205354]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:40:40 compute-0 sudo[205354]: pam_unix(sudo:session): session closed for user root
Oct 11 04:40:41 compute-0 sudo[205402]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 04:40:41 compute-0 sudo[205402]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:40:41 compute-0 sudo[205402]: pam_unix(sudo:session): session closed for user root
Oct 11 04:40:41 compute-0 sudo[205524]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nircoxnvzcsseqmepfshzrybgdidxgaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157640.340164-554-130087630706080/AnsiballZ_copy.py'
Oct 11 04:40:41 compute-0 sudo[205524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:40:41 compute-0 ceph-mon[74243]: pgmap v547: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:40:41 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:40:41 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:40:41 compute-0 python3.9[205526]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760157640.340164-554-130087630706080/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:40:41 compute-0 sudo[205524]: pam_unix(sudo:session): session closed for user root
Oct 11 04:40:41 compute-0 sshd-session[205527]: Connection closed by 221.159.21.170 port 51954
Oct 11 04:40:42 compute-0 sudo[205679]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iinwuihelsrrhvxprwnpskfrexnuschw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157641.7734964-667-74714288962964/AnsiballZ_command.py'
Oct 11 04:40:42 compute-0 sudo[205679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:40:42 compute-0 python3.9[205681]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Oct 11 04:40:42 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v548: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:40:42 compute-0 sudo[205679]: pam_unix(sudo:session): session closed for user root
Oct 11 04:40:42 compute-0 podman[205682]: 2025-10-11 04:40:42.461168325 +0000 UTC m=+0.105206210 container health_status 7d738271425699243ade5f883e5c9759d51b96fd0ce562173990a66d2fbaffdf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 11 04:40:42 compute-0 sudo[205851]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kleejkiezkmldnqrrrnlejhsoayhdwzq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157642.6541076-676-84863212557587/AnsiballZ_file.py'
Oct 11 04:40:42 compute-0 sudo[205851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:40:43 compute-0 python3.9[205853]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:40:43 compute-0 sudo[205851]: pam_unix(sudo:session): session closed for user root
Oct 11 04:40:43 compute-0 ceph-mon[74243]: pgmap v548: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:40:43 compute-0 sudo[206003]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvbtvyhcdrbooahamltdgthikafvgjdv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157643.4110951-676-269583621412277/AnsiballZ_file.py'
Oct 11 04:40:43 compute-0 sudo[206003]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:40:43 compute-0 python3.9[206005]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:40:43 compute-0 sudo[206003]: pam_unix(sudo:session): session closed for user root
Oct 11 04:40:44 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v549: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:40:44 compute-0 sudo[206155]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hohqrvbommvtmuhwosqoxglryxrwqnpk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157644.0900707-676-175259534515929/AnsiballZ_file.py'
Oct 11 04:40:44 compute-0 sudo[206155]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:40:44 compute-0 python3.9[206157]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:40:44 compute-0 sudo[206155]: pam_unix(sudo:session): session closed for user root
Oct 11 04:40:44 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:40:45 compute-0 sudo[206307]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkfjzkxiorpjbntrbubdchnoeepyozzz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157644.8012254-676-103192850783586/AnsiballZ_file.py'
Oct 11 04:40:45 compute-0 sudo[206307]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:40:45 compute-0 python3.9[206309]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:40:45 compute-0 sudo[206307]: pam_unix(sudo:session): session closed for user root
Oct 11 04:40:45 compute-0 ceph-mon[74243]: pgmap v549: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:40:45 compute-0 sudo[206459]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvqgnwyzbdwfspjhhrfrjocgungiorrt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157645.4146338-676-219203569026554/AnsiballZ_file.py'
Oct 11 04:40:45 compute-0 sudo[206459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:40:45 compute-0 unix_chkpwd[206462]: password check failed for user (root)
Oct 11 04:40:45 compute-0 sshd-session[205627]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=221.159.21.170  user=root
Oct 11 04:40:45 compute-0 python3.9[206461]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:40:45 compute-0 sudo[206459]: pam_unix(sudo:session): session closed for user root
Oct 11 04:40:46 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v550: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:40:46 compute-0 sudo[206612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-caqsbdkzlsggozhbpyormgzfwjibefhi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157646.0602133-676-74672558927278/AnsiballZ_file.py'
Oct 11 04:40:46 compute-0 sudo[206612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:40:46 compute-0 python3.9[206614]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:40:46 compute-0 sudo[206612]: pam_unix(sudo:session): session closed for user root
Oct 11 04:40:47 compute-0 sudo[206764]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgfgeejamofvbbaiphccaktdfoahdchz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157646.7952814-676-186900503813431/AnsiballZ_file.py'
Oct 11 04:40:47 compute-0 sudo[206764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:40:47 compute-0 python3.9[206766]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:40:47 compute-0 sudo[206764]: pam_unix(sudo:session): session closed for user root
Oct 11 04:40:47 compute-0 ceph-mon[74243]: pgmap v550: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:40:47 compute-0 sudo[206916]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvikvfbigcjfpyubcnimfvzdhrxvrjno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157647.5280888-676-30572230715894/AnsiballZ_file.py'
Oct 11 04:40:47 compute-0 sudo[206916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:40:48 compute-0 sshd-session[205627]: Failed password for root from 221.159.21.170 port 52044 ssh2
Oct 11 04:40:48 compute-0 python3.9[206918]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:40:48 compute-0 sudo[206916]: pam_unix(sudo:session): session closed for user root
Oct 11 04:40:48 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v551: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:40:48 compute-0 sudo[207068]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtqszwlqbhypkkmhnisrtkcmnreoashm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157648.2411258-676-141875957650632/AnsiballZ_file.py'
Oct 11 04:40:48 compute-0 sudo[207068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:40:48 compute-0 python3.9[207070]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:40:48 compute-0 sudo[207068]: pam_unix(sudo:session): session closed for user root
Oct 11 04:40:49 compute-0 sudo[207220]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovqyofeiwjmdtrwxffgrudxxjwirlfnm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157648.9935598-676-216844411067417/AnsiballZ_file.py'
Oct 11 04:40:49 compute-0 sudo[207220]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:40:49 compute-0 python3.9[207222]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:40:49 compute-0 ceph-mon[74243]: pgmap v551: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:40:49 compute-0 sudo[207220]: pam_unix(sudo:session): session closed for user root
Oct 11 04:40:49 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:40:50 compute-0 sudo[207372]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eoylciqmgswvyfqwnqgtlcpizkntcbew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157649.7278874-676-45541902460041/AnsiballZ_file.py'
Oct 11 04:40:50 compute-0 sudo[207372]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:40:50 compute-0 sshd-session[205627]: Connection closed by authenticating user root 221.159.21.170 port 52044 [preauth]
Oct 11 04:40:50 compute-0 python3.9[207374]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:40:50 compute-0 sudo[207372]: pam_unix(sudo:session): session closed for user root
Oct 11 04:40:50 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v552: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:40:50 compute-0 sudo[207526]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkviwwnbfjpfyasuaaxjaixkcvxyztue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157650.4056952-676-235908958151607/AnsiballZ_file.py'
Oct 11 04:40:50 compute-0 sudo[207526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:40:51 compute-0 python3.9[207528]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:40:51 compute-0 sudo[207526]: pam_unix(sudo:session): session closed for user root
Oct 11 04:40:51 compute-0 sudo[207678]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgwbfcgeptyshaterjgwsbeetmolkxfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157651.2037258-676-23173438377443/AnsiballZ_file.py'
Oct 11 04:40:51 compute-0 sudo[207678]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:40:51 compute-0 ceph-mon[74243]: pgmap v552: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:40:51 compute-0 sshd-session[207422]: Invalid user admin from 221.159.21.170 port 53662
Oct 11 04:40:51 compute-0 python3.9[207680]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:40:51 compute-0 sudo[207678]: pam_unix(sudo:session): session closed for user root
Oct 11 04:40:51 compute-0 sshd-session[207422]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 04:40:51 compute-0 sshd-session[207422]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=221.159.21.170
Oct 11 04:40:52 compute-0 sudo[207830]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibyklfihxignvlmxpaiglgnxzsmesmbe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157651.946346-676-141190256714500/AnsiballZ_file.py'
Oct 11 04:40:52 compute-0 sudo[207830]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:40:52 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v553: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:40:52 compute-0 python3.9[207832]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:40:52 compute-0 sudo[207830]: pam_unix(sudo:session): session closed for user root
Oct 11 04:40:53 compute-0 sudo[207982]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwhhtttumukjrimcoxoigbwvsxmpsseu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157652.754136-775-69356404764740/AnsiballZ_stat.py'
Oct 11 04:40:53 compute-0 sudo[207982]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:40:53 compute-0 python3.9[207984]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:40:53 compute-0 sudo[207982]: pam_unix(sudo:session): session closed for user root
Oct 11 04:40:53 compute-0 ceph-mon[74243]: pgmap v553: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:40:53 compute-0 sudo[208105]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oelmdunznrbzbtkebxicvmvltkdnhhyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157652.754136-775-69356404764740/AnsiballZ_copy.py'
Oct 11 04:40:53 compute-0 sudo[208105]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:40:54 compute-0 python3.9[208107]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760157652.754136-775-69356404764740/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:40:54 compute-0 sudo[208105]: pam_unix(sudo:session): session closed for user root
Oct 11 04:40:54 compute-0 sshd-session[207422]: Failed password for invalid user admin from 221.159.21.170 port 53662 ssh2
Oct 11 04:40:54 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v554: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:40:54 compute-0 sudo[208257]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-roxmvvjjkdnygdsginftiwqeegijosvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157654.2561717-775-271335585692753/AnsiballZ_stat.py'
Oct 11 04:40:54 compute-0 sudo[208257]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:40:54 compute-0 python3.9[208259]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:40:54 compute-0 sudo[208257]: pam_unix(sudo:session): session closed for user root
Oct 11 04:40:54 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:40:55 compute-0 sshd-session[207422]: Connection closed by invalid user admin 221.159.21.170 port 53662 [preauth]
Oct 11 04:40:55 compute-0 sudo[208380]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcbtpckdizyqasdtsqakjlhxixayvaga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157654.2561717-775-271335585692753/AnsiballZ_copy.py'
Oct 11 04:40:55 compute-0 sudo[208380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:40:55 compute-0 python3.9[208382]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760157654.2561717-775-271335585692753/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:40:55 compute-0 sudo[208380]: pam_unix(sudo:session): session closed for user root
Oct 11 04:40:55 compute-0 ceph-mon[74243]: pgmap v554: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:40:56 compute-0 sudo[208534]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljcnbykfnouyxnapqnnicgrehgjrjjnt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157655.705009-775-196180960742897/AnsiballZ_stat.py'
Oct 11 04:40:56 compute-0 sudo[208534]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:40:56 compute-0 ceph-mgr[74542]: [balancer INFO root] Optimize plan auto_2025-10-11_04:40:56
Oct 11 04:40:56 compute-0 ceph-mgr[74542]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:40:56 compute-0 ceph-mgr[74542]: [balancer INFO root] do_upmap
Oct 11 04:40:56 compute-0 ceph-mgr[74542]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.meta', 'volumes', '.rgw.root', 'images', 'cephfs.cephfs.data', '.mgr', 'backups', 'default.rgw.control', 'vms', 'default.rgw.log']
Oct 11 04:40:56 compute-0 ceph-mgr[74542]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:40:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:40:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:40:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:40:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:40:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:40:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:40:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:40:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:40:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:40:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:40:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:40:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:40:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:40:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:40:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:40:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:40:56 compute-0 python3.9[208536]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:40:56 compute-0 sudo[208534]: pam_unix(sudo:session): session closed for user root
Oct 11 04:40:56 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v555: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:40:56 compute-0 ceph-mon[74243]: pgmap v555: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:40:56 compute-0 sudo[208657]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exrpcvrpkwltffqutbfewnkwleassvjc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157655.705009-775-196180960742897/AnsiballZ_copy.py'
Oct 11 04:40:56 compute-0 sudo[208657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:40:56 compute-0 python3.9[208659]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760157655.705009-775-196180960742897/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:40:56 compute-0 sudo[208657]: pam_unix(sudo:session): session closed for user root
Oct 11 04:40:57 compute-0 sudo[208809]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwpwhbqycwokjazujjwfaqeiwfsgkmqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157657.1078374-775-243711797774252/AnsiballZ_stat.py'
Oct 11 04:40:57 compute-0 sudo[208809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:40:57 compute-0 python3.9[208811]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:40:57 compute-0 sudo[208809]: pam_unix(sudo:session): session closed for user root
Oct 11 04:40:58 compute-0 sudo[208932]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvpuxjjabmjpdizwvhjdhdlkwkfkevlu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157657.1078374-775-243711797774252/AnsiballZ_copy.py'
Oct 11 04:40:58 compute-0 sudo[208932]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:40:58 compute-0 python3.9[208934]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760157657.1078374-775-243711797774252/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:40:58 compute-0 sudo[208932]: pam_unix(sudo:session): session closed for user root
Oct 11 04:40:58 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v556: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:40:58 compute-0 sudo[209084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yezuvcplnebrkjbujqkvyqnqvkfimlce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157658.5443707-775-21378443715162/AnsiballZ_stat.py'
Oct 11 04:40:58 compute-0 sudo[209084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:40:59 compute-0 unix_chkpwd[209088]: password check failed for user (root)
Oct 11 04:40:59 compute-0 sshd-session[208383]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=221.159.21.170  user=root
Oct 11 04:40:59 compute-0 python3.9[209086]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:40:59 compute-0 sudo[209084]: pam_unix(sudo:session): session closed for user root
Oct 11 04:40:59 compute-0 ceph-mon[74243]: pgmap v556: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:40:59 compute-0 sudo[209209]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtzlejdamdfvwqtymqousfbbutbqpccy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157658.5443707-775-21378443715162/AnsiballZ_copy.py'
Oct 11 04:40:59 compute-0 sudo[209209]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:40:59 compute-0 python3.9[209211]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760157658.5443707-775-21378443715162/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:40:59 compute-0 sudo[209209]: pam_unix(sudo:session): session closed for user root
Oct 11 04:40:59 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:41:00 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v557: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:41:00 compute-0 sudo[209361]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srkzucfhoippfbxfwkfbzjmuhrurdish ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157660.0288963-775-96778735195794/AnsiballZ_stat.py'
Oct 11 04:41:00 compute-0 sudo[209361]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:41:00 compute-0 python3.9[209363]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:41:00 compute-0 sudo[209361]: pam_unix(sudo:session): session closed for user root
Oct 11 04:41:00 compute-0 sshd-session[208383]: Failed password for root from 221.159.21.170 port 54504 ssh2
Oct 11 04:41:01 compute-0 sudo[209484]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbhfdeakbinqyvztjktnzjrynnvfpwjs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157660.0288963-775-96778735195794/AnsiballZ_copy.py'
Oct 11 04:41:01 compute-0 sudo[209484]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:41:01 compute-0 python3.9[209486]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760157660.0288963-775-96778735195794/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:41:01 compute-0 sudo[209484]: pam_unix(sudo:session): session closed for user root
Oct 11 04:41:01 compute-0 ceph-mon[74243]: pgmap v557: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:41:01 compute-0 sshd-session[208383]: Connection closed by authenticating user root 221.159.21.170 port 54504 [preauth]
Oct 11 04:41:01 compute-0 sudo[209637]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xsubbcydyqmxriacfuliijjqmqsfthlo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157661.4939888-775-18665938715135/AnsiballZ_stat.py'
Oct 11 04:41:01 compute-0 sudo[209637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:41:02 compute-0 python3.9[209639]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:41:02 compute-0 sudo[209637]: pam_unix(sudo:session): session closed for user root
Oct 11 04:41:02 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v558: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:41:02 compute-0 sudo[209760]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxmqrfmxvljastknwubxvnxggstcqllt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157661.4939888-775-18665938715135/AnsiballZ_copy.py'
Oct 11 04:41:02 compute-0 sudo[209760]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:41:02 compute-0 python3.9[209762]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760157661.4939888-775-18665938715135/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:41:02 compute-0 sudo[209760]: pam_unix(sudo:session): session closed for user root
Oct 11 04:41:03 compute-0 sudo[209912]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ueavtptgcvjeswkmmeiqlyjkwejvvxyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157662.7935605-775-135745886148940/AnsiballZ_stat.py'
Oct 11 04:41:03 compute-0 sudo[209912]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:41:03 compute-0 python3.9[209914]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:41:03 compute-0 sudo[209912]: pam_unix(sudo:session): session closed for user root
Oct 11 04:41:03 compute-0 ceph-mon[74243]: pgmap v558: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:41:03 compute-0 sudo[210036]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndvhresrsbfrrnzfytwbftecmoabuebz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157662.7935605-775-135745886148940/AnsiballZ_copy.py'
Oct 11 04:41:03 compute-0 sudo[210036]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:41:03 compute-0 python3.9[210038]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760157662.7935605-775-135745886148940/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:41:03 compute-0 sudo[210036]: pam_unix(sudo:session): session closed for user root
Oct 11 04:41:03 compute-0 sshd-session[209609]: Invalid user es from 221.159.21.170 port 55610
Oct 11 04:41:04 compute-0 sshd-session[209609]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 04:41:04 compute-0 sshd-session[209609]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=221.159.21.170
Oct 11 04:41:04 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v559: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:41:04 compute-0 sudo[210188]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svrgpfrimityjogqyqcvcatxqsgwcsxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157664.0758138-775-199699768346304/AnsiballZ_stat.py'
Oct 11 04:41:04 compute-0 sudo[210188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:41:04 compute-0 python3.9[210190]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:41:04 compute-0 sudo[210188]: pam_unix(sudo:session): session closed for user root
Oct 11 04:41:04 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:41:05 compute-0 sudo[210311]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrkdjeszmsvjufudblsjiyzidydfpgna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157664.0758138-775-199699768346304/AnsiballZ_copy.py'
Oct 11 04:41:05 compute-0 sudo[210311]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:41:05 compute-0 python3.9[210313]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760157664.0758138-775-199699768346304/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:41:05 compute-0 sudo[210311]: pam_unix(sudo:session): session closed for user root
Oct 11 04:41:05 compute-0 ceph-mon[74243]: pgmap v559: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:41:05 compute-0 sudo[210463]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iztjkujhnklvutzxhbnmdzcxpmihzuni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157665.4048905-775-150924698186569/AnsiballZ_stat.py'
Oct 11 04:41:05 compute-0 sudo[210463]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:41:05 compute-0 python3.9[210465]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:41:05 compute-0 sudo[210463]: pam_unix(sudo:session): session closed for user root
Oct 11 04:41:06 compute-0 sshd-session[209609]: Failed password for invalid user es from 221.159.21.170 port 55610 ssh2
Oct 11 04:41:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:41:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:41:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:41:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:41:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:41:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:41:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:41:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:41:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:41:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:41:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:41:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:41:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 11 04:41:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:41:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:41:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:41:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:41:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:41:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:41:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:41:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:41:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:41:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
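The pg_autoscaler lines above report, per pool, the fraction of raw capacity used, the pool's bias, and the resulting "pg target". The logged numbers are consistent with pg target = capacity ratio × bias × 300; a factor of 300 would match the default mon_target_pg_per_osd=100 with 3 OSDs, but neither value appears in these lines, so that reading is an assumption. A quick arithmetic check against three of the logged pools:
    # Reproduce the logged pg targets from capacity ratio x bias x 300.
    awk 'BEGIN {
        printf ".mgr               %.16f\n", 7.185749983720779e-06 * 1.0 * 300;  # log: 0.0021557249951162337
        printf "cephfs.cephfs.meta %.16f\n", 5.087256625643029e-07 * 4.0 * 300;  # log: 0.0006104707950771635
        printf ".rgw.root          %.16f\n", 2.5436283128215145e-07 * 1.0 * 300; # log: 7.630884938464544e-05
    }'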
Oct 11 04:41:06 compute-0 sudo[210586]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zazueeouxzzimfkjkqjlezbvfidsfgan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157665.4048905-775-150924698186569/AnsiballZ_copy.py'
Oct 11 04:41:06 compute-0 sudo[210586]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:41:06 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v560: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:41:06 compute-0 python3.9[210588]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760157665.4048905-775-150924698186569/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:41:06 compute-0 sudo[210586]: pam_unix(sudo:session): session closed for user root
Oct 11 04:41:07 compute-0 sudo[210738]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxcfvszyziklexvaowfwzaefryzfavev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157666.6950114-775-195374167991848/AnsiballZ_stat.py'
Oct 11 04:41:07 compute-0 sudo[210738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:41:07 compute-0 python3.9[210740]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:41:07 compute-0 sudo[210738]: pam_unix(sudo:session): session closed for user root
Oct 11 04:41:07 compute-0 ceph-mon[74243]: pgmap v560: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:41:07 compute-0 podman[210764]: 2025-10-11 04:41:07.546422087 +0000 UTC m=+0.190418246 container health_status 086abfbe8dbe9e4742c5d0e8540a7653c1c0a5c00ecda3b6e948040a718938cb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_controller)
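The podman line above is a periodic health_status=healthy event for the ovn_controller container, whose healthcheck is the /openstack/healthcheck script mounted from /var/lib/openstack/healthchecks/ovn_controller. A sketch of checking the same thing by hand, with the container name taken from the log line:
    # Run the container's configured healthcheck once; exit status 0 means healthy.
    podman healthcheck run ovn_controller
    # Show the container status line, which includes "(healthy)" when checks pass.
    podman ps --filter name=ovn_controller --format '{{.Names}} {{.Status}}'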
Oct 11 04:41:07 compute-0 sshd-session[209609]: Connection closed by invalid user es 221.159.21.170 port 55610 [preauth]
Oct 11 04:41:07 compute-0 sudo[210887]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwljedvkyygagwqrgwqzhlixpgebkmik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157666.6950114-775-195374167991848/AnsiballZ_copy.py'
Oct 11 04:41:07 compute-0 sudo[210887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:41:07 compute-0 python3.9[210889]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760157666.6950114-775-195374167991848/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:41:07 compute-0 sudo[210887]: pam_unix(sudo:session): session closed for user root
Oct 11 04:41:08 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v561: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:41:08 compute-0 sudo[211040]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olaohgedxjznwujwgthyziajlynkpwku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157668.0868173-775-225930643156151/AnsiballZ_stat.py'
Oct 11 04:41:08 compute-0 sudo[211040]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:41:08 compute-0 python3.9[211042]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:41:08 compute-0 sudo[211040]: pam_unix(sudo:session): session closed for user root
Oct 11 04:41:09 compute-0 sudo[211163]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ciuezeuxhgcymqmgvvdhhrgfnajcqkak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157668.0868173-775-225930643156151/AnsiballZ_copy.py'
Oct 11 04:41:09 compute-0 sudo[211163]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:41:09 compute-0 python3.9[211165]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760157668.0868173-775-225930643156151/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:41:09 compute-0 sudo[211163]: pam_unix(sudo:session): session closed for user root
Oct 11 04:41:09 compute-0 ceph-mon[74243]: pgmap v561: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:41:09 compute-0 sudo[211316]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbdrptoktyzrgsxijebvsfyqaiebreur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157669.4449728-775-179748738942796/AnsiballZ_stat.py'
Oct 11 04:41:09 compute-0 sudo[211316]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:41:09 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:41:09 compute-0 python3.9[211318]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:41:09 compute-0 sudo[211316]: pam_unix(sudo:session): session closed for user root
Oct 11 04:41:10 compute-0 sudo[211439]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agnkeaiwntnholhgelcovgcprcqjxfqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157669.4449728-775-179748738942796/AnsiballZ_copy.py'
Oct 11 04:41:10 compute-0 sudo[211439]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:41:10 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v562: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:41:10 compute-0 python3.9[211441]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760157669.4449728-775-179748738942796/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:41:10 compute-0 sudo[211439]: pam_unix(sudo:session): session closed for user root
Oct 11 04:41:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:41:11.002 161813 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:41:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:41:11.002 161813 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:41:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:41:11.002 161813 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:41:11 compute-0 sudo[211591]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdyptkoufvqrvyrfyadukzlvippmeums ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157670.743134-775-127448883360730/AnsiballZ_stat.py'
Oct 11 04:41:11 compute-0 sudo[211591]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:41:11 compute-0 python3.9[211593]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:41:11 compute-0 sudo[211591]: pam_unix(sudo:session): session closed for user root
Oct 11 04:41:11 compute-0 ceph-mon[74243]: pgmap v562: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:41:11 compute-0 sudo[211714]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mnbpxhtrtyxdvhutxmgjutbposddpudq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157670.743134-775-127448883360730/AnsiballZ_copy.py'
Oct 11 04:41:11 compute-0 sudo[211714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:41:11 compute-0 unix_chkpwd[211717]: password check failed for user (root)
Oct 11 04:41:11 compute-0 sshd-session[210890]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=221.159.21.170  user=root
Oct 11 04:41:11 compute-0 python3.9[211716]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760157670.743134-775-127448883360730/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:41:11 compute-0 sudo[211714]: pam_unix(sudo:session): session closed for user root
Oct 11 04:41:12 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v563: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:41:12 compute-0 python3.9[211867]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
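The command task above lists everything under /run/libvirt with its SELinux context and keeps only entries labelled with a container_*_t type. A sketch of re-running that check interactively, using only the path from the log, plus an optional dry-run relabel:
    # grep exits 0 if any container-labelled entry remains under /run/libvirt, 1 otherwise.
    ls -lRZ /run/libvirt | grep -E ':container_\S+_t' || echo 'no container-labelled entries'
    # Dry run: report what restorecon would relabel, without changing anything (-n).
    restorecon -Rnv /run/libvirt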
Oct 11 04:41:13 compute-0 sshd-session[210890]: Failed password for root from 221.159.21.170 port 56684 ssh2
Oct 11 04:41:13 compute-0 podman[211971]: 2025-10-11 04:41:13.413729971 +0000 UTC m=+0.074863825 container health_status 7d738271425699243ade5f883e5c9759d51b96fd0ce562173990a66d2fbaffdf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true)
Oct 11 04:41:13 compute-0 sudo[212040]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lincnghymlunornangcscpywctzozcqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157672.8816664-981-193356496985772/AnsiballZ_seboolean.py'
Oct 11 04:41:13 compute-0 sudo[212040]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:41:13 compute-0 ceph-mon[74243]: pgmap v563: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:41:13 compute-0 python3.9[212042]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
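The ansible.posix.seboolean task above persistently enables the os_enable_vtpm SELinux boolean (persistent=True, state=True). A rough shell equivalent of the same toggle and a way to read it back:
    # Persistently enable the boolean, then confirm the new value.
    setsebool -P os_enable_vtpm on
    getsebool os_enable_vtpm    # expected: os_enable_vtpm --> on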
Oct 11 04:41:14 compute-0 sshd-session[210890]: Connection closed by authenticating user root 221.159.21.170 port 56684 [preauth]
Oct 11 04:41:14 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v564: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:41:14 compute-0 sudo[212040]: pam_unix(sudo:session): session closed for user root
Oct 11 04:41:14 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:41:15 compute-0 sudo[212199]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwqthshunqabfbensnhmwjjeyawwultv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157675.1719642-989-180413301863606/AnsiballZ_copy.py'
Oct 11 04:41:15 compute-0 dbus-broker-launch[785]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Oct 11 04:41:15 compute-0 sudo[212199]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:41:15 compute-0 ceph-mon[74243]: pgmap v564: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:41:15 compute-0 python3.9[212201]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:41:15 compute-0 sudo[212199]: pam_unix(sudo:session): session closed for user root
Oct 11 04:41:15 compute-0 sshd-session[212044]: Invalid user git from 221.159.21.170 port 57768
Oct 11 04:41:16 compute-0 sudo[212351]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvonycehesbdpttdhenbftkfiuzsjmfd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157675.9019156-989-118433078899134/AnsiballZ_copy.py'
Oct 11 04:41:16 compute-0 sudo[212351]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:41:16 compute-0 sshd-session[212044]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 04:41:16 compute-0 sshd-session[212044]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=221.159.21.170
Oct 11 04:41:16 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v565: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:41:16 compute-0 python3.9[212353]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:41:16 compute-0 sudo[212351]: pam_unix(sudo:session): session closed for user root
Oct 11 04:41:16 compute-0 ceph-mon[74243]: pgmap v565: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:41:16 compute-0 sudo[212503]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcjsjelvlbjejguatakkjyxrhepqjmkf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157676.6340108-989-48097576299814/AnsiballZ_copy.py'
Oct 11 04:41:16 compute-0 sudo[212503]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:41:17 compute-0 python3.9[212505]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:41:17 compute-0 sudo[212503]: pam_unix(sudo:session): session closed for user root
Oct 11 04:41:17 compute-0 sudo[212655]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xipgylufztbvhazgrtbiercacpkucexy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157677.4029298-989-147572536810393/AnsiballZ_copy.py'
Oct 11 04:41:17 compute-0 sudo[212655]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:41:17 compute-0 python3.9[212657]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:41:17 compute-0 sudo[212655]: pam_unix(sudo:session): session closed for user root
Oct 11 04:41:18 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v566: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:41:18 compute-0 sshd-session[212044]: Failed password for invalid user git from 221.159.21.170 port 57768 ssh2
Oct 11 04:41:18 compute-0 sudo[212807]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjyososcatlclzodghuppaiuufpkjgbh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157678.1491718-989-46220767701333/AnsiballZ_copy.py'
Oct 11 04:41:18 compute-0 sudo[212807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:41:18 compute-0 python3.9[212809]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:41:18 compute-0 sudo[212807]: pam_unix(sudo:session): session closed for user root
Oct 11 04:41:19 compute-0 sudo[212959]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwwhxegsltyxtotiqpubbyzxsrlpycco ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157679.0111985-1025-153538624405264/AnsiballZ_copy.py'
Oct 11 04:41:19 compute-0 sudo[212959]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:41:19 compute-0 ceph-mon[74243]: pgmap v566: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:41:19 compute-0 python3.9[212961]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:41:19 compute-0 sudo[212959]: pam_unix(sudo:session): session closed for user root
Oct 11 04:41:19 compute-0 sshd-session[212044]: Connection closed by invalid user git 221.159.21.170 port 57768 [preauth]
Oct 11 04:41:19 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:41:20 compute-0 sudo[213111]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajgpybxjosngxfrndejunjiiynlwvmgz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157679.7439156-1025-72554892483743/AnsiballZ_copy.py'
Oct 11 04:41:20 compute-0 sudo[213111]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:41:20 compute-0 python3.9[213113]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:41:20 compute-0 sudo[213111]: pam_unix(sudo:session): session closed for user root
Oct 11 04:41:20 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v567: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:41:20 compute-0 sudo[213263]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zauatswipogtwpcbbznwmirjqjydueko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157680.481225-1025-113401779408476/AnsiballZ_copy.py'
Oct 11 04:41:20 compute-0 sudo[213263]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:41:21 compute-0 python3.9[213265]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:41:21 compute-0 sudo[213263]: pam_unix(sudo:session): session closed for user root
Oct 11 04:41:21 compute-0 ceph-mon[74243]: pgmap v567: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:41:21 compute-0 sudo[213417]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghxpohpbbqcnddiudjclwcqvaghuzxnr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157681.3266244-1025-277512592688766/AnsiballZ_copy.py'
Oct 11 04:41:21 compute-0 sudo[213417]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:41:21 compute-0 python3.9[213419]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:41:21 compute-0 sudo[213417]: pam_unix(sudo:session): session closed for user root
Oct 11 04:41:22 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v568: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:41:22 compute-0 sudo[213569]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iyjvekdfgclzwgcnsdfgkabykrjjxmtp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157682.1366067-1025-188909357340588/AnsiballZ_copy.py'
Oct 11 04:41:22 compute-0 sudo[213569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:41:22 compute-0 python3.9[213571]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
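The copy tasks above fan the same tls.crt, tls.key and ca.crt from /var/lib/openstack/certs/libvirt/default/ out to the libvirt (/etc/pki/libvirt, /etc/pki/CA) and QEMU (/etc/pki/qemu) locations with the logged owners and modes. A small post-copy sanity check one could run; the openssl invocations are standard and the paths are the logged destinations:
    # The server certificate should chain to the CA installed alongside it.
    openssl verify -CAfile /etc/pki/CA/cacert.pem /etc/pki/libvirt/servercert.pem
    # Quick look at subject and expiry of the installed certificate.
    openssl x509 -noout -subject -enddate -in /etc/pki/libvirt/servercert.pem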
Oct 11 04:41:22 compute-0 sudo[213569]: pam_unix(sudo:session): session closed for user root
Oct 11 04:41:22 compute-0 sshd-session[213266]: Invalid user hadoop from 221.159.21.170 port 58726
Oct 11 04:41:23 compute-0 sshd-session[213266]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 04:41:23 compute-0 sshd-session[213266]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=221.159.21.170
Oct 11 04:41:23 compute-0 sudo[213721]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvzzuwzcluceagwpffxczqbaxklyeapk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157682.9948392-1061-56127998782882/AnsiballZ_systemd.py'
Oct 11 04:41:23 compute-0 sudo[213721]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:41:23 compute-0 ceph-mon[74243]: pgmap v568: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:41:23 compute-0 python3.9[213723]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 11 04:41:23 compute-0 systemd[1]: Reloading.
Oct 11 04:41:23 compute-0 systemd-rc-local-generator[213746]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:41:23 compute-0 systemd-sysv-generator[213751]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:41:24 compute-0 systemd[1]: Starting libvirt logging daemon socket...
Oct 11 04:41:24 compute-0 systemd[1]: Listening on libvirt logging daemon socket.
Oct 11 04:41:24 compute-0 systemd[1]: Starting libvirt logging daemon admin socket...
Oct 11 04:41:24 compute-0 systemd[1]: Listening on libvirt logging daemon admin socket.
Oct 11 04:41:24 compute-0 systemd[1]: Starting libvirt logging daemon...
Oct 11 04:41:24 compute-0 systemd[1]: Started libvirt logging daemon.
Oct 11 04:41:24 compute-0 sudo[213721]: pam_unix(sudo:session): session closed for user root
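The ansible.builtin.systemd task above (daemon_reload=True, state=restarted) is what produces the Reloading / Starting / Started lines: systemd re-reads unit files, then brings up the virtlogd sockets and the service itself. The same pattern repeats below for virtnodedevd, virtproxyd, virtqemud and virtsecretd. A rough shell equivalent for one daemon:
    # Roughly what the systemd module does for virtlogd.
    systemctl daemon-reload
    systemctl restart virtlogd.service
    # The socket-activated units that should now be listening.
    systemctl list-sockets | grep virtlogd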
Oct 11 04:41:24 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v569: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:41:24 compute-0 sshd-session[213266]: Failed password for invalid user hadoop from 221.159.21.170 port 58726 ssh2
Oct 11 04:41:24 compute-0 sudo[213914]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzqylokcaikffspedecmllhjfotpubrz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157684.352081-1061-141023744313562/AnsiballZ_systemd.py'
Oct 11 04:41:24 compute-0 sudo[213914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:41:24 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:41:25 compute-0 python3.9[213916]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 11 04:41:25 compute-0 systemd[1]: Reloading.
Oct 11 04:41:25 compute-0 systemd-rc-local-generator[213945]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:41:25 compute-0 systemd-sysv-generator[213951]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:41:25 compute-0 systemd[1]: Starting libvirt nodedev daemon socket...
Oct 11 04:41:25 compute-0 systemd[1]: Listening on libvirt nodedev daemon socket.
Oct 11 04:41:25 compute-0 systemd[1]: Starting libvirt nodedev daemon admin socket...
Oct 11 04:41:25 compute-0 ceph-mon[74243]: pgmap v569: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:41:25 compute-0 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Oct 11 04:41:25 compute-0 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Oct 11 04:41:25 compute-0 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Oct 11 04:41:25 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Oct 11 04:41:25 compute-0 systemd[1]: Started libvirt nodedev daemon.
Oct 11 04:41:25 compute-0 sudo[213914]: pam_unix(sudo:session): session closed for user root
Oct 11 04:41:26 compute-0 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Oct 11 04:41:26 compute-0 sudo[214131]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwvltukkgjfvwoeprbbgsfshwrvkzzrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157685.741685-1061-173454028721521/AnsiballZ_systemd.py'
Oct 11 04:41:26 compute-0 sudo[214131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:41:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:41:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:41:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:41:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:41:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:41:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:41:26 compute-0 sshd-session[213266]: Connection closed by invalid user hadoop 221.159.21.170 port 58726 [preauth]
Oct 11 04:41:26 compute-0 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Oct 11 04:41:26 compute-0 python3.9[214133]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 11 04:41:26 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v570: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:41:26 compute-0 systemd[1]: Reloading.
Oct 11 04:41:26 compute-0 systemd-sysv-generator[214164]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:41:26 compute-0 systemd-rc-local-generator[214161]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:41:26 compute-0 systemd[1]: Starting libvirt proxy daemon admin socket...
Oct 11 04:41:26 compute-0 systemd[1]: Starting libvirt proxy daemon read-only socket...
Oct 11 04:41:26 compute-0 systemd[1]: Listening on libvirt proxy daemon admin socket.
Oct 11 04:41:26 compute-0 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Oct 11 04:41:26 compute-0 systemd[1]: Starting libvirt proxy daemon...
Oct 11 04:41:26 compute-0 systemd[1]: Started libvirt proxy daemon.
Oct 11 04:41:26 compute-0 sudo[214131]: pam_unix(sudo:session): session closed for user root
Oct 11 04:41:26 compute-0 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Oct 11 04:41:26 compute-0 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Oct 11 04:41:27 compute-0 sudo[214350]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wsoejneoapdwsjnkvmgylnyscdomcrue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157686.9790807-1061-40799275868268/AnsiballZ_systemd.py'
Oct 11 04:41:27 compute-0 sudo[214350]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:41:27 compute-0 ceph-mon[74243]: pgmap v570: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:41:27 compute-0 python3.9[214352]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 11 04:41:27 compute-0 systemd[1]: Reloading.
Oct 11 04:41:27 compute-0 systemd-rc-local-generator[214378]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:41:27 compute-0 systemd-sysv-generator[214383]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:41:27 compute-0 setroubleshoot[214104]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l d8c8746b-f680-4097-bbf7-ede79ac5200e
Oct 11 04:41:27 compute-0 setroubleshoot[214104]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
                                                  
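The setroubleshoot advice above is the tool's generic boilerplate, but its catchall suggestion is the standard way to capture the denied operation and build a local policy module for it. A sketch of that workflow, using the module name the advice itself proposes (my-virtlogd is just that example name; review the generated .te file before loading anything):
    # Collect recent AVC denials for virtlogd and turn them into a local module.
    ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
    # Inspect my-virtlogd.te, then install the module at priority 300 as suggested.
    semodule -X 300 -i my-virtlogd.pp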
Oct 11 04:41:28 compute-0 systemd[1]: Listening on libvirt locking daemon socket.
Oct 11 04:41:28 compute-0 systemd[1]: Starting libvirt QEMU daemon socket...
Oct 11 04:41:28 compute-0 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 11 04:41:28 compute-0 systemd[1]: Starting Virtual Machine and Container Registration Service...
Oct 11 04:41:28 compute-0 systemd[1]: Listening on libvirt QEMU daemon socket.
Oct 11 04:41:28 compute-0 systemd[1]: Starting libvirt QEMU daemon admin socket...
Oct 11 04:41:28 compute-0 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Oct 11 04:41:28 compute-0 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Oct 11 04:41:28 compute-0 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Oct 11 04:41:28 compute-0 systemd[1]: Started Virtual Machine and Container Registration Service.
Oct 11 04:41:28 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Oct 11 04:41:28 compute-0 systemd[1]: Started libvirt QEMU daemon.
Oct 11 04:41:28 compute-0 sudo[214350]: pam_unix(sudo:session): session closed for user root
Oct 11 04:41:28 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v571: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:41:28 compute-0 sudo[214565]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzcyhfjojhbmawzgkkvkwgayryuezxwv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157688.3469033-1061-67379952842828/AnsiballZ_systemd.py'
Oct 11 04:41:28 compute-0 sudo[214565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:41:29 compute-0 python3.9[214567]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 11 04:41:29 compute-0 systemd[1]: Reloading.
Oct 11 04:41:29 compute-0 systemd-rc-local-generator[214591]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:41:29 compute-0 systemd-sysv-generator[214598]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:41:29 compute-0 systemd[1]: Starting libvirt secret daemon socket...
Oct 11 04:41:29 compute-0 systemd[1]: Listening on libvirt secret daemon socket.
Oct 11 04:41:29 compute-0 systemd[1]: Starting libvirt secret daemon admin socket...
Oct 11 04:41:29 compute-0 systemd[1]: Starting libvirt secret daemon read-only socket...
Oct 11 04:41:29 compute-0 systemd[1]: Listening on libvirt secret daemon read-only socket.
Oct 11 04:41:29 compute-0 systemd[1]: Listening on libvirt secret daemon admin socket.
Oct 11 04:41:29 compute-0 systemd[1]: Starting libvirt secret daemon...
Oct 11 04:41:29 compute-0 systemd[1]: Started libvirt secret daemon.
Oct 11 04:41:29 compute-0 ceph-mon[74243]: pgmap v571: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:41:29 compute-0 sudo[214565]: pam_unix(sudo:session): session closed for user root
Oct 11 04:41:29 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:41:30 compute-0 sudo[214774]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfqgtbnlxanhqorbihkvhwdvrcexyodb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157689.8488307-1098-11180493419915/AnsiballZ_file.py'
Oct 11 04:41:30 compute-0 sudo[214774]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:41:30 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v572: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:41:30 compute-0 python3.9[214776]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:41:30 compute-0 sudo[214774]: pam_unix(sudo:session): session closed for user root
Oct 11 04:41:31 compute-0 sudo[214926]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odgqyxawojsyngwqdfhbqgrxklzzhjzi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157690.6798143-1106-226283795609146/AnsiballZ_find.py'
Oct 11 04:41:31 compute-0 sudo[214926]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:41:31 compute-0 python3.9[214928]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 11 04:41:31 compute-0 sudo[214926]: pam_unix(sudo:session): session closed for user root
Oct 11 04:41:31 compute-0 ceph-mon[74243]: pgmap v572: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:41:31 compute-0 unix_chkpwd[215028]: password check failed for user (root)
Oct 11 04:41:31 compute-0 sshd-session[214135]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=221.159.21.170  user=root
Oct 11 04:41:31 compute-0 sudo[215079]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzpuwdbbbfomdojeoamqngzjvaxxoobw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157691.506572-1114-276489113726240/AnsiballZ_command.py'
Oct 11 04:41:31 compute-0 sudo[215079]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:41:32 compute-0 python3.9[215081]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;
                                             echo ceph
                                             awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
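The command task above echoes the literal string "ceph" and then extracts the fsid from the deployed ceph.conf. A sketch of what that extraction does, against a hypothetical ceph.conf fragment (the fsid below is made up; the real file is not shown in this log):
    # Hypothetical /var/lib/openstack/config/ceph/ceph.conf fragment:
    #   [global]
    #   fsid = 00000000-1111-2222-3333-444444444444
    awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs
    # awk prints the value with a leading space; xargs trims it, leaving just the UUID.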
Oct 11 04:41:32 compute-0 sudo[215079]: pam_unix(sudo:session): session closed for user root
Oct 11 04:41:32 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v573: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:41:32 compute-0 ceph-mon[74243]: pgmap v573: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:41:32 compute-0 python3.9[215235]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 11 04:41:33 compute-0 python3.9[215385]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:41:33 compute-0 sshd-session[214135]: Failed password for root from 221.159.21.170 port 59942 ssh2
Oct 11 04:41:34 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v574: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:41:34 compute-0 python3.9[215506]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760157693.2708404-1133-276702627762854/.source.xml follow=False _original_basename=secret.xml.j2 checksum=9f7425df09ff522648d4862714474484442b8dbe backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:41:34 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:41:35 compute-0 sudo[215656]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eltjadlyfgobparigiiylekyuisrbmbl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157694.661525-1148-204274026280256/AnsiballZ_command.py'
Oct 11 04:41:35 compute-0 sudo[215656]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:41:35 compute-0 python3.9[215658]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 166d0489-2ae7-59eb-961c-c1b5cda4b45a
                                             virsh secret-define --file /tmp/secret.xml
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
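The task above recreates the libvirt secret for the Ceph cluster: it undefines any stale secret keyed on the cluster fsid, then defines a fresh one from the templated /tmp/secret.xml. The XML rendered by the job is not shown in this log, so the sketch below uses an assumed minimal libvirt ceph-usage secret; the usage name and layout are placeholders:

    #!/usr/bin/env python3
    """Illustrative only: redefine a libvirt secret for the Ceph fsid.

    The real job renders secret.xml from secret.xml.j2; the XML below is an
    assumed minimal shape, not the template's actual content.
    """
    import subprocess
    import tempfile

    FSID = "166d0489-2ae7-59eb-961c-c1b5cda4b45a"  # cluster fsid from the log

    SECRET_XML = f"""<secret ephemeral='no' private='no'>
      <uuid>{FSID}</uuid>
      <usage type='ceph'>
        <name>client.openstack secret</name>
      </usage>
    </secret>
    """

    def redefine_secret():
        # Drop any stale definition first; ignore failure if none exists.
        subprocess.run(["virsh", "secret-undefine", FSID], check=False)
        with tempfile.NamedTemporaryFile("w", suffix=".xml") as xml:
            xml.write(SECRET_XML)
            xml.flush()
            subprocess.run(["virsh", "secret-define", "--file", xml.name],
                           check=True)

    if __name__ == "__main__":
        redefine_secret()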
Oct 11 04:41:35 compute-0 polkitd[6176]: Registered Authentication Agent for unix-process:215660:387551 (system bus name :1.2985 [/usr/bin/pkttyagent --process 215660 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Oct 11 04:41:35 compute-0 polkitd[6176]: Unregistered Authentication Agent for unix-process:215660:387551 (system bus name :1.2985, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Oct 11 04:41:35 compute-0 polkitd[6176]: Registered Authentication Agent for unix-process:215659:387550 (system bus name :1.2986 [/usr/bin/pkttyagent --process 215659 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Oct 11 04:41:35 compute-0 polkitd[6176]: Unregistered Authentication Agent for unix-process:215659:387550 (system bus name :1.2986, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Oct 11 04:41:35 compute-0 sudo[215656]: pam_unix(sudo:session): session closed for user root
Oct 11 04:41:35 compute-0 ceph-mon[74243]: pgmap v574: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:41:36 compute-0 sshd-session[214135]: Connection closed by authenticating user root 221.159.21.170 port 59942 [preauth]
Oct 11 04:41:36 compute-0 python3.9[215820]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:41:36 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v575: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:41:36 compute-0 sudo[215972]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmrcdpljnxnmnicfxahuuxekujnfsshd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157696.425777-1164-97218881427505/AnsiballZ_command.py'
Oct 11 04:41:36 compute-0 sudo[215972]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:41:36 compute-0 sudo[215972]: pam_unix(sudo:session): session closed for user root
Oct 11 04:41:37 compute-0 ceph-mon[74243]: pgmap v575: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:41:37 compute-0 sudo[216125]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztypsxwgtvhuyymhaujddakcgdiybjpt ; FSID=166d0489-2ae7-59eb-961c-c1b5cda4b45a KEY=AQAF3OloAAAAABAA2VyWzcR4rbz4VVd/gkSHkQ== /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157697.2319145-1172-97114277705282/AnsiballZ_command.py'
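This task receives the cluster FSID and a cephx KEY through the environment; the wrapped command itself is not logged. A plausible reconstruction, assuming the usual virsh secret-set-value step that attaches the key to the secret defined above (an assumption, not confirmed by the log):

    #!/usr/bin/env python3
    """Illustrative only: set the libvirt secret value from FSID/KEY env vars.

    The log does not show the wrapped command; virsh secret-set-value is an
    assumed reconstruction of what a task given these variables would run.
    """
    import os
    import subprocess

    def set_secret_value():
        fsid = os.environ["FSID"]   # e.g. 166d0489-2ae7-59eb-961c-c1b5cda4b45a
        key = os.environ["KEY"]     # base64 cephx key (avoid logging this value)
        subprocess.run(
            ["virsh", "secret-set-value", "--secret", fsid, "--base64", key],
            check=True,
        )

    if __name__ == "__main__":
        set_secret_value()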
Oct 11 04:41:37 compute-0 sudo[216125]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:41:37 compute-0 podman[216127]: 2025-10-11 04:41:37.712540364 +0000 UTC m=+0.120081300 container health_status 086abfbe8dbe9e4742c5d0e8540a7653c1c0a5c00ecda3b6e948040a718938cb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 11 04:41:37 compute-0 polkitd[6176]: Registered Authentication Agent for unix-process:216154:387798 (system bus name :1.2989 [/usr/bin/pkttyagent --process 216154 --notify-fd 5 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Oct 11 04:41:37 compute-0 polkitd[6176]: Unregistered Authentication Agent for unix-process:216154:387798 (system bus name :1.2989, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Oct 11 04:41:37 compute-0 sudo[216125]: pam_unix(sudo:session): session closed for user root
Oct 11 04:41:37 compute-0 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Oct 11 04:41:37 compute-0 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Consumed 1.004s CPU time.
Oct 11 04:41:37 compute-0 systemd[1]: setroubleshootd.service: Deactivated successfully.
Oct 11 04:41:38 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v576: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:41:38 compute-0 sudo[216309]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzilybjlihdpcxzttyjlqutfmxhljkqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157698.1311557-1180-109730027903968/AnsiballZ_copy.py'
Oct 11 04:41:38 compute-0 sudo[216309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:41:38 compute-0 python3.9[216311]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:41:38 compute-0 sudo[216309]: pam_unix(sudo:session): session closed for user root
Oct 11 04:41:39 compute-0 sudo[216461]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxbklvmgbsuounxjysisbdcydgwcidiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157698.881168-1188-79214422627971/AnsiballZ_stat.py'
Oct 11 04:41:39 compute-0 sudo[216461]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:41:39 compute-0 ceph-mon[74243]: pgmap v576: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:41:39 compute-0 python3.9[216463]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:41:39 compute-0 sudo[216461]: pam_unix(sudo:session): session closed for user root
Oct 11 04:41:39 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:41:39 compute-0 sudo[216584]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxqfosytzirykjqrchzcmywwhqsfhfbd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157698.881168-1188-79214422627971/AnsiballZ_copy.py'
Oct 11 04:41:39 compute-0 sudo[216584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:41:40 compute-0 python3.9[216586]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1760157698.881168-1188-79214422627971/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:41:40 compute-0 sudo[216584]: pam_unix(sudo:session): session closed for user root
Oct 11 04:41:40 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v577: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:41:40 compute-0 sudo[216736]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwteehfdesuygqyixrhnkmfgqpaizyjg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157700.5119696-1204-160309539210466/AnsiballZ_file.py'
Oct 11 04:41:40 compute-0 sudo[216736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:41:41 compute-0 sudo[216739]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:41:41 compute-0 sudo[216739]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:41:41 compute-0 sudo[216739]: pam_unix(sudo:session): session closed for user root
Oct 11 04:41:41 compute-0 python3.9[216738]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:41:41 compute-0 sudo[216764]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:41:41 compute-0 sudo[216764]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:41:41 compute-0 sudo[216764]: pam_unix(sudo:session): session closed for user root
Oct 11 04:41:41 compute-0 sudo[216736]: pam_unix(sudo:session): session closed for user root
Oct 11 04:41:41 compute-0 sudo[216789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:41:41 compute-0 sudo[216789]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:41:41 compute-0 sudo[216789]: pam_unix(sudo:session): session closed for user root
Oct 11 04:41:41 compute-0 sudo[216836]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 04:41:41 compute-0 sudo[216836]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:41:41 compute-0 ceph-mon[74243]: pgmap v577: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:41:41 compute-0 sudo[217008]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tagvmhedysgrtkpufeqhyrxqypgmzaai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157701.3565314-1212-245341103180821/AnsiballZ_stat.py'
Oct 11 04:41:41 compute-0 sudo[217008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:41:41 compute-0 sudo[216836]: pam_unix(sudo:session): session closed for user root
Oct 11 04:41:41 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:41:41 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:41:41 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:41:41 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:41:41 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:41:41 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:41:41 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev 0f544f37-b8c8-4252-983b-6d90d9fd63ae does not exist
Oct 11 04:41:41 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev d152886d-b746-4480-92e7-93e91866f53b does not exist
Oct 11 04:41:41 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev 99c781bc-e035-49b7-991f-2f5bb346cb1f does not exist
Oct 11 04:41:41 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:41:41 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:41:41 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:41:41 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:41:41 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:41:41 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:41:41 compute-0 sudo[217023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:41:41 compute-0 sudo[217023]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:41:41 compute-0 sudo[217023]: pam_unix(sudo:session): session closed for user root
Oct 11 04:41:41 compute-0 sudo[217048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:41:41 compute-0 sudo[217048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:41:41 compute-0 sudo[217048]: pam_unix(sudo:session): session closed for user root
Oct 11 04:41:41 compute-0 python3.9[217012]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:41:41 compute-0 sudo[217008]: pam_unix(sudo:session): session closed for user root
Oct 11 04:41:41 compute-0 sudo[217073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:41:41 compute-0 sudo[217073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:41:41 compute-0 sudo[217073]: pam_unix(sudo:session): session closed for user root
Oct 11 04:41:42 compute-0 sudo[217100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
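Here cephadm drives ceph-volume inside the Ceph container to prepare OSDs on the three pre-created logical volumes. A sketch that replays the invocation from this entry verbatim; the config JSON piped to cephadm's stdin is not captured in the log, so the caller must supply it:

    #!/usr/bin/env python3
    """Illustrative only: replay the cephadm ceph-volume call from the log.

    config_json must be supplied by the caller (its content is not logged);
    the paths, image digest, and fsid below are taken from the entry above.
    """
    import subprocess

    FSID = "166d0489-2ae7-59eb-961c-c1b5cda4b45a"
    CEPHADM = (f"/var/lib/ceph/{FSID}/cephadm."
               "31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    LVS = ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1",
           "/dev/ceph_vg2/ceph_lv2"]

    def create_osds(config_json: str) -> None:
        cmd = [
            "sudo", "python3", CEPHADM,
            "--env", "CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group",
            "--image", IMAGE,
            "--timeout", "895",
            "ceph-volume", "--fsid", FSID, "--config-json", "-", "--",
            "lvm", "batch", "--no-auto", *LVS, "--yes", "--no-systemd",
        ]
        # The minimal conf/keyring JSON reaches cephadm via --config-json -
        subprocess.run(cmd, input=config_json, text=True, check=True)

    # Usage sketch: create_osds(open("osd_config.json").read())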
Oct 11 04:41:42 compute-0 sudo[217100]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:41:42 compute-0 sshd-session[215868]: Invalid user postgres from 221.159.21.170 port 33368
Oct 11 04:41:42 compute-0 sudo[217212]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubaqfxgvglmscddkaypwudloibkhmhko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157701.3565314-1212-245341103180821/AnsiballZ_file.py'
Oct 11 04:41:42 compute-0 sudo[217212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:41:42 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v578: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:41:42 compute-0 podman[217242]: 2025-10-11 04:41:42.448569534 +0000 UTC m=+0.054442879 container create 9b00aa1c1f5bc87dfe2a787d5f97a99d031109796e426f8b6746c724b92331a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_archimedes, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:41:42 compute-0 systemd[1]: Started libpod-conmon-9b00aa1c1f5bc87dfe2a787d5f97a99d031109796e426f8b6746c724b92331a3.scope.
Oct 11 04:41:42 compute-0 python3.9[217223]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:41:42 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:41:42 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:41:42 compute-0 sudo[217212]: pam_unix(sudo:session): session closed for user root
Oct 11 04:41:42 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:41:42 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:41:42 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:41:42 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:41:42 compute-0 podman[217242]: 2025-10-11 04:41:42.4221312 +0000 UTC m=+0.028004605 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:41:42 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:41:42 compute-0 podman[217242]: 2025-10-11 04:41:42.547396695 +0000 UTC m=+0.153270020 container init 9b00aa1c1f5bc87dfe2a787d5f97a99d031109796e426f8b6746c724b92331a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 11 04:41:42 compute-0 podman[217242]: 2025-10-11 04:41:42.560575361 +0000 UTC m=+0.166448676 container start 9b00aa1c1f5bc87dfe2a787d5f97a99d031109796e426f8b6746c724b92331a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_archimedes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 11 04:41:42 compute-0 podman[217242]: 2025-10-11 04:41:42.564172625 +0000 UTC m=+0.170045940 container attach 9b00aa1c1f5bc87dfe2a787d5f97a99d031109796e426f8b6746c724b92331a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_archimedes, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:41:42 compute-0 charming_archimedes[217260]: 167 167
Oct 11 04:41:42 compute-0 systemd[1]: libpod-9b00aa1c1f5bc87dfe2a787d5f97a99d031109796e426f8b6746c724b92331a3.scope: Deactivated successfully.
Oct 11 04:41:42 compute-0 podman[217242]: 2025-10-11 04:41:42.567620006 +0000 UTC m=+0.173493351 container died 9b00aa1c1f5bc87dfe2a787d5f97a99d031109796e426f8b6746c724b92331a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_archimedes, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:41:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-57f88ea978c09d7f7d4cd09ed869150d7550215e2d90cd3b02ea50c7ea0b67b1-merged.mount: Deactivated successfully.
Oct 11 04:41:42 compute-0 podman[217242]: 2025-10-11 04:41:42.6112452 +0000 UTC m=+0.217118545 container remove 9b00aa1c1f5bc87dfe2a787d5f97a99d031109796e426f8b6746c724b92331a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_archimedes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 11 04:41:42 compute-0 systemd[1]: libpod-conmon-9b00aa1c1f5bc87dfe2a787d5f97a99d031109796e426f8b6746c724b92331a3.scope: Deactivated successfully.
Oct 11 04:41:42 compute-0 podman[217328]: 2025-10-11 04:41:42.798408589 +0000 UTC m=+0.057718685 container create ed2dfa267f425b43832ab14ad54e5a1bae5b43533cd8ac956405570c9331883c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_brown, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:41:42 compute-0 sshd-session[215868]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 04:41:42 compute-0 sshd-session[215868]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=221.159.21.170
Oct 11 04:41:42 compute-0 systemd[1]: Started libpod-conmon-ed2dfa267f425b43832ab14ad54e5a1bae5b43533cd8ac956405570c9331883c.scope.
Oct 11 04:41:42 compute-0 podman[217328]: 2025-10-11 04:41:42.775091728 +0000 UTC m=+0.034401824 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:41:42 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:41:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3098035e3d683dcd54d11439e91cf037cea1336725c6e2e38d48a4e07d62a318/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:41:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3098035e3d683dcd54d11439e91cf037cea1336725c6e2e38d48a4e07d62a318/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:41:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3098035e3d683dcd54d11439e91cf037cea1336725c6e2e38d48a4e07d62a318/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:41:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3098035e3d683dcd54d11439e91cf037cea1336725c6e2e38d48a4e07d62a318/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:41:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3098035e3d683dcd54d11439e91cf037cea1336725c6e2e38d48a4e07d62a318/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:41:42 compute-0 podman[217328]: 2025-10-11 04:41:42.911949328 +0000 UTC m=+0.171259464 container init ed2dfa267f425b43832ab14ad54e5a1bae5b43533cd8ac956405570c9331883c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_brown, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:41:42 compute-0 podman[217328]: 2025-10-11 04:41:42.924282501 +0000 UTC m=+0.183592587 container start ed2dfa267f425b43832ab14ad54e5a1bae5b43533cd8ac956405570c9331883c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_brown, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 11 04:41:42 compute-0 podman[217328]: 2025-10-11 04:41:42.929159439 +0000 UTC m=+0.188469575 container attach ed2dfa267f425b43832ab14ad54e5a1bae5b43533cd8ac956405570c9331883c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_brown, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 11 04:41:43 compute-0 sudo[217455]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpcgwxwljlfzktmofpoezqgdtnbghsio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157702.7332456-1224-5877797827077/AnsiballZ_stat.py'
Oct 11 04:41:43 compute-0 sudo[217455]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:41:43 compute-0 python3.9[217457]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:41:43 compute-0 sudo[217455]: pam_unix(sudo:session): session closed for user root
Oct 11 04:41:43 compute-0 ceph-mon[74243]: pgmap v578: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:41:43 compute-0 sudo[217555]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acwtepzpnolhxkifaxavmptedjpxtcob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157702.7332456-1224-5877797827077/AnsiballZ_file.py'
Oct 11 04:41:43 compute-0 sudo[217555]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:41:43 compute-0 podman[217515]: 2025-10-11 04:41:43.720507457 +0000 UTC m=+0.075441060 container health_status 7d738271425699243ade5f883e5c9759d51b96fd0ce562173990a66d2fbaffdf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 11 04:41:43 compute-0 hardcore_brown[217377]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:41:43 compute-0 hardcore_brown[217377]: --> relative data size: 1.0
Oct 11 04:41:43 compute-0 hardcore_brown[217377]: --> All data devices are unavailable
Oct 11 04:41:43 compute-0 python3.9[217562]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.pwqsq3qd recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:41:43 compute-0 systemd[1]: libpod-ed2dfa267f425b43832ab14ad54e5a1bae5b43533cd8ac956405570c9331883c.scope: Deactivated successfully.
Oct 11 04:41:43 compute-0 podman[217328]: 2025-10-11 04:41:43.910721156 +0000 UTC m=+1.170031212 container died ed2dfa267f425b43832ab14ad54e5a1bae5b43533cd8ac956405570c9331883c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_brown, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 11 04:41:43 compute-0 sudo[217555]: pam_unix(sudo:session): session closed for user root
Oct 11 04:41:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-3098035e3d683dcd54d11439e91cf037cea1336725c6e2e38d48a4e07d62a318-merged.mount: Deactivated successfully.
Oct 11 04:41:43 compute-0 podman[217328]: 2025-10-11 04:41:43.969429156 +0000 UTC m=+1.228739212 container remove ed2dfa267f425b43832ab14ad54e5a1bae5b43533cd8ac956405570c9331883c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_brown, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:41:43 compute-0 systemd[1]: libpod-conmon-ed2dfa267f425b43832ab14ad54e5a1bae5b43533cd8ac956405570c9331883c.scope: Deactivated successfully.
Oct 11 04:41:44 compute-0 sudo[217100]: pam_unix(sudo:session): session closed for user root
Oct 11 04:41:44 compute-0 sudo[217616]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:41:44 compute-0 sudo[217616]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:41:44 compute-0 sudo[217616]: pam_unix(sudo:session): session closed for user root
Oct 11 04:41:44 compute-0 sudo[217649]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:41:44 compute-0 sudo[217649]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:41:44 compute-0 sudo[217649]: pam_unix(sudo:session): session closed for user root
Oct 11 04:41:44 compute-0 sudo[217696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:41:44 compute-0 sudo[217696]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:41:44 compute-0 sudo[217696]: pam_unix(sudo:session): session closed for user root
Oct 11 04:41:44 compute-0 sudo[217743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -- lvm list --format json
Oct 11 04:41:44 compute-0 sudo[217743]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
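This invocation asks ceph-volume for its LVM inventory as JSON; the output appears further down in this log. A small sketch for condensing that JSON into an OSD-id to device mapping, assuming the layout shown there (top-level OSD-id keys, each holding a list of LV records):

    #!/usr/bin/env python3
    """Illustrative only: summarize `ceph-volume lvm list --format json` output.

    Assumes the layout visible later in this log: top-level keys are OSD ids,
    each mapping to a list of LV records with lv_path/devices/tags fields.
    """
    import json
    import sys

    def summarize(raw: str) -> dict:
        data = json.loads(raw)
        summary = {}
        for osd_id, lvs in data.items():
            for lv in lvs:
                summary[osd_id] = {
                    "lv_path": lv.get("lv_path"),
                    "devices": lv.get("devices", []),
                    "osd_fsid": lv.get("tags", {}).get("ceph.osd_fsid"),
                }
        return summary

    if __name__ == "__main__":
        # e.g. pipe the captured JSON: python3 summarize_lvm.py < lvm_list.json
        print(json.dumps(summarize(sys.stdin.read()), indent=2))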
Oct 11 04:41:44 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v579: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:41:44 compute-0 sudo[217851]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uarxrsojgbdijxupylqbjysaagwuirau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157704.11931-1236-175115291986128/AnsiballZ_stat.py'
Oct 11 04:41:44 compute-0 sudo[217851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:41:44 compute-0 python3.9[217855]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:41:44 compute-0 podman[217882]: 2025-10-11 04:41:44.650135882 +0000 UTC m=+0.051960794 container create 2658945bed4f5fcd53e171f3ae1903c108f1e91f98a60f20dfc033fec6370d17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_khayyam, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:41:44 compute-0 systemd[1]: Started libpod-conmon-2658945bed4f5fcd53e171f3ae1903c108f1e91f98a60f20dfc033fec6370d17.scope.
Oct 11 04:41:44 compute-0 sudo[217851]: pam_unix(sudo:session): session closed for user root
Oct 11 04:41:44 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:41:44 compute-0 podman[217882]: 2025-10-11 04:41:44.6211061 +0000 UTC m=+0.022930982 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:41:44 compute-0 podman[217882]: 2025-10-11 04:41:44.729376761 +0000 UTC m=+0.131201723 container init 2658945bed4f5fcd53e171f3ae1903c108f1e91f98a60f20dfc033fec6370d17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 11 04:41:44 compute-0 podman[217882]: 2025-10-11 04:41:44.738024867 +0000 UTC m=+0.139849779 container start 2658945bed4f5fcd53e171f3ae1903c108f1e91f98a60f20dfc033fec6370d17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 11 04:41:44 compute-0 youthful_khayyam[217900]: 167 167
Oct 11 04:41:44 compute-0 podman[217882]: 2025-10-11 04:41:44.742109875 +0000 UTC m=+0.143934787 container attach 2658945bed4f5fcd53e171f3ae1903c108f1e91f98a60f20dfc033fec6370d17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_khayyam, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:41:44 compute-0 systemd[1]: libpod-2658945bed4f5fcd53e171f3ae1903c108f1e91f98a60f20dfc033fec6370d17.scope: Deactivated successfully.
Oct 11 04:41:44 compute-0 podman[217882]: 2025-10-11 04:41:44.742836784 +0000 UTC m=+0.144661656 container died 2658945bed4f5fcd53e171f3ae1903c108f1e91f98a60f20dfc033fec6370d17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_khayyam, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True)
Oct 11 04:41:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-249ca3e3dfd3a86b781ba04ce3c4278f649f9d8983d5a1621a5dfa7330762eef-merged.mount: Deactivated successfully.
Oct 11 04:41:44 compute-0 podman[217882]: 2025-10-11 04:41:44.784809675 +0000 UTC m=+0.186634557 container remove 2658945bed4f5fcd53e171f3ae1903c108f1e91f98a60f20dfc033fec6370d17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:41:44 compute-0 systemd[1]: libpod-conmon-2658945bed4f5fcd53e171f3ae1903c108f1e91f98a60f20dfc033fec6370d17.scope: Deactivated successfully.
Oct 11 04:41:44 compute-0 sshd-session[215868]: Failed password for invalid user postgres from 221.159.21.170 port 33368 ssh2
Oct 11 04:41:44 compute-0 podman[217967]: 2025-10-11 04:41:44.943368594 +0000 UTC m=+0.047048305 container create b7b43cabbe9df4e9cd339a7144c5d9e6efee6f74ae5253c00db0a6ce1125b6be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_wiles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:41:44 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:41:44 compute-0 systemd[1]: Started libpod-conmon-b7b43cabbe9df4e9cd339a7144c5d9e6efee6f74ae5253c00db0a6ce1125b6be.scope.
Oct 11 04:41:44 compute-0 sudo[218012]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdimwipqglmkijevabsozbqktajyygnk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157704.11931-1236-175115291986128/AnsiballZ_file.py'
Oct 11 04:41:44 compute-0 sudo[218012]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:41:45 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:41:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3023aa87527769f79b54aa3a742a5ea68d91676e0f33055cfc6980abd66fb22c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:41:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3023aa87527769f79b54aa3a742a5ea68d91676e0f33055cfc6980abd66fb22c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:41:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3023aa87527769f79b54aa3a742a5ea68d91676e0f33055cfc6980abd66fb22c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:41:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3023aa87527769f79b54aa3a742a5ea68d91676e0f33055cfc6980abd66fb22c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:41:45 compute-0 podman[217967]: 2025-10-11 04:41:44.923950984 +0000 UTC m=+0.027630695 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:41:45 compute-0 podman[217967]: 2025-10-11 04:41:45.040555833 +0000 UTC m=+0.144235574 container init b7b43cabbe9df4e9cd339a7144c5d9e6efee6f74ae5253c00db0a6ce1125b6be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_wiles, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 11 04:41:45 compute-0 podman[217967]: 2025-10-11 04:41:45.050582706 +0000 UTC m=+0.154262407 container start b7b43cabbe9df4e9cd339a7144c5d9e6efee6f74ae5253c00db0a6ce1125b6be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_wiles, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:41:45 compute-0 podman[217967]: 2025-10-11 04:41:45.06636014 +0000 UTC m=+0.170039851 container attach b7b43cabbe9df4e9cd339a7144c5d9e6efee6f74ae5253c00db0a6ce1125b6be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_wiles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:41:45 compute-0 python3.9[218017]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:41:45 compute-0 sudo[218012]: pam_unix(sudo:session): session closed for user root
Oct 11 04:41:45 compute-0 ceph-mon[74243]: pgmap v579: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:41:45 compute-0 sudo[218173]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yssbdifwpmziwblpkzizyszjfaewtjem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157705.4288065-1249-277384245635657/AnsiballZ_command.py'
Oct 11 04:41:45 compute-0 jolly_wiles[218014]: {
Oct 11 04:41:45 compute-0 jolly_wiles[218014]:     "0": [
Oct 11 04:41:45 compute-0 sudo[218173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:41:45 compute-0 jolly_wiles[218014]:         {
Oct 11 04:41:45 compute-0 jolly_wiles[218014]:             "devices": [
Oct 11 04:41:45 compute-0 jolly_wiles[218014]:                 "/dev/loop3"
Oct 11 04:41:45 compute-0 jolly_wiles[218014]:             ],
Oct 11 04:41:45 compute-0 jolly_wiles[218014]:             "lv_name": "ceph_lv0",
Oct 11 04:41:45 compute-0 jolly_wiles[218014]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:41:45 compute-0 jolly_wiles[218014]:             "lv_size": "21470642176",
Oct 11 04:41:45 compute-0 jolly_wiles[218014]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29ed28f5-c2da-4c6f-bb64-dc7391248f4a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:41:45 compute-0 jolly_wiles[218014]:             "lv_uuid": "SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf",
Oct 11 04:41:45 compute-0 jolly_wiles[218014]:             "name": "ceph_lv0",
Oct 11 04:41:45 compute-0 jolly_wiles[218014]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:41:45 compute-0 jolly_wiles[218014]:             "tags": {
Oct 11 04:41:45 compute-0 jolly_wiles[218014]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:41:45 compute-0 jolly_wiles[218014]:                 "ceph.block_uuid": "SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf",
Oct 11 04:41:45 compute-0 jolly_wiles[218014]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:41:45 compute-0 jolly_wiles[218014]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:41:45 compute-0 jolly_wiles[218014]:                 "ceph.cluster_name": "ceph",
Oct 11 04:41:45 compute-0 jolly_wiles[218014]:                 "ceph.crush_device_class": "",
Oct 11 04:41:45 compute-0 jolly_wiles[218014]:                 "ceph.encrypted": "0",
Oct 11 04:41:45 compute-0 jolly_wiles[218014]:                 "ceph.osd_fsid": "29ed28f5-c2da-4c6f-bb64-dc7391248f4a",
Oct 11 04:41:45 compute-0 jolly_wiles[218014]:                 "ceph.osd_id": "0",
Oct 11 04:41:45 compute-0 jolly_wiles[218014]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:41:45 compute-0 jolly_wiles[218014]:                 "ceph.type": "block",
Oct 11 04:41:45 compute-0 jolly_wiles[218014]:                 "ceph.vdo": "0"
Oct 11 04:41:45 compute-0 jolly_wiles[218014]:             },
Oct 11 04:41:45 compute-0 jolly_wiles[218014]:             "type": "block",
Oct 11 04:41:45 compute-0 jolly_wiles[218014]:             "vg_name": "ceph_vg0"
Oct 11 04:41:45 compute-0 jolly_wiles[218014]:         }
Oct 11 04:41:45 compute-0 jolly_wiles[218014]:     ],
Oct 11 04:41:45 compute-0 jolly_wiles[218014]:     "1": [
Oct 11 04:41:45 compute-0 jolly_wiles[218014]:         {
Oct 11 04:41:45 compute-0 jolly_wiles[218014]:             "devices": [
Oct 11 04:41:45 compute-0 jolly_wiles[218014]:                 "/dev/loop4"
Oct 11 04:41:45 compute-0 jolly_wiles[218014]:             ],
Oct 11 04:41:45 compute-0 jolly_wiles[218014]:             "lv_name": "ceph_lv1",
Oct 11 04:41:45 compute-0 jolly_wiles[218014]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:41:45 compute-0 jolly_wiles[218014]:             "lv_size": "21470642176",
Oct 11 04:41:45 compute-0 jolly_wiles[218014]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=235849fc-4683-43e5-9b6a-a0d6f8d1cee8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:41:45 compute-0 jolly_wiles[218014]:             "lv_uuid": "N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol",
Oct 11 04:41:45 compute-0 jolly_wiles[218014]:             "name": "ceph_lv1",
Oct 11 04:41:45 compute-0 jolly_wiles[218014]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:41:45 compute-0 jolly_wiles[218014]:             "tags": {
Oct 11 04:41:45 compute-0 jolly_wiles[218014]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:41:45 compute-0 jolly_wiles[218014]:                 "ceph.block_uuid": "N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol",
Oct 11 04:41:45 compute-0 jolly_wiles[218014]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:41:45 compute-0 jolly_wiles[218014]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:41:45 compute-0 jolly_wiles[218014]:                 "ceph.cluster_name": "ceph",
Oct 11 04:41:45 compute-0 jolly_wiles[218014]:                 "ceph.crush_device_class": "",
Oct 11 04:41:45 compute-0 jolly_wiles[218014]:                 "ceph.encrypted": "0",
Oct 11 04:41:45 compute-0 jolly_wiles[218014]:                 "ceph.osd_fsid": "235849fc-4683-43e5-9b6a-a0d6f8d1cee8",
Oct 11 04:41:45 compute-0 jolly_wiles[218014]:                 "ceph.osd_id": "1",
Oct 11 04:41:45 compute-0 jolly_wiles[218014]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:41:45 compute-0 jolly_wiles[218014]:                 "ceph.type": "block",
Oct 11 04:41:45 compute-0 jolly_wiles[218014]:                 "ceph.vdo": "0"
Oct 11 04:41:45 compute-0 jolly_wiles[218014]:             },
Oct 11 04:41:45 compute-0 jolly_wiles[218014]:             "type": "block",
Oct 11 04:41:45 compute-0 jolly_wiles[218014]:             "vg_name": "ceph_vg1"
Oct 11 04:41:45 compute-0 jolly_wiles[218014]:         }
Oct 11 04:41:45 compute-0 jolly_wiles[218014]:     ],
Oct 11 04:41:45 compute-0 jolly_wiles[218014]:     "2": [
Oct 11 04:41:45 compute-0 jolly_wiles[218014]:         {
Oct 11 04:41:45 compute-0 jolly_wiles[218014]:             "devices": [
Oct 11 04:41:45 compute-0 jolly_wiles[218014]:                 "/dev/loop5"
Oct 11 04:41:45 compute-0 jolly_wiles[218014]:             ],
Oct 11 04:41:45 compute-0 jolly_wiles[218014]:             "lv_name": "ceph_lv2",
Oct 11 04:41:45 compute-0 jolly_wiles[218014]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:41:45 compute-0 jolly_wiles[218014]:             "lv_size": "21470642176",
Oct 11 04:41:45 compute-0 jolly_wiles[218014]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=30486846-2cc7-4787-8e36-0fb42ef328c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:41:45 compute-0 jolly_wiles[218014]:             "lv_uuid": "HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a",
Oct 11 04:41:45 compute-0 jolly_wiles[218014]:             "name": "ceph_lv2",
Oct 11 04:41:45 compute-0 jolly_wiles[218014]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:41:45 compute-0 jolly_wiles[218014]:             "tags": {
Oct 11 04:41:45 compute-0 jolly_wiles[218014]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:41:45 compute-0 jolly_wiles[218014]:                 "ceph.block_uuid": "HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a",
Oct 11 04:41:45 compute-0 jolly_wiles[218014]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:41:45 compute-0 jolly_wiles[218014]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:41:45 compute-0 jolly_wiles[218014]:                 "ceph.cluster_name": "ceph",
Oct 11 04:41:45 compute-0 jolly_wiles[218014]:                 "ceph.crush_device_class": "",
Oct 11 04:41:45 compute-0 jolly_wiles[218014]:                 "ceph.encrypted": "0",
Oct 11 04:41:45 compute-0 jolly_wiles[218014]:                 "ceph.osd_fsid": "30486846-2cc7-4787-8e36-0fb42ef328c5",
Oct 11 04:41:45 compute-0 jolly_wiles[218014]:                 "ceph.osd_id": "2",
Oct 11 04:41:45 compute-0 jolly_wiles[218014]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:41:45 compute-0 jolly_wiles[218014]:                 "ceph.type": "block",
Oct 11 04:41:45 compute-0 jolly_wiles[218014]:                 "ceph.vdo": "0"
Oct 11 04:41:45 compute-0 jolly_wiles[218014]:             },
Oct 11 04:41:45 compute-0 jolly_wiles[218014]:             "type": "block",
Oct 11 04:41:45 compute-0 jolly_wiles[218014]:             "vg_name": "ceph_vg2"
Oct 11 04:41:45 compute-0 jolly_wiles[218014]:         }
Oct 11 04:41:45 compute-0 jolly_wiles[218014]:     ]
Oct 11 04:41:45 compute-0 jolly_wiles[218014]: }
Oct 11 04:41:45 compute-0 systemd[1]: libpod-b7b43cabbe9df4e9cd339a7144c5d9e6efee6f74ae5253c00db0a6ce1125b6be.scope: Deactivated successfully.
Oct 11 04:41:45 compute-0 podman[217967]: 2025-10-11 04:41:45.832496916 +0000 UTC m=+0.936176627 container died b7b43cabbe9df4e9cd339a7144c5d9e6efee6f74ae5253c00db0a6ce1125b6be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_wiles, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:41:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-3023aa87527769f79b54aa3a742a5ea68d91676e0f33055cfc6980abd66fb22c-merged.mount: Deactivated successfully.
Oct 11 04:41:45 compute-0 podman[217967]: 2025-10-11 04:41:45.937768438 +0000 UTC m=+1.041448129 container remove b7b43cabbe9df4e9cd339a7144c5d9e6efee6f74ae5253c00db0a6ce1125b6be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_wiles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:41:45 compute-0 systemd[1]: libpod-conmon-b7b43cabbe9df4e9cd339a7144c5d9e6efee6f74ae5253c00db0a6ce1125b6be.scope: Deactivated successfully.
Oct 11 04:41:45 compute-0 sudo[217743]: pam_unix(sudo:session): session closed for user root
Oct 11 04:41:46 compute-0 sudo[218188]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:41:46 compute-0 sudo[218188]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:41:46 compute-0 sudo[218188]: pam_unix(sudo:session): session closed for user root
Oct 11 04:41:46 compute-0 python3.9[218175]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:41:46 compute-0 sudo[218213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:41:46 compute-0 sudo[218213]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:41:46 compute-0 sudo[218213]: pam_unix(sudo:session): session closed for user root
Oct 11 04:41:46 compute-0 sudo[218173]: pam_unix(sudo:session): session closed for user root
Oct 11 04:41:46 compute-0 sudo[218239]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:41:46 compute-0 sudo[218239]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:41:46 compute-0 sudo[218239]: pam_unix(sudo:session): session closed for user root
Oct 11 04:41:46 compute-0 sudo[218288]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -- raw list --format json
Oct 11 04:41:46 compute-0 sudo[218288]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:41:46 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v580: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:41:46 compute-0 podman[218406]: 2025-10-11 04:41:46.515989214 +0000 UTC m=+0.062748457 container create a5d0bafdf9c01df6220e8f8fb9e00ea24b4612acd284613ed1efb58fad8fcebe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_ramanujan, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3)
Oct 11 04:41:46 compute-0 systemd[1]: Started libpod-conmon-a5d0bafdf9c01df6220e8f8fb9e00ea24b4612acd284613ed1efb58fad8fcebe.scope.
Oct 11 04:41:46 compute-0 podman[218406]: 2025-10-11 04:41:46.473361256 +0000 UTC m=+0.020120539 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:41:46 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:41:46 compute-0 podman[218406]: 2025-10-11 04:41:46.597074171 +0000 UTC m=+0.143833504 container init a5d0bafdf9c01df6220e8f8fb9e00ea24b4612acd284613ed1efb58fad8fcebe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_ramanujan, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:41:46 compute-0 podman[218406]: 2025-10-11 04:41:46.60925111 +0000 UTC m=+0.156010373 container start a5d0bafdf9c01df6220e8f8fb9e00ea24b4612acd284613ed1efb58fad8fcebe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_ramanujan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 11 04:41:46 compute-0 confident_ramanujan[218423]: 167 167
Oct 11 04:41:46 compute-0 podman[218406]: 2025-10-11 04:41:46.612574508 +0000 UTC m=+0.159333801 container attach a5d0bafdf9c01df6220e8f8fb9e00ea24b4612acd284613ed1efb58fad8fcebe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_ramanujan, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:41:46 compute-0 systemd[1]: libpod-a5d0bafdf9c01df6220e8f8fb9e00ea24b4612acd284613ed1efb58fad8fcebe.scope: Deactivated successfully.
Oct 11 04:41:46 compute-0 conmon[218423]: conmon a5d0bafdf9c01df6220e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a5d0bafdf9c01df6220e8f8fb9e00ea24b4612acd284613ed1efb58fad8fcebe.scope/container/memory.events
Oct 11 04:41:46 compute-0 podman[218406]: 2025-10-11 04:41:46.614151499 +0000 UTC m=+0.160910812 container died a5d0bafdf9c01df6220e8f8fb9e00ea24b4612acd284613ed1efb58fad8fcebe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_ramanujan, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 11 04:41:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-19a259b4dac820b0af6452bedd8706a86d2a70e0dc6c1ec32400a80a9f51689b-merged.mount: Deactivated successfully.
Oct 11 04:41:46 compute-0 podman[218406]: 2025-10-11 04:41:46.644966697 +0000 UTC m=+0.191725950 container remove a5d0bafdf9c01df6220e8f8fb9e00ea24b4612acd284613ed1efb58fad8fcebe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_ramanujan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:41:46 compute-0 systemd[1]: libpod-conmon-a5d0bafdf9c01df6220e8f8fb9e00ea24b4612acd284613ed1efb58fad8fcebe.scope: Deactivated successfully.
Oct 11 04:41:46 compute-0 podman[218491]: 2025-10-11 04:41:46.832401964 +0000 UTC m=+0.042502187 container create 351911627f7f343e90335dd45bb94f8db1712a99132de3ce074817b44827e9bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_stonebraker, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:41:46 compute-0 systemd[1]: Started libpod-conmon-351911627f7f343e90335dd45bb94f8db1712a99132de3ce074817b44827e9bf.scope.
Oct 11 04:41:46 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:41:46 compute-0 podman[218491]: 2025-10-11 04:41:46.815850265 +0000 UTC m=+0.025950498 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:41:46 compute-0 sudo[218537]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzogdjnajtriuvoqxyqefiawvrreadjx ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1760157706.2866137-1257-111854034971146/AnsiballZ_edpm_nftables_from_files.py'
Oct 11 04:41:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7fb63560f33cf9b7716f916be601cc7ac140a08d1e97c64060ce5904dc373f2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:41:46 compute-0 sudo[218537]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:41:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7fb63560f33cf9b7716f916be601cc7ac140a08d1e97c64060ce5904dc373f2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:41:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7fb63560f33cf9b7716f916be601cc7ac140a08d1e97c64060ce5904dc373f2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:41:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7fb63560f33cf9b7716f916be601cc7ac140a08d1e97c64060ce5904dc373f2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:41:46 compute-0 podman[218491]: 2025-10-11 04:41:46.927632736 +0000 UTC m=+0.137732979 container init 351911627f7f343e90335dd45bb94f8db1712a99132de3ce074817b44827e9bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_stonebraker, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:41:46 compute-0 podman[218491]: 2025-10-11 04:41:46.939936987 +0000 UTC m=+0.150037240 container start 351911627f7f343e90335dd45bb94f8db1712a99132de3ce074817b44827e9bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_stonebraker, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 11 04:41:46 compute-0 podman[218491]: 2025-10-11 04:41:46.943844426 +0000 UTC m=+0.153944679 container attach 351911627f7f343e90335dd45bb94f8db1712a99132de3ce074817b44827e9bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_stonebraker, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 11 04:41:47 compute-0 python3[218540]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct 11 04:41:47 compute-0 sudo[218537]: pam_unix(sudo:session): session closed for user root
Oct 11 04:41:47 compute-0 sshd-session[215868]: Connection closed by invalid user postgres 221.159.21.170 port 33368 [preauth]
Oct 11 04:41:47 compute-0 ceph-mon[74243]: pgmap v580: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:41:47 compute-0 sudo[218700]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cksfimffywkeaqjzxmobdjxwufwopong ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157707.3267643-1265-265970319766904/AnsiballZ_stat.py'
Oct 11 04:41:47 compute-0 sudo[218700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:41:47 compute-0 python3.9[218705]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:41:47 compute-0 hardcore_stonebraker[218535]: {
Oct 11 04:41:47 compute-0 hardcore_stonebraker[218535]:     "235849fc-4683-43e5-9b6a-a0d6f8d1cee8": {
Oct 11 04:41:47 compute-0 hardcore_stonebraker[218535]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:41:47 compute-0 hardcore_stonebraker[218535]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:41:47 compute-0 hardcore_stonebraker[218535]:         "osd_id": 1,
Oct 11 04:41:47 compute-0 hardcore_stonebraker[218535]:         "osd_uuid": "235849fc-4683-43e5-9b6a-a0d6f8d1cee8",
Oct 11 04:41:47 compute-0 hardcore_stonebraker[218535]:         "type": "bluestore"
Oct 11 04:41:47 compute-0 hardcore_stonebraker[218535]:     },
Oct 11 04:41:47 compute-0 hardcore_stonebraker[218535]:     "29ed28f5-c2da-4c6f-bb64-dc7391248f4a": {
Oct 11 04:41:47 compute-0 hardcore_stonebraker[218535]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:41:47 compute-0 hardcore_stonebraker[218535]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:41:47 compute-0 hardcore_stonebraker[218535]:         "osd_id": 0,
Oct 11 04:41:47 compute-0 hardcore_stonebraker[218535]:         "osd_uuid": "29ed28f5-c2da-4c6f-bb64-dc7391248f4a",
Oct 11 04:41:47 compute-0 hardcore_stonebraker[218535]:         "type": "bluestore"
Oct 11 04:41:47 compute-0 hardcore_stonebraker[218535]:     },
Oct 11 04:41:47 compute-0 hardcore_stonebraker[218535]:     "30486846-2cc7-4787-8e36-0fb42ef328c5": {
Oct 11 04:41:47 compute-0 hardcore_stonebraker[218535]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:41:47 compute-0 hardcore_stonebraker[218535]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:41:47 compute-0 hardcore_stonebraker[218535]:         "osd_id": 2,
Oct 11 04:41:47 compute-0 hardcore_stonebraker[218535]:         "osd_uuid": "30486846-2cc7-4787-8e36-0fb42ef328c5",
Oct 11 04:41:47 compute-0 hardcore_stonebraker[218535]:         "type": "bluestore"
Oct 11 04:41:47 compute-0 hardcore_stonebraker[218535]:     }
Oct 11 04:41:47 compute-0 hardcore_stonebraker[218535]: }
Oct 11 04:41:48 compute-0 systemd[1]: libpod-351911627f7f343e90335dd45bb94f8db1712a99132de3ce074817b44827e9bf.scope: Deactivated successfully.
Oct 11 04:41:48 compute-0 systemd[1]: libpod-351911627f7f343e90335dd45bb94f8db1712a99132de3ce074817b44827e9bf.scope: Consumed 1.080s CPU time.
Oct 11 04:41:48 compute-0 podman[218491]: 2025-10-11 04:41:48.019760155 +0000 UTC m=+1.229860388 container died 351911627f7f343e90335dd45bb94f8db1712a99132de3ce074817b44827e9bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_stonebraker, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:41:48 compute-0 sudo[218700]: pam_unix(sudo:session): session closed for user root
Oct 11 04:41:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-d7fb63560f33cf9b7716f916be601cc7ac140a08d1e97c64060ce5904dc373f2-merged.mount: Deactivated successfully.
Oct 11 04:41:48 compute-0 podman[218491]: 2025-10-11 04:41:48.078748999 +0000 UTC m=+1.288849232 container remove 351911627f7f343e90335dd45bb94f8db1712a99132de3ce074817b44827e9bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_stonebraker, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:41:48 compute-0 systemd[1]: libpod-conmon-351911627f7f343e90335dd45bb94f8db1712a99132de3ce074817b44827e9bf.scope: Deactivated successfully.
Oct 11 04:41:48 compute-0 sudo[218288]: pam_unix(sudo:session): session closed for user root
Oct 11 04:41:48 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:41:48 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:41:48 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:41:48 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:41:48 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev 255f985a-eae2-46bb-8c58-5659b3371bb3 does not exist
Oct 11 04:41:48 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev 3c7928ed-9c03-4c27-b9b4-afbfe77ddb7f does not exist
Oct 11 04:41:48 compute-0 sudo[218759]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:41:48 compute-0 sudo[218759]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:41:48 compute-0 sudo[218759]: pam_unix(sudo:session): session closed for user root
Oct 11 04:41:48 compute-0 sudo[218807]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 04:41:48 compute-0 sudo[218856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpdabwgrfqnabblbumfcanusdrradqtp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157707.3267643-1265-265970319766904/AnsiballZ_file.py'
Oct 11 04:41:48 compute-0 sudo[218807]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:41:48 compute-0 sudo[218856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:41:48 compute-0 sudo[218807]: pam_unix(sudo:session): session closed for user root
Oct 11 04:41:48 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v581: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:41:48 compute-0 python3.9[218860]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:41:48 compute-0 sudo[218856]: pam_unix(sudo:session): session closed for user root
Oct 11 04:41:49 compute-0 sudo[219011]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uftauvaejdibvnbtaodvvqnschumpjcs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157708.6847427-1277-20451234476177/AnsiballZ_stat.py'
Oct 11 04:41:49 compute-0 sudo[219011]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:41:49 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:41:49 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:41:49 compute-0 ceph-mon[74243]: pgmap v581: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:41:49 compute-0 python3.9[219013]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:41:49 compute-0 sudo[219011]: pam_unix(sudo:session): session closed for user root
Oct 11 04:41:49 compute-0 sudo[219089]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbidurltztlrvkrfdwtjlkfecewnvhui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157708.6847427-1277-20451234476177/AnsiballZ_file.py'
Oct 11 04:41:49 compute-0 sudo[219089]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:41:49 compute-0 python3.9[219091]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:41:49 compute-0 sudo[219089]: pam_unix(sudo:session): session closed for user root
Oct 11 04:41:49 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:41:50 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v582: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:41:50 compute-0 sudo[219241]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krqovkzyzlgzsgizipexksgaiobwpjkw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157710.1455112-1289-170911572071405/AnsiballZ_stat.py'
Oct 11 04:41:50 compute-0 sudo[219241]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:41:50 compute-0 python3.9[219243]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:41:50 compute-0 sudo[219241]: pam_unix(sudo:session): session closed for user root
Oct 11 04:41:51 compute-0 sudo[219319]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usjtyvnonqgrpdkulilxuxcfujufodyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157710.1455112-1289-170911572071405/AnsiballZ_file.py'
Oct 11 04:41:51 compute-0 sudo[219319]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:41:51 compute-0 python3.9[219321]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:41:51 compute-0 sudo[219319]: pam_unix(sudo:session): session closed for user root
Oct 11 04:41:51 compute-0 ceph-mon[74243]: pgmap v582: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:41:51 compute-0 sudo[219471]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlsfvuwazfywofvdoturinttwineufcx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157711.5619385-1301-235615598261680/AnsiballZ_stat.py'
Oct 11 04:41:51 compute-0 sudo[219471]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:41:52 compute-0 python3.9[219473]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:41:52 compute-0 sudo[219471]: pam_unix(sudo:session): session closed for user root
Oct 11 04:41:52 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v583: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:41:52 compute-0 sudo[219549]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-saiidndtzkbvwwwoskbdshhjmfkjjlka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157711.5619385-1301-235615598261680/AnsiballZ_file.py'
Oct 11 04:41:52 compute-0 sudo[219549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:41:52 compute-0 python3.9[219551]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:41:52 compute-0 sudo[219549]: pam_unix(sudo:session): session closed for user root
Oct 11 04:41:53 compute-0 sudo[219701]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnsjrkvrqbfexthkawvtmocdgzxurehw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157712.874076-1313-209918486344623/AnsiballZ_stat.py'
Oct 11 04:41:53 compute-0 sudo[219701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:41:53 compute-0 ceph-mon[74243]: pgmap v583: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:41:53 compute-0 python3.9[219703]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:41:53 compute-0 sudo[219701]: pam_unix(sudo:session): session closed for user root
Oct 11 04:41:54 compute-0 sudo[219826]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbzspwrsuwdzipaldtcdqxwnzhfjoqor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157712.874076-1313-209918486344623/AnsiballZ_copy.py'
Oct 11 04:41:54 compute-0 sudo[219826]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:41:54 compute-0 python3.9[219828]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760157712.874076-1313-209918486344623/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:41:54 compute-0 sudo[219826]: pam_unix(sudo:session): session closed for user root
Oct 11 04:41:54 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v584: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:41:54 compute-0 sudo[219978]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbaslesunlnprtnkuvpvqpsnkhkkowvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157714.511105-1328-262179625686645/AnsiballZ_file.py'
Oct 11 04:41:54 compute-0 sudo[219978]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:41:54 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:41:55 compute-0 python3.9[219980]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:41:55 compute-0 sudo[219978]: pam_unix(sudo:session): session closed for user root
Oct 11 04:41:55 compute-0 ceph-mon[74243]: pgmap v584: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:41:55 compute-0 sudo[220130]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nyagvyngtahltobliluzqgysdojnakuc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157715.319884-1336-205574020284209/AnsiballZ_command.py'
Oct 11 04:41:55 compute-0 sudo[220130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:41:55 compute-0 python3.9[220132]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:41:56 compute-0 sudo[220130]: pam_unix(sudo:session): session closed for user root
Oct 11 04:41:56 compute-0 ceph-mgr[74542]: [balancer INFO root] Optimize plan auto_2025-10-11_04:41:56
Oct 11 04:41:56 compute-0 ceph-mgr[74542]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:41:56 compute-0 ceph-mgr[74542]: [balancer INFO root] do_upmap
Oct 11 04:41:56 compute-0 ceph-mgr[74542]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.data', '.mgr', 'backups', 'default.rgw.meta', 'default.rgw.log', 'default.rgw.control', 'vms', 'images', 'cephfs.cephfs.meta', '.rgw.root']
Oct 11 04:41:56 compute-0 ceph-mgr[74542]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:41:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:41:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:41:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:41:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:41:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:41:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:41:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:41:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:41:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:41:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:41:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:41:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:41:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:41:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:41:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:41:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:41:56 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v585: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:41:56 compute-0 sudo[220285]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfetvuplierkithtkdthkpxmrkdwreoy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157716.229938-1344-157946150892312/AnsiballZ_blockinfile.py'
Oct 11 04:41:56 compute-0 sudo[220285]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:41:57 compute-0 python3.9[220287]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:41:57 compute-0 sudo[220285]: pam_unix(sudo:session): session closed for user root
Oct 11 04:41:57 compute-0 ceph-mon[74243]: pgmap v585: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:41:57 compute-0 sudo[220437]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wofbtmkdkurteyirxrgxqiwjiduhgnws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157717.3090518-1353-168440529809085/AnsiballZ_command.py'
Oct 11 04:41:57 compute-0 sudo[220437]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:41:57 compute-0 python3.9[220439]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:41:57 compute-0 sudo[220437]: pam_unix(sudo:session): session closed for user root
Oct 11 04:41:58 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v586: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:41:58 compute-0 sudo[220590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xklabndqkgidrtrclrcmsytibgaklblq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157718.1655147-1361-254508397291217/AnsiballZ_stat.py'
Oct 11 04:41:58 compute-0 sudo[220590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:41:58 compute-0 python3.9[220592]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:41:58 compute-0 sudo[220590]: pam_unix(sudo:session): session closed for user root
Oct 11 04:41:59 compute-0 sudo[220744]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwuphvjaioxfoyhnantucrlrvmeaihpu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157718.8685634-1369-64859664542434/AnsiballZ_command.py'
Oct 11 04:41:59 compute-0 sudo[220744]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:41:59 compute-0 python3.9[220746]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:41:59 compute-0 sudo[220744]: pam_unix(sudo:session): session closed for user root
Oct 11 04:41:59 compute-0 ceph-mon[74243]: pgmap v586: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:41:59 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:41:59 compute-0 sudo[220899]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukiqjsqgksjexbcnfhggeaisgflmnvvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157719.6735308-1377-240208705672224/AnsiballZ_file.py'
Oct 11 04:42:00 compute-0 sudo[220899]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:42:00 compute-0 python3.9[220901]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:42:00 compute-0 sudo[220899]: pam_unix(sudo:session): session closed for user root
Oct 11 04:42:00 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v587: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:42:00 compute-0 sudo[221051]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgrmsxwutjofslihulipbcgzmahmyktt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157720.4012237-1385-67145396506642/AnsiballZ_stat.py'
Oct 11 04:42:00 compute-0 sudo[221051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:42:00 compute-0 python3.9[221053]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:42:00 compute-0 sudo[221051]: pam_unix(sudo:session): session closed for user root
Oct 11 04:42:01 compute-0 sudo[221174]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btpmcpddaewcuqibndmneitxnmieclhp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157720.4012237-1385-67145396506642/AnsiballZ_copy.py'
Oct 11 04:42:01 compute-0 sudo[221174]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:42:01 compute-0 ceph-mon[74243]: pgmap v587: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:42:01 compute-0 python3.9[221176]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760157720.4012237-1385-67145396506642/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:42:01 compute-0 sudo[221174]: pam_unix(sudo:session): session closed for user root
Oct 11 04:42:02 compute-0 sudo[221326]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mggyzpjidezpwxwkhqnlulaujpflxapg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157721.9435415-1400-161120708916683/AnsiballZ_stat.py'
Oct 11 04:42:02 compute-0 sudo[221326]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:42:02 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v588: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:42:02 compute-0 python3.9[221328]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:42:02 compute-0 sudo[221326]: pam_unix(sudo:session): session closed for user root
Oct 11 04:42:03 compute-0 sudo[221449]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-luxgvcfcifqylzfvgvusdoscgwbyxuix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157721.9435415-1400-161120708916683/AnsiballZ_copy.py'
Oct 11 04:42:03 compute-0 sudo[221449]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:42:03 compute-0 python3.9[221451]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760157721.9435415-1400-161120708916683/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:42:03 compute-0 sudo[221449]: pam_unix(sudo:session): session closed for user root
Oct 11 04:42:03 compute-0 ceph-mon[74243]: pgmap v588: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:42:03 compute-0 sudo[221601]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ssmiolghkmuxushmyrscqqrhsehigdij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157723.5564857-1415-174454805985603/AnsiballZ_stat.py'
Oct 11 04:42:03 compute-0 sudo[221601]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:42:04 compute-0 python3.9[221603]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:42:04 compute-0 sudo[221601]: pam_unix(sudo:session): session closed for user root
Oct 11 04:42:04 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v589: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:42:04 compute-0 sudo[221724]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxsogjcxowowxoouogcnascuckcbhqho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157723.5564857-1415-174454805985603/AnsiballZ_copy.py'
Oct 11 04:42:04 compute-0 sudo[221724]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:42:04 compute-0 python3.9[221726]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760157723.5564857-1415-174454805985603/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:42:04 compute-0 sudo[221724]: pam_unix(sudo:session): session closed for user root
Oct 11 04:42:04 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:42:05 compute-0 sudo[221876]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imvwqvtkxkmqccstitzcyqruiexaqqih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157725.0903456-1430-150495362728430/AnsiballZ_systemd.py'
Oct 11 04:42:05 compute-0 sudo[221876]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:42:05 compute-0 ceph-mon[74243]: pgmap v589: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:42:05 compute-0 python3.9[221878]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:42:05 compute-0 systemd[1]: Reloading.
Oct 11 04:42:05 compute-0 systemd-rc-local-generator[221906]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:42:05 compute-0 systemd-sysv-generator[221910]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:42:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:42:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:42:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:42:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:42:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:42:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:42:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:42:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:42:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:42:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:42:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:42:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:42:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 11 04:42:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:42:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:42:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:42:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:42:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:42:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:42:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:42:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:42:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:42:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 04:42:06 compute-0 systemd[1]: Reached target edpm_libvirt.target.
Oct 11 04:42:06 compute-0 sudo[221876]: pam_unix(sudo:session): session closed for user root
Oct 11 04:42:06 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v590: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:42:06 compute-0 sudo[222067]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikclpmekpqpdemefoduteuzqpwrhbtps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157726.4459488-1438-242397886258229/AnsiballZ_systemd.py'
Oct 11 04:42:06 compute-0 sudo[222067]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:42:07 compute-0 python3.9[222069]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct 11 04:42:07 compute-0 systemd[1]: Reloading.
Oct 11 04:42:07 compute-0 systemd-rc-local-generator[222093]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:42:07 compute-0 systemd-sysv-generator[222097]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:42:07 compute-0 ceph-mon[74243]: pgmap v590: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:42:07 compute-0 systemd[1]: Reloading.
Oct 11 04:42:07 compute-0 systemd-rc-local-generator[222135]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:42:07 compute-0 systemd-sysv-generator[222139]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:42:07 compute-0 sudo[222067]: pam_unix(sudo:session): session closed for user root
Oct 11 04:42:08 compute-0 podman[222142]: 2025-10-11 04:42:08.097707382 +0000 UTC m=+0.090262197 container health_status 086abfbe8dbe9e4742c5d0e8540a7653c1c0a5c00ecda3b6e948040a718938cb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Oct 11 04:42:08 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v591: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:42:08 compute-0 sshd-session[162490]: Connection closed by 192.168.122.30 port 60728
Oct 11 04:42:08 compute-0 sshd-session[162483]: pam_unix(sshd:session): session closed for user zuul
Oct 11 04:42:08 compute-0 systemd[1]: session-50.scope: Deactivated successfully.
Oct 11 04:42:08 compute-0 systemd[1]: session-50.scope: Consumed 3min 51.484s CPU time.
Oct 11 04:42:08 compute-0 systemd-logind[801]: Session 50 logged out. Waiting for processes to exit.
Oct 11 04:42:08 compute-0 systemd-logind[801]: Removed session 50.
Oct 11 04:42:09 compute-0 ceph-mon[74243]: pgmap v591: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:42:09 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:42:10 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v592: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:42:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:42:11.003 161813 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:42:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:42:11.004 161813 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:42:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:42:11.004 161813 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:42:11 compute-0 ceph-mon[74243]: pgmap v592: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:42:12 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v593: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:42:13 compute-0 ceph-mon[74243]: pgmap v593: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:42:13 compute-0 sshd-session[222191]: Accepted publickey for zuul from 192.168.122.30 port 41074 ssh2: ECDSA SHA256:fsXfQ2jtOYwhi5zevC2AykH2PVPp1BPcbFOVNPoZIaQ
Oct 11 04:42:13 compute-0 systemd-logind[801]: New session 51 of user zuul.
Oct 11 04:42:13 compute-0 systemd[1]: Started Session 51 of User zuul.
Oct 11 04:42:13 compute-0 sshd-session[222191]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 11 04:42:13 compute-0 podman[222195]: 2025-10-11 04:42:13.886219451 +0000 UTC m=+0.115386583 container health_status 7d738271425699243ade5f883e5c9759d51b96fd0ce562173990a66d2fbaffdf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Oct 11 04:42:14 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v594: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:42:14 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:42:14 compute-0 python3.9[222364]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 04:42:15 compute-0 ceph-mon[74243]: pgmap v594: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:42:16 compute-0 sudo[222518]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxnxjlehxzjmtktpdbzlbtnftkyuttdx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157735.6530552-34-260884721808545/AnsiballZ_file.py'
Oct 11 04:42:16 compute-0 sudo[222518]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:42:16 compute-0 python3.9[222520]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:42:16 compute-0 sudo[222518]: pam_unix(sudo:session): session closed for user root
Oct 11 04:42:16 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v595: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:42:16 compute-0 sudo[222670]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utyvskbrbdvfbnedatquwnoehtgqysov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157736.5797026-34-105159635568972/AnsiballZ_file.py'
Oct 11 04:42:16 compute-0 sudo[222670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:42:17 compute-0 python3.9[222672]: ansible-ansible.builtin.file Invoked with path=/etc/target setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:42:17 compute-0 sudo[222670]: pam_unix(sudo:session): session closed for user root
Oct 11 04:42:17 compute-0 ceph-mon[74243]: pgmap v595: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:42:17 compute-0 sudo[222822]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxvysamdlhidcptskwaooeirsiwunahu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157737.344877-34-230767041695238/AnsiballZ_file.py'
Oct 11 04:42:17 compute-0 sudo[222822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:42:17 compute-0 python3.9[222824]: ansible-ansible.builtin.file Invoked with path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:42:17 compute-0 sudo[222822]: pam_unix(sudo:session): session closed for user root
Oct 11 04:42:18 compute-0 sudo[222974]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjxcqzlmdnmobegtyilndflimtnxidcx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157738.005996-34-53550358044813/AnsiballZ_file.py'
Oct 11 04:42:18 compute-0 sudo[222974]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:42:18 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v596: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:42:18 compute-0 python3.9[222976]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/config-data selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 11 04:42:18 compute-0 sudo[222974]: pam_unix(sudo:session): session closed for user root
Oct 11 04:42:19 compute-0 sudo[223126]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzapmusklzjjmsmsptkkaacyalpdiutu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157738.7110631-34-19889035986961/AnsiballZ_file.py'
Oct 11 04:42:19 compute-0 sudo[223126]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:42:19 compute-0 python3.9[223128]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/config-data/ansible-generated/iscsid setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:42:19 compute-0 sudo[223126]: pam_unix(sudo:session): session closed for user root
Oct 11 04:42:19 compute-0 ceph-mon[74243]: pgmap v596: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:42:19 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:42:19 compute-0 sudo[223278]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfvpcfxpnswfasogladnlhjlphqfherx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157739.463325-70-189814841662137/AnsiballZ_stat.py'
Oct 11 04:42:19 compute-0 sudo[223278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:42:20 compute-0 python3.9[223280]: ansible-ansible.builtin.stat Invoked with path=/lib/systemd/system/iscsid.socket follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:42:20 compute-0 sudo[223278]: pam_unix(sudo:session): session closed for user root
Oct 11 04:42:20 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v597: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:42:21 compute-0 sudo[223432]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vopymxnimaapesyckegtwbjqlalgrgjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157740.4257522-78-15334213391916/AnsiballZ_systemd.py'
Oct 11 04:42:21 compute-0 sudo[223432]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:42:21 compute-0 python3.9[223434]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iscsid.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:42:21 compute-0 systemd[1]: Reloading.
Oct 11 04:42:21 compute-0 ceph-mon[74243]: pgmap v597: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:42:21 compute-0 systemd-rc-local-generator[223461]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:42:21 compute-0 systemd-sysv-generator[223467]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:42:21 compute-0 sudo[223432]: pam_unix(sudo:session): session closed for user root
Oct 11 04:42:22 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v598: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:42:22 compute-0 ceph-mon[74243]: pgmap v598: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:42:22 compute-0 sudo[223621]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aucvrnshntpjsfsnvpflaykcgdvkczdl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157742.1895084-86-90608692054717/AnsiballZ_service_facts.py'
Oct 11 04:42:22 compute-0 sudo[223621]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:42:23 compute-0 python3.9[223623]: ansible-ansible.builtin.service_facts Invoked
Oct 11 04:42:23 compute-0 network[223640]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 11 04:42:23 compute-0 network[223641]: 'network-scripts' will be removed from distribution in near future.
Oct 11 04:42:23 compute-0 network[223642]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 11 04:42:24 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v599: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:42:24 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:42:25 compute-0 ceph-mon[74243]: pgmap v599: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:42:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:42:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:42:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:42:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:42:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:42:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:42:26 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v600: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:42:26 compute-0 sudo[223621]: pam_unix(sudo:session): session closed for user root
Oct 11 04:42:27 compute-0 ceph-mon[74243]: pgmap v600: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:42:27 compute-0 sudo[223914]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tubohqjhdxjjkgahiuvgxlpszoiukvva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157747.094348-94-25865476866081/AnsiballZ_systemd.py'
Oct 11 04:42:27 compute-0 sudo[223914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:42:27 compute-0 python3.9[223916]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iscsi-starter.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:42:27 compute-0 systemd[1]: Reloading.
Oct 11 04:42:27 compute-0 systemd-rc-local-generator[223949]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:42:28 compute-0 systemd-sysv-generator[223952]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:42:28 compute-0 sudo[223914]: pam_unix(sudo:session): session closed for user root
Oct 11 04:42:28 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v601: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:42:29 compute-0 python3.9[224103]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:42:29 compute-0 ceph-mon[74243]: pgmap v601: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:42:29 compute-0 sudo[224253]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fngiflhheghvxlqenefpamivbrxhxgda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157749.195152-111-86376158094946/AnsiballZ_podman_container.py'
Oct 11 04:42:29 compute-0 sudo[224253]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:42:29 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:42:29 compute-0 python3.9[224255]: ansible-containers.podman.podman_container Invoked with command=/usr/sbin/iscsi-iname detach=False image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified name=iscsid_config rm=True tty=True executable=podman state=started debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct 11 04:42:30 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v602: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:42:30 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 11 04:42:31 compute-0 podman[224268]: 2025-10-11 04:42:31.369693802 +0000 UTC m=+1.345327882 image pull 5773abc4300b61c01f3353a0b9239f9a404bb272790b280574e4c56f72edaa72 quay.io/podified-antelope-centos9/openstack-iscsid:current-podified
Oct 11 04:42:31 compute-0 ceph-mon[74243]: pgmap v602: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:42:31 compute-0 podman[224330]: 2025-10-11 04:42:31.528826743 +0000 UTC m=+0.044029416 container create 4e76f7bc6aecc1df0d2190cb7fe2723ea5e488c02c35b9b1abaf3ac2134d0993 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid_config, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, tcib_managed=true)
Oct 11 04:42:31 compute-0 NetworkManager[44888]: <info>  [1760157751.5604] manager: (podman0): new Bridge device (/org/freedesktop/NetworkManager/Devices/21)
Oct 11 04:42:31 compute-0 kernel: podman0: port 1(veth0) entered blocking state
Oct 11 04:42:31 compute-0 kernel: podman0: port 1(veth0) entered disabled state
Oct 11 04:42:31 compute-0 kernel: veth0: entered allmulticast mode
Oct 11 04:42:31 compute-0 kernel: veth0: entered promiscuous mode
Oct 11 04:42:31 compute-0 NetworkManager[44888]: <info>  [1760157751.5806] manager: (veth0): new Veth device (/org/freedesktop/NetworkManager/Devices/22)
Oct 11 04:42:31 compute-0 kernel: podman0: port 1(veth0) entered blocking state
Oct 11 04:42:31 compute-0 kernel: podman0: port 1(veth0) entered forwarding state
Oct 11 04:42:31 compute-0 NetworkManager[44888]: <info>  [1760157751.5850] device (veth0): carrier: link connected
Oct 11 04:42:31 compute-0 NetworkManager[44888]: <info>  [1760157751.5856] device (podman0): carrier: link connected
Oct 11 04:42:31 compute-0 podman[224330]: 2025-10-11 04:42:31.51252153 +0000 UTC m=+0.027724223 image pull 5773abc4300b61c01f3353a0b9239f9a404bb272790b280574e4c56f72edaa72 quay.io/podified-antelope-centos9/openstack-iscsid:current-podified
Oct 11 04:42:31 compute-0 systemd-udevd[224361]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:42:31 compute-0 systemd-udevd[224363]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:42:31 compute-0 NetworkManager[44888]: <info>  [1760157751.6328] device (podman0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 04:42:31 compute-0 NetworkManager[44888]: <info>  [1760157751.6353] device (podman0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct 11 04:42:31 compute-0 NetworkManager[44888]: <info>  [1760157751.6377] device (podman0): Activation: starting connection 'podman0' (c946df9e-d3b1-452d-9c5d-eb0399b77142)
Oct 11 04:42:31 compute-0 NetworkManager[44888]: <info>  [1760157751.6391] device (podman0): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct 11 04:42:31 compute-0 NetworkManager[44888]: <info>  [1760157751.6398] device (podman0): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct 11 04:42:31 compute-0 NetworkManager[44888]: <info>  [1760157751.6401] device (podman0): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct 11 04:42:31 compute-0 NetworkManager[44888]: <info>  [1760157751.6405] device (podman0): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct 11 04:42:31 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 11 04:42:31 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 11 04:42:31 compute-0 NetworkManager[44888]: <info>  [1760157751.6745] device (podman0): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct 11 04:42:31 compute-0 NetworkManager[44888]: <info>  [1760157751.6750] device (podman0): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct 11 04:42:31 compute-0 NetworkManager[44888]: <info>  [1760157751.6765] device (podman0): Activation: successful, device activated.
Oct 11 04:42:31 compute-0 systemd[1]: iscsi.service: Unit cannot be reloaded because it is inactive.
Oct 11 04:42:31 compute-0 systemd[1]: Started libpod-conmon-4e76f7bc6aecc1df0d2190cb7fe2723ea5e488c02c35b9b1abaf3ac2134d0993.scope.
Oct 11 04:42:31 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:42:31 compute-0 podman[224330]: 2025-10-11 04:42:31.997122293 +0000 UTC m=+0.512325016 container init 4e76f7bc6aecc1df0d2190cb7fe2723ea5e488c02c35b9b1abaf3ac2134d0993 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid_config, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 11 04:42:32 compute-0 podman[224330]: 2025-10-11 04:42:32.007536116 +0000 UTC m=+0.522738829 container start 4e76f7bc6aecc1df0d2190cb7fe2723ea5e488c02c35b9b1abaf3ac2134d0993 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid_config, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team)
Oct 11 04:42:32 compute-0 podman[224330]: 2025-10-11 04:42:32.011780854 +0000 UTC m=+0.526983547 container attach 4e76f7bc6aecc1df0d2190cb7fe2723ea5e488c02c35b9b1abaf3ac2134d0993 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid_config, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251009)
Oct 11 04:42:32 compute-0 iscsid_config[224488]: iqn.1994-05.com.redhat:e0e1bf01d7e
Oct 11 04:42:32 compute-0 systemd[1]: libpod-4e76f7bc6aecc1df0d2190cb7fe2723ea5e488c02c35b9b1abaf3ac2134d0993.scope: Deactivated successfully.
Oct 11 04:42:32 compute-0 podman[224330]: 2025-10-11 04:42:32.015308433 +0000 UTC m=+0.530511136 container died 4e76f7bc6aecc1df0d2190cb7fe2723ea5e488c02c35b9b1abaf3ac2134d0993 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid_config, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009)
Oct 11 04:42:32 compute-0 kernel: podman0: port 1(veth0) entered disabled state
Oct 11 04:42:32 compute-0 kernel: veth0 (unregistering): left allmulticast mode
Oct 11 04:42:32 compute-0 kernel: veth0 (unregistering): left promiscuous mode
Oct 11 04:42:32 compute-0 kernel: podman0: port 1(veth0) entered disabled state
Oct 11 04:42:32 compute-0 NetworkManager[44888]: <info>  [1760157752.0783] device (podman0): state change: activated -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 04:42:32 compute-0 systemd[1]: run-netns-netns\x2dad77b886\x2dc754\x2da209\x2d8468\x2d08a76be68146.mount: Deactivated successfully.
Oct 11 04:42:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-7ec2f21d7c334fb9106629f62d1d0e75bf89f692905e05c97f7738398ed9c0d2-merged.mount: Deactivated successfully.
Oct 11 04:42:32 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4e76f7bc6aecc1df0d2190cb7fe2723ea5e488c02c35b9b1abaf3ac2134d0993-userdata-shm.mount: Deactivated successfully.
Oct 11 04:42:32 compute-0 podman[224330]: 2025-10-11 04:42:32.413953019 +0000 UTC m=+0.929155722 container remove 4e76f7bc6aecc1df0d2190cb7fe2723ea5e488c02c35b9b1abaf3ac2134d0993 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid_config, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 11 04:42:32 compute-0 python3.9[224255]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman run --name iscsid_config --detach=False --rm --tty=True quay.io/podified-antelope-centos9/openstack-iscsid:current-podified /usr/sbin/iscsi-iname
Oct 11 04:42:32 compute-0 systemd[1]: libpod-conmon-4e76f7bc6aecc1df0d2190cb7fe2723ea5e488c02c35b9b1abaf3ac2134d0993.scope: Deactivated successfully.
Oct 11 04:42:32 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v603: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:42:32 compute-0 python3.9[224255]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: Error generating systemd: 
                                             DEPRECATED command:
                                             It is recommended to use Quadlets for running containers and pods under systemd.
                                             
                                             Please refer to podman-systemd.unit(5) for details.
                                             Error: iscsid_config does not refer to a container or pod: no pod with name or ID iscsid_config found: no such pod: no container with name or ID "iscsid_config" found: no such container
Oct 11 04:42:32 compute-0 sudo[224253]: pam_unix(sudo:session): session closed for user root
Oct 11 04:42:32 compute-0 unix_chkpwd[224632]: password check failed for user (root)
Oct 11 04:42:32 compute-0 sshd-session[218861]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=221.159.21.170  user=root
Oct 11 04:42:33 compute-0 sudo[224731]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olpiuitbjvvfxyltimkuuoyggjhcvloo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157752.845905-119-82754154163097/AnsiballZ_stat.py'
Oct 11 04:42:33 compute-0 sudo[224731]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:42:33 compute-0 python3.9[224733]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:42:33 compute-0 sudo[224731]: pam_unix(sudo:session): session closed for user root
Oct 11 04:42:33 compute-0 ceph-mon[74243]: pgmap v603: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:42:34 compute-0 sudo[224854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drdtvkbqiexlhlakmpunhbpbnmjqwthz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157752.845905-119-82754154163097/AnsiballZ_copy.py'
Oct 11 04:42:34 compute-0 sudo[224854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:42:34 compute-0 python3.9[224856]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760157752.845905-119-82754154163097/.source.iscsi _original_basename=.pycw092n follow=False checksum=480406dd7bd34b93fe9a6598381de55acb2fdc3a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:42:34 compute-0 sudo[224854]: pam_unix(sudo:session): session closed for user root
Oct 11 04:42:34 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v604: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:42:34 compute-0 sshd-session[218861]: Failed password for root from 221.159.21.170 port 35260 ssh2
Oct 11 04:42:34 compute-0 sudo[225006]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zucgefgbinsdbmoivnscjmzdhjyavvgx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157754.5644753-134-45631326661793/AnsiballZ_file.py'
Oct 11 04:42:34 compute-0 sudo[225006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:42:34 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:42:35 compute-0 python3.9[225008]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:42:35 compute-0 sudo[225006]: pam_unix(sudo:session): session closed for user root
Oct 11 04:42:35 compute-0 sshd-session[218861]: Connection closed by authenticating user root 221.159.21.170 port 35260 [preauth]
Oct 11 04:42:35 compute-0 ceph-mon[74243]: pgmap v604: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:42:35 compute-0 python3.9[225159]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/iscsid.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:42:36 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v605: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:42:36 compute-0 sudo[225312]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rglpepalambxgmnyoqglklbptxymiabq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157756.1129086-151-199445871026963/AnsiballZ_lineinfile.py'
Oct 11 04:42:36 compute-0 sudo[225312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:42:36 compute-0 python3.9[225314]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:42:36 compute-0 sudo[225312]: pam_unix(sudo:session): session closed for user root
Oct 11 04:42:37 compute-0 ceph-mon[74243]: pgmap v605: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:42:37 compute-0 sudo[225464]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfzmgrrbiiflkblckmgtuuywcjljvnxr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157757.2307706-160-68459737834784/AnsiballZ_file.py'
Oct 11 04:42:37 compute-0 sudo[225464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:42:37 compute-0 python3.9[225466]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:42:37 compute-0 sudo[225464]: pam_unix(sudo:session): session closed for user root
Oct 11 04:42:38 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v606: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:42:38 compute-0 sudo[225630]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rciliksxndcuxmumeobhvbfiwnqizwna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157758.0443652-168-156814762647580/AnsiballZ_stat.py'
Oct 11 04:42:38 compute-0 sudo[225630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:42:38 compute-0 podman[225590]: 2025-10-11 04:42:38.490363938 +0000 UTC m=+0.131114751 container health_status 086abfbe8dbe9e4742c5d0e8540a7653c1c0a5c00ecda3b6e948040a718938cb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 11 04:42:38 compute-0 python3.9[225639]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:42:38 compute-0 sudo[225630]: pam_unix(sudo:session): session closed for user root
Oct 11 04:42:39 compute-0 sudo[225721]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdkjodfmitaatqlxodibegtmdkseruch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157758.0443652-168-156814762647580/AnsiballZ_file.py'
Oct 11 04:42:39 compute-0 sudo[225721]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:42:39 compute-0 python3.9[225723]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:42:39 compute-0 sudo[225721]: pam_unix(sudo:session): session closed for user root
Oct 11 04:42:39 compute-0 sshd-session[225131]: Invalid user test from 221.159.21.170 port 45646
Oct 11 04:42:39 compute-0 ceph-mon[74243]: pgmap v606: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:42:39 compute-0 sshd-session[225131]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 04:42:39 compute-0 sshd-session[225131]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=221.159.21.170
Oct 11 04:42:39 compute-0 sudo[225873]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxwnwovhhrlwokiziemqpxeuihatxldz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157759.3991516-168-223915699340886/AnsiballZ_stat.py'
Oct 11 04:42:39 compute-0 sudo[225873]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:42:39 compute-0 python3.9[225875]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:42:39 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:42:39 compute-0 sudo[225873]: pam_unix(sudo:session): session closed for user root
Oct 11 04:42:40 compute-0 sudo[225951]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tyxtozdyyhdqzxkfuovrqquoulnihydb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157759.3991516-168-223915699340886/AnsiballZ_file.py'
Oct 11 04:42:40 compute-0 sudo[225951]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:42:40 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v607: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:42:40 compute-0 python3.9[225953]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:42:40 compute-0 sudo[225951]: pam_unix(sudo:session): session closed for user root
Oct 11 04:42:41 compute-0 sudo[226103]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oyjlpixlrmvneffvvixbjkkbyhbzpabh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157760.6879961-191-221427196400232/AnsiballZ_file.py'
Oct 11 04:42:41 compute-0 sudo[226103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:42:41 compute-0 python3.9[226105]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:42:41 compute-0 sudo[226103]: pam_unix(sudo:session): session closed for user root
Oct 11 04:42:41 compute-0 sshd-session[225131]: Failed password for invalid user test from 221.159.21.170 port 45646 ssh2
Oct 11 04:42:41 compute-0 ceph-mon[74243]: pgmap v607: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:42:42 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 11 04:42:42 compute-0 sudo[226255]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvsmtuaccdlqgafllldsnzoyadlfqovb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157761.7609365-199-127933852947545/AnsiballZ_stat.py'
Oct 11 04:42:42 compute-0 sudo[226255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:42:42 compute-0 python3.9[226257]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:42:42 compute-0 sudo[226255]: pam_unix(sudo:session): session closed for user root
Oct 11 04:42:42 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v608: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:42:42 compute-0 sudo[226333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkujmkmdkkkdnsxtfnisonkhuaclddbk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157761.7609365-199-127933852947545/AnsiballZ_file.py'
Oct 11 04:42:42 compute-0 sudo[226333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:42:42 compute-0 python3.9[226335]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:42:42 compute-0 sudo[226333]: pam_unix(sudo:session): session closed for user root
Oct 11 04:42:43 compute-0 sshd-session[225131]: Connection closed by invalid user test 221.159.21.170 port 45646 [preauth]
Oct 11 04:42:43 compute-0 sudo[226485]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijhgrcfybjqurkfdifrasvcoqvfdiyap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157763.1447167-211-131591978357618/AnsiballZ_stat.py'
Oct 11 04:42:43 compute-0 sudo[226485]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:42:43 compute-0 ceph-mon[74243]: pgmap v608: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:42:43 compute-0 python3.9[226487]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:42:43 compute-0 sudo[226485]: pam_unix(sudo:session): session closed for user root
Oct 11 04:42:44 compute-0 sudo[226578]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghmfsxcglgeohimfbhiuporijqpolmpz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157763.1447167-211-131591978357618/AnsiballZ_file.py'
Oct 11 04:42:44 compute-0 sudo[226578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:42:44 compute-0 podman[226539]: 2025-10-11 04:42:44.139286551 +0000 UTC m=+0.080482269 container health_status 7d738271425699243ade5f883e5c9759d51b96fd0ce562173990a66d2fbaffdf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct 11 04:42:44 compute-0 python3.9[226586]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:42:44 compute-0 sudo[226578]: pam_unix(sudo:session): session closed for user root
Oct 11 04:42:44 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v609: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:42:44 compute-0 ceph-mon[74243]: pgmap v609: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:42:44 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:42:44 compute-0 sudo[226736]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpvtaxpxthpdpgthcvelvmjbffvxtgzf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157764.6176572-223-32834955284347/AnsiballZ_systemd.py'
Oct 11 04:42:44 compute-0 sudo[226736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:42:45 compute-0 python3.9[226738]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:42:45 compute-0 systemd[1]: Reloading.
Oct 11 04:42:45 compute-0 systemd-sysv-generator[226770]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:42:45 compute-0 systemd-rc-local-generator[226766]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:42:45 compute-0 sudo[226736]: pam_unix(sudo:session): session closed for user root
Oct 11 04:42:46 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v610: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:42:46 compute-0 sudo[226926]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhidooxmhkbuzxqvwmanvttqkmaccrmi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157766.03289-231-165340385209315/AnsiballZ_stat.py'
Oct 11 04:42:46 compute-0 sudo[226926]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:42:46 compute-0 python3.9[226928]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:42:46 compute-0 sudo[226926]: pam_unix(sudo:session): session closed for user root
Oct 11 04:42:46 compute-0 sudo[227004]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxmclgavgdcvlphjqobxolamgjcqmxir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157766.03289-231-165340385209315/AnsiballZ_file.py'
Oct 11 04:42:46 compute-0 sudo[227004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:42:47 compute-0 python3.9[227006]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:42:47 compute-0 sudo[227004]: pam_unix(sudo:session): session closed for user root
Oct 11 04:42:47 compute-0 ceph-mon[74243]: pgmap v610: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:42:47 compute-0 sudo[227156]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xuutyvhvsimeaadwphodzyrmeaitdkpq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157767.3655784-243-109826865817434/AnsiballZ_stat.py'
Oct 11 04:42:47 compute-0 sudo[227156]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:42:47 compute-0 python3.9[227158]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:42:47 compute-0 sudo[227156]: pam_unix(sudo:session): session closed for user root
Oct 11 04:42:48 compute-0 sudo[227234]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dogabowjsitninsyykihgqjgtnyqzdcs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157767.3655784-243-109826865817434/AnsiballZ_file.py'
Oct 11 04:42:48 compute-0 sudo[227234]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:42:48 compute-0 sudo[227237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:42:48 compute-0 sudo[227237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:42:48 compute-0 sudo[227237]: pam_unix(sudo:session): session closed for user root
Oct 11 04:42:48 compute-0 python3.9[227236]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:42:48 compute-0 sudo[227234]: pam_unix(sudo:session): session closed for user root
Oct 11 04:42:48 compute-0 sudo[227262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:42:48 compute-0 sudo[227262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:42:48 compute-0 sudo[227262]: pam_unix(sudo:session): session closed for user root
Oct 11 04:42:48 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v611: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:42:48 compute-0 sudo[227287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:42:48 compute-0 sudo[227287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:42:48 compute-0 sudo[227287]: pam_unix(sudo:session): session closed for user root
Oct 11 04:42:48 compute-0 sudo[227336]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 04:42:48 compute-0 sudo[227336]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:42:48 compute-0 sudo[227501]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brpjrifursajljqtbwirmnjcpiyfhsum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157768.5620346-255-225215322962462/AnsiballZ_systemd.py'
Oct 11 04:42:48 compute-0 sudo[227501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:42:49 compute-0 sudo[227336]: pam_unix(sudo:session): session closed for user root
Oct 11 04:42:49 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:42:49 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:42:49 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:42:49 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:42:49 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:42:49 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:42:49 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev f49c8684-eccb-4db1-90a5-88e0d295ed49 does not exist
Oct 11 04:42:49 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev afd25e65-9ffe-418d-a276-428a0b24b17e does not exist
Oct 11 04:42:49 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev c45cb9c6-ac8e-4d08-8a39-b4492cc8b51a does not exist
Oct 11 04:42:49 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:42:49 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:42:49 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:42:49 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:42:49 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:42:49 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:42:49 compute-0 sudo[227518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:42:49 compute-0 sudo[227518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:42:49 compute-0 sudo[227518]: pam_unix(sudo:session): session closed for user root
Oct 11 04:42:49 compute-0 python3.9[227505]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:42:49 compute-0 systemd[1]: Reloading.
Oct 11 04:42:49 compute-0 systemd-rc-local-generator[227593]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:42:49 compute-0 systemd-sysv-generator[227597]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:42:49 compute-0 ceph-mon[74243]: pgmap v611: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:42:49 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:42:49 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:42:49 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:42:49 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:42:49 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:42:49 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:42:49 compute-0 sudo[227543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:42:49 compute-0 sudo[227543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:42:49 compute-0 sudo[227543]: pam_unix(sudo:session): session closed for user root
Oct 11 04:42:49 compute-0 systemd[1]: Starting Create netns directory...
Oct 11 04:42:49 compute-0 sudo[227604]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:42:49 compute-0 sudo[227604]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:42:49 compute-0 sudo[227604]: pam_unix(sudo:session): session closed for user root
Oct 11 04:42:49 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 11 04:42:49 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 11 04:42:49 compute-0 systemd[1]: Finished Create netns directory.
Oct 11 04:42:49 compute-0 sudo[227501]: pam_unix(sudo:session): session closed for user root
Oct 11 04:42:49 compute-0 sudo[227633]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 04:42:49 compute-0 sudo[227633]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:42:49 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:42:50 compute-0 podman[227751]: 2025-10-11 04:42:50.062930961 +0000 UTC m=+0.050405757 container create 760fd47cabc31725cc619ad2ac8d445f3b6a18178dfc6242a65b8318d1e8cafa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_driscoll, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:42:50 compute-0 systemd[1]: Started libpod-conmon-760fd47cabc31725cc619ad2ac8d445f3b6a18178dfc6242a65b8318d1e8cafa.scope.
Oct 11 04:42:50 compute-0 podman[227751]: 2025-10-11 04:42:50.036231785 +0000 UTC m=+0.023706631 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:42:50 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:42:50 compute-0 podman[227751]: 2025-10-11 04:42:50.153029083 +0000 UTC m=+0.140503919 container init 760fd47cabc31725cc619ad2ac8d445f3b6a18178dfc6242a65b8318d1e8cafa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_driscoll, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:42:50 compute-0 podman[227751]: 2025-10-11 04:42:50.164233237 +0000 UTC m=+0.151708003 container start 760fd47cabc31725cc619ad2ac8d445f3b6a18178dfc6242a65b8318d1e8cafa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_driscoll, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 11 04:42:50 compute-0 podman[227751]: 2025-10-11 04:42:50.167197802 +0000 UTC m=+0.154672608 container attach 760fd47cabc31725cc619ad2ac8d445f3b6a18178dfc6242a65b8318d1e8cafa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_driscoll, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:42:50 compute-0 condescending_driscoll[227804]: 167 167
Oct 11 04:42:50 compute-0 systemd[1]: libpod-760fd47cabc31725cc619ad2ac8d445f3b6a18178dfc6242a65b8318d1e8cafa.scope: Deactivated successfully.
Oct 11 04:42:50 compute-0 podman[227751]: 2025-10-11 04:42:50.171261195 +0000 UTC m=+0.158735991 container died 760fd47cabc31725cc619ad2ac8d445f3b6a18178dfc6242a65b8318d1e8cafa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_driscoll, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 11 04:42:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-2eb4a2d1e09b500e7a839f62b66ac6ede077e6fe228c03fb40246ce1a6ccb894-merged.mount: Deactivated successfully.
Oct 11 04:42:50 compute-0 podman[227751]: 2025-10-11 04:42:50.210603531 +0000 UTC m=+0.198078297 container remove 760fd47cabc31725cc619ad2ac8d445f3b6a18178dfc6242a65b8318d1e8cafa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_driscoll, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:42:50 compute-0 systemd[1]: libpod-conmon-760fd47cabc31725cc619ad2ac8d445f3b6a18178dfc6242a65b8318d1e8cafa.scope: Deactivated successfully.
Oct 11 04:42:50 compute-0 sudo[227883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzikypyhphjcuxlskzbnqegnpmkohxcn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157769.949673-265-17909574374090/AnsiballZ_file.py'
Oct 11 04:42:50 compute-0 sudo[227883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:42:50 compute-0 podman[227891]: 2025-10-11 04:42:50.362915479 +0000 UTC m=+0.034868684 container create 2dc859b4437cfba18af46f4016529a4626d4c7414e22afdeac20f5f4bec1591b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_bassi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:42:50 compute-0 systemd[1]: Started libpod-conmon-2dc859b4437cfba18af46f4016529a4626d4c7414e22afdeac20f5f4bec1591b.scope.
Oct 11 04:42:50 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:42:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4218ca7f21190e6b4ae4174967e71125d7d340469995b67ba41dd1ef82870aa1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:42:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4218ca7f21190e6b4ae4174967e71125d7d340469995b67ba41dd1ef82870aa1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:42:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4218ca7f21190e6b4ae4174967e71125d7d340469995b67ba41dd1ef82870aa1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:42:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4218ca7f21190e6b4ae4174967e71125d7d340469995b67ba41dd1ef82870aa1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:42:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4218ca7f21190e6b4ae4174967e71125d7d340469995b67ba41dd1ef82870aa1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:42:50 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v612: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:42:50 compute-0 podman[227891]: 2025-10-11 04:42:50.34796994 +0000 UTC m=+0.019923165 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:42:50 compute-0 podman[227891]: 2025-10-11 04:42:50.46680832 +0000 UTC m=+0.138761545 container init 2dc859b4437cfba18af46f4016529a4626d4c7414e22afdeac20f5f4bec1591b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Oct 11 04:42:50 compute-0 podman[227891]: 2025-10-11 04:42:50.473395227 +0000 UTC m=+0.145348462 container start 2dc859b4437cfba18af46f4016529a4626d4c7414e22afdeac20f5f4bec1591b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_bassi, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 11 04:42:50 compute-0 podman[227891]: 2025-10-11 04:42:50.47707147 +0000 UTC m=+0.149024695 container attach 2dc859b4437cfba18af46f4016529a4626d4c7414e22afdeac20f5f4bec1591b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_bassi, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 11 04:42:50 compute-0 python3.9[227885]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:42:50 compute-0 sudo[227883]: pam_unix(sudo:session): session closed for user root
Oct 11 04:42:51 compute-0 sudo[228062]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbiygzzmodsremjunsaqfgljufmcvndv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157770.6823447-273-254833410345712/AnsiballZ_stat.py'
Oct 11 04:42:51 compute-0 sudo[228062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:42:51 compute-0 python3.9[228064]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/iscsid/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:42:51 compute-0 sudo[228062]: pam_unix(sudo:session): session closed for user root
Oct 11 04:42:51 compute-0 ceph-mon[74243]: pgmap v612: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:42:51 compute-0 priceless_bassi[227908]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:42:51 compute-0 priceless_bassi[227908]: --> relative data size: 1.0
Oct 11 04:42:51 compute-0 priceless_bassi[227908]: --> All data devices are unavailable
Oct 11 04:42:51 compute-0 systemd[1]: libpod-2dc859b4437cfba18af46f4016529a4626d4c7414e22afdeac20f5f4bec1591b.scope: Deactivated successfully.
Oct 11 04:42:51 compute-0 podman[227891]: 2025-10-11 04:42:51.549376756 +0000 UTC m=+1.221329991 container died 2dc859b4437cfba18af46f4016529a4626d4c7414e22afdeac20f5f4bec1591b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 04:42:51 compute-0 systemd[1]: libpod-2dc859b4437cfba18af46f4016529a4626d4c7414e22afdeac20f5f4bec1591b.scope: Consumed 1.014s CPU time.
Oct 11 04:42:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-4218ca7f21190e6b4ae4174967e71125d7d340469995b67ba41dd1ef82870aa1-merged.mount: Deactivated successfully.
Oct 11 04:42:51 compute-0 podman[227891]: 2025-10-11 04:42:51.618911697 +0000 UTC m=+1.290864942 container remove 2dc859b4437cfba18af46f4016529a4626d4c7414e22afdeac20f5f4bec1591b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True)
Oct 11 04:42:51 compute-0 systemd[1]: libpod-conmon-2dc859b4437cfba18af46f4016529a4626d4c7414e22afdeac20f5f4bec1591b.scope: Deactivated successfully.
Oct 11 04:42:51 compute-0 sudo[227633]: pam_unix(sudo:session): session closed for user root
Oct 11 04:42:51 compute-0 sudo[228197]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:42:51 compute-0 sudo[228197]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:42:51 compute-0 sudo[228197]: pam_unix(sudo:session): session closed for user root
Oct 11 04:42:51 compute-0 sudo[228251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qopjhudukjvrzlqnldcdcojtrycyljrc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157770.6823447-273-254833410345712/AnsiballZ_copy.py'
Oct 11 04:42:51 compute-0 sudo[228251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:42:51 compute-0 sudo[228248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:42:51 compute-0 sudo[228248]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:42:51 compute-0 sudo[228248]: pam_unix(sudo:session): session closed for user root
Oct 11 04:42:51 compute-0 sudo[228276]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:42:51 compute-0 sudo[228276]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:42:51 compute-0 sudo[228276]: pam_unix(sudo:session): session closed for user root
Oct 11 04:42:51 compute-0 python3.9[228272]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/iscsid/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760157770.6823447-273-254833410345712/.source _original_basename=healthcheck follow=False checksum=2e1237e7fe015c809b173c52e24cfb87132f4344 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:42:51 compute-0 sudo[228251]: pam_unix(sudo:session): session closed for user root
Oct 11 04:42:51 compute-0 sudo[228301]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -- lvm list --format json
Oct 11 04:42:51 compute-0 sudo[228301]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:42:52 compute-0 podman[228414]: 2025-10-11 04:42:52.379195042 +0000 UTC m=+0.044150429 container create 456b03142aea85dfb3b3a3fc55b6b0b6394669c2cb1b74f1318d10563035a93b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_rhodes, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 11 04:42:52 compute-0 systemd[1]: Started libpod-conmon-456b03142aea85dfb3b3a3fc55b6b0b6394669c2cb1b74f1318d10563035a93b.scope.
Oct 11 04:42:52 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v613: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:42:52 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:42:52 compute-0 podman[228414]: 2025-10-11 04:42:52.452826397 +0000 UTC m=+0.117781824 container init 456b03142aea85dfb3b3a3fc55b6b0b6394669c2cb1b74f1318d10563035a93b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_rhodes, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 11 04:42:52 compute-0 podman[228414]: 2025-10-11 04:42:52.360907459 +0000 UTC m=+0.025862876 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:42:52 compute-0 podman[228414]: 2025-10-11 04:42:52.461121857 +0000 UTC m=+0.126077244 container start 456b03142aea85dfb3b3a3fc55b6b0b6394669c2cb1b74f1318d10563035a93b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_rhodes, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:42:52 compute-0 podman[228414]: 2025-10-11 04:42:52.464454281 +0000 UTC m=+0.129409708 container attach 456b03142aea85dfb3b3a3fc55b6b0b6394669c2cb1b74f1318d10563035a93b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_rhodes, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:42:52 compute-0 practical_rhodes[228459]: 167 167
Oct 11 04:42:52 compute-0 systemd[1]: libpod-456b03142aea85dfb3b3a3fc55b6b0b6394669c2cb1b74f1318d10563035a93b.scope: Deactivated successfully.
Oct 11 04:42:52 compute-0 podman[228414]: 2025-10-11 04:42:52.468846163 +0000 UTC m=+0.133801550 container died 456b03142aea85dfb3b3a3fc55b6b0b6394669c2cb1b74f1318d10563035a93b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_rhodes, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 11 04:42:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-2b82b4c2940b51f32ab928e1cd278cb194a15b3399e1f5294be39ce3fc26249a-merged.mount: Deactivated successfully.
Oct 11 04:42:52 compute-0 podman[228414]: 2025-10-11 04:42:52.518513331 +0000 UTC m=+0.183468718 container remove 456b03142aea85dfb3b3a3fc55b6b0b6394669c2cb1b74f1318d10563035a93b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 04:42:52 compute-0 systemd[1]: libpod-conmon-456b03142aea85dfb3b3a3fc55b6b0b6394669c2cb1b74f1318d10563035a93b.scope: Deactivated successfully.
Oct 11 04:42:52 compute-0 sudo[228550]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sporhckxkscfkorvwvocekviynwlnqrb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157772.3062904-290-203221008203664/AnsiballZ_file.py'
Oct 11 04:42:52 compute-0 sudo[228550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:42:52 compute-0 podman[228558]: 2025-10-11 04:42:52.742400831 +0000 UTC m=+0.046462658 container create 5d856dafd83f75af40158b35deab215e6fc3800f18246f639ea4c414cacc3d34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_bassi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 11 04:42:52 compute-0 systemd[1]: Started libpod-conmon-5d856dafd83f75af40158b35deab215e6fc3800f18246f639ea4c414cacc3d34.scope.
Oct 11 04:42:52 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:42:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d939af8ce4d869b37983592e61f4647cd946479f302d5fdb5fae056ecace15a6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:42:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d939af8ce4d869b37983592e61f4647cd946479f302d5fdb5fae056ecace15a6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:42:52 compute-0 podman[228558]: 2025-10-11 04:42:52.724263922 +0000 UTC m=+0.028325759 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:42:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d939af8ce4d869b37983592e61f4647cd946479f302d5fdb5fae056ecace15a6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:42:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d939af8ce4d869b37983592e61f4647cd946479f302d5fdb5fae056ecace15a6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:42:52 compute-0 podman[228558]: 2025-10-11 04:42:52.835348235 +0000 UTC m=+0.139410112 container init 5d856dafd83f75af40158b35deab215e6fc3800f18246f639ea4c414cacc3d34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Oct 11 04:42:52 compute-0 podman[228558]: 2025-10-11 04:42:52.847881682 +0000 UTC m=+0.151943539 container start 5d856dafd83f75af40158b35deab215e6fc3800f18246f639ea4c414cacc3d34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_bassi, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 11 04:42:52 compute-0 podman[228558]: 2025-10-11 04:42:52.85214689 +0000 UTC m=+0.156208787 container attach 5d856dafd83f75af40158b35deab215e6fc3800f18246f639ea4c414cacc3d34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_bassi, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:42:52 compute-0 python3.9[228554]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:42:52 compute-0 sudo[228550]: pam_unix(sudo:session): session closed for user root
Oct 11 04:42:53 compute-0 sudo[228729]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zutyynqnhygsdhfcdyxnuzwhdwvrmyvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157773.1232374-298-117858808820325/AnsiballZ_stat.py'
Oct 11 04:42:53 compute-0 sudo[228729]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:42:53 compute-0 ceph-mon[74243]: pgmap v613: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:42:53 compute-0 cool_bassi[228575]: {
Oct 11 04:42:53 compute-0 cool_bassi[228575]:     "0": [
Oct 11 04:42:53 compute-0 cool_bassi[228575]:         {
Oct 11 04:42:53 compute-0 cool_bassi[228575]:             "devices": [
Oct 11 04:42:53 compute-0 cool_bassi[228575]:                 "/dev/loop3"
Oct 11 04:42:53 compute-0 cool_bassi[228575]:             ],
Oct 11 04:42:53 compute-0 cool_bassi[228575]:             "lv_name": "ceph_lv0",
Oct 11 04:42:53 compute-0 cool_bassi[228575]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:42:53 compute-0 cool_bassi[228575]:             "lv_size": "21470642176",
Oct 11 04:42:53 compute-0 cool_bassi[228575]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29ed28f5-c2da-4c6f-bb64-dc7391248f4a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:42:53 compute-0 cool_bassi[228575]:             "lv_uuid": "SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf",
Oct 11 04:42:53 compute-0 cool_bassi[228575]:             "name": "ceph_lv0",
Oct 11 04:42:53 compute-0 cool_bassi[228575]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:42:53 compute-0 cool_bassi[228575]:             "tags": {
Oct 11 04:42:53 compute-0 cool_bassi[228575]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:42:53 compute-0 cool_bassi[228575]:                 "ceph.block_uuid": "SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf",
Oct 11 04:42:53 compute-0 cool_bassi[228575]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:42:53 compute-0 cool_bassi[228575]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:42:53 compute-0 cool_bassi[228575]:                 "ceph.cluster_name": "ceph",
Oct 11 04:42:53 compute-0 cool_bassi[228575]:                 "ceph.crush_device_class": "",
Oct 11 04:42:53 compute-0 cool_bassi[228575]:                 "ceph.encrypted": "0",
Oct 11 04:42:53 compute-0 cool_bassi[228575]:                 "ceph.osd_fsid": "29ed28f5-c2da-4c6f-bb64-dc7391248f4a",
Oct 11 04:42:53 compute-0 cool_bassi[228575]:                 "ceph.osd_id": "0",
Oct 11 04:42:53 compute-0 cool_bassi[228575]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:42:53 compute-0 cool_bassi[228575]:                 "ceph.type": "block",
Oct 11 04:42:53 compute-0 cool_bassi[228575]:                 "ceph.vdo": "0"
Oct 11 04:42:53 compute-0 cool_bassi[228575]:             },
Oct 11 04:42:53 compute-0 cool_bassi[228575]:             "type": "block",
Oct 11 04:42:53 compute-0 cool_bassi[228575]:             "vg_name": "ceph_vg0"
Oct 11 04:42:53 compute-0 cool_bassi[228575]:         }
Oct 11 04:42:53 compute-0 cool_bassi[228575]:     ],
Oct 11 04:42:53 compute-0 cool_bassi[228575]:     "1": [
Oct 11 04:42:53 compute-0 cool_bassi[228575]:         {
Oct 11 04:42:53 compute-0 cool_bassi[228575]:             "devices": [
Oct 11 04:42:53 compute-0 cool_bassi[228575]:                 "/dev/loop4"
Oct 11 04:42:53 compute-0 cool_bassi[228575]:             ],
Oct 11 04:42:53 compute-0 cool_bassi[228575]:             "lv_name": "ceph_lv1",
Oct 11 04:42:53 compute-0 cool_bassi[228575]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:42:53 compute-0 cool_bassi[228575]:             "lv_size": "21470642176",
Oct 11 04:42:53 compute-0 cool_bassi[228575]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=235849fc-4683-43e5-9b6a-a0d6f8d1cee8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:42:53 compute-0 cool_bassi[228575]:             "lv_uuid": "N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol",
Oct 11 04:42:53 compute-0 cool_bassi[228575]:             "name": "ceph_lv1",
Oct 11 04:42:53 compute-0 cool_bassi[228575]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:42:53 compute-0 cool_bassi[228575]:             "tags": {
Oct 11 04:42:53 compute-0 cool_bassi[228575]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:42:53 compute-0 cool_bassi[228575]:                 "ceph.block_uuid": "N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol",
Oct 11 04:42:53 compute-0 cool_bassi[228575]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:42:53 compute-0 cool_bassi[228575]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:42:53 compute-0 cool_bassi[228575]:                 "ceph.cluster_name": "ceph",
Oct 11 04:42:53 compute-0 cool_bassi[228575]:                 "ceph.crush_device_class": "",
Oct 11 04:42:53 compute-0 cool_bassi[228575]:                 "ceph.encrypted": "0",
Oct 11 04:42:53 compute-0 cool_bassi[228575]:                 "ceph.osd_fsid": "235849fc-4683-43e5-9b6a-a0d6f8d1cee8",
Oct 11 04:42:53 compute-0 cool_bassi[228575]:                 "ceph.osd_id": "1",
Oct 11 04:42:53 compute-0 cool_bassi[228575]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:42:53 compute-0 cool_bassi[228575]:                 "ceph.type": "block",
Oct 11 04:42:53 compute-0 cool_bassi[228575]:                 "ceph.vdo": "0"
Oct 11 04:42:53 compute-0 cool_bassi[228575]:             },
Oct 11 04:42:53 compute-0 cool_bassi[228575]:             "type": "block",
Oct 11 04:42:53 compute-0 cool_bassi[228575]:             "vg_name": "ceph_vg1"
Oct 11 04:42:53 compute-0 cool_bassi[228575]:         }
Oct 11 04:42:53 compute-0 cool_bassi[228575]:     ],
Oct 11 04:42:53 compute-0 cool_bassi[228575]:     "2": [
Oct 11 04:42:53 compute-0 cool_bassi[228575]:         {
Oct 11 04:42:53 compute-0 cool_bassi[228575]:             "devices": [
Oct 11 04:42:53 compute-0 cool_bassi[228575]:                 "/dev/loop5"
Oct 11 04:42:53 compute-0 cool_bassi[228575]:             ],
Oct 11 04:42:53 compute-0 cool_bassi[228575]:             "lv_name": "ceph_lv2",
Oct 11 04:42:53 compute-0 cool_bassi[228575]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:42:53 compute-0 cool_bassi[228575]:             "lv_size": "21470642176",
Oct 11 04:42:53 compute-0 cool_bassi[228575]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=30486846-2cc7-4787-8e36-0fb42ef328c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:42:53 compute-0 cool_bassi[228575]:             "lv_uuid": "HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a",
Oct 11 04:42:53 compute-0 cool_bassi[228575]:             "name": "ceph_lv2",
Oct 11 04:42:53 compute-0 cool_bassi[228575]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:42:53 compute-0 cool_bassi[228575]:             "tags": {
Oct 11 04:42:53 compute-0 cool_bassi[228575]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:42:53 compute-0 cool_bassi[228575]:                 "ceph.block_uuid": "HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a",
Oct 11 04:42:53 compute-0 cool_bassi[228575]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:42:53 compute-0 cool_bassi[228575]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:42:53 compute-0 cool_bassi[228575]:                 "ceph.cluster_name": "ceph",
Oct 11 04:42:53 compute-0 cool_bassi[228575]:                 "ceph.crush_device_class": "",
Oct 11 04:42:53 compute-0 cool_bassi[228575]:                 "ceph.encrypted": "0",
Oct 11 04:42:53 compute-0 cool_bassi[228575]:                 "ceph.osd_fsid": "30486846-2cc7-4787-8e36-0fb42ef328c5",
Oct 11 04:42:53 compute-0 cool_bassi[228575]:                 "ceph.osd_id": "2",
Oct 11 04:42:53 compute-0 cool_bassi[228575]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:42:53 compute-0 cool_bassi[228575]:                 "ceph.type": "block",
Oct 11 04:42:53 compute-0 cool_bassi[228575]:                 "ceph.vdo": "0"
Oct 11 04:42:53 compute-0 cool_bassi[228575]:             },
Oct 11 04:42:53 compute-0 cool_bassi[228575]:             "type": "block",
Oct 11 04:42:53 compute-0 cool_bassi[228575]:             "vg_name": "ceph_vg2"
Oct 11 04:42:53 compute-0 cool_bassi[228575]:         }
Oct 11 04:42:53 compute-0 cool_bassi[228575]:     ]
Oct 11 04:42:53 compute-0 cool_bassi[228575]: }
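The JSON emitted by the cool_bassi container above is a per-OSD LVM inventory in the shape produced by `ceph-volume lvm list --format json`: it is keyed by OSD id and, for each logical volume, records the backing device, the lv_path, and the ceph.* tags stamped on the LV (cluster fsid, osd fsid, encryption flag, and so on). A minimal sketch of reducing such output to an osd_id -> device table follows; the helper names are illustrative and it assumes ceph-volume is invokable directly rather than through a container as in this log.

    import json
    import subprocess

    def lvm_list():
        # Same listing the container produced above, parsed from stdout.
        out = subprocess.run(
            ["ceph-volume", "lvm", "list", "--format", "json"],
            check=True, capture_output=True, text=True,
        ).stdout
        return json.loads(out)

    def summarize(listing):
        # listing is keyed by osd_id ("0", "1", "2"); each value is a list of LVs.
        rows = []
        for osd_id, lvs in sorted(listing.items(), key=lambda kv: int(kv[0])):
            for lv in lvs:
                tags = lv.get("tags", {})
                rows.append({
                    "osd_id": int(osd_id),
                    "lv_path": lv.get("lv_path"),
                    "devices": lv.get("devices", []),
                    "osd_fsid": tags.get("ceph.osd_fsid"),
                    "encrypted": tags.get("ceph.encrypted") == "1",
                })
        return rows

    if __name__ == "__main__":
        for row in summarize(lvm_list()):
            print(row)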
Oct 11 04:42:53 compute-0 systemd[1]: libpod-5d856dafd83f75af40158b35deab215e6fc3800f18246f639ea4c414cacc3d34.scope: Deactivated successfully.
Oct 11 04:42:53 compute-0 podman[228558]: 2025-10-11 04:42:53.682713295 +0000 UTC m=+0.986775122 container died 5d856dafd83f75af40158b35deab215e6fc3800f18246f639ea4c414cacc3d34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_bassi, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 11 04:42:53 compute-0 python3.9[228731]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/iscsid.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:42:53 compute-0 sudo[228729]: pam_unix(sudo:session): session closed for user root
Oct 11 04:42:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-d939af8ce4d869b37983592e61f4647cd946479f302d5fdb5fae056ecace15a6-merged.mount: Deactivated successfully.
Oct 11 04:42:53 compute-0 podman[228558]: 2025-10-11 04:42:53.741641868 +0000 UTC m=+1.045703675 container remove 5d856dafd83f75af40158b35deab215e6fc3800f18246f639ea4c414cacc3d34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 11 04:42:53 compute-0 systemd[1]: libpod-conmon-5d856dafd83f75af40158b35deab215e6fc3800f18246f639ea4c414cacc3d34.scope: Deactivated successfully.
Oct 11 04:42:53 compute-0 sudo[228301]: pam_unix(sudo:session): session closed for user root
Oct 11 04:42:53 compute-0 sudo[228773]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:42:53 compute-0 sudo[228773]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:42:53 compute-0 sudo[228773]: pam_unix(sudo:session): session closed for user root
Oct 11 04:42:53 compute-0 sudo[228821]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:42:53 compute-0 sudo[228821]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:42:53 compute-0 sudo[228821]: pam_unix(sudo:session): session closed for user root
Oct 11 04:42:53 compute-0 sudo[228859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:42:53 compute-0 sudo[228859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:42:53 compute-0 sudo[228859]: pam_unix(sudo:session): session closed for user root
Oct 11 04:42:54 compute-0 sudo[228900]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -- raw list --format json
Oct 11 04:42:54 compute-0 sudo[228900]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:42:54 compute-0 sudo[228970]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-reejlxonrdxttyvemhhxklmqlvebklup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157773.1232374-298-117858808820325/AnsiballZ_copy.py'
Oct 11 04:42:54 compute-0 sudo[228970]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:42:54 compute-0 python3.9[228972]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/iscsid.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760157773.1232374-298-117858808820325/.source.json _original_basename=.6527ro__ follow=False checksum=80e4f97460718c7e5c66b21ef8b846eba0e0dbc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:42:54 compute-0 sudo[228970]: pam_unix(sudo:session): session closed for user root
Oct 11 04:42:54 compute-0 podman[229014]: 2025-10-11 04:42:54.412491167 +0000 UTC m=+0.037439350 container create c4d5c59661bceec740d68c5767b1c4c4191f407a68a9dce0e1168ccae1a5ca9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:42:54 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v614: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:42:54 compute-0 systemd[1]: Started libpod-conmon-c4d5c59661bceec740d68c5767b1c4c4191f407a68a9dce0e1168ccae1a5ca9b.scope.
Oct 11 04:42:54 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:42:54 compute-0 podman[229014]: 2025-10-11 04:42:54.394389878 +0000 UTC m=+0.019338111 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:42:54 compute-0 podman[229014]: 2025-10-11 04:42:54.497798787 +0000 UTC m=+0.122746980 container init c4d5c59661bceec740d68c5767b1c4c4191f407a68a9dce0e1168ccae1a5ca9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_mclaren, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:42:54 compute-0 sshd-session[226488]: Invalid user deploy from 221.159.21.170 port 47212
Oct 11 04:42:54 compute-0 podman[229014]: 2025-10-11 04:42:54.508894378 +0000 UTC m=+0.133842571 container start c4d5c59661bceec740d68c5767b1c4c4191f407a68a9dce0e1168ccae1a5ca9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_mclaren, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True)
Oct 11 04:42:54 compute-0 podman[229014]: 2025-10-11 04:42:54.512983972 +0000 UTC m=+0.137932195 container attach c4d5c59661bceec740d68c5767b1c4c4191f407a68a9dce0e1168ccae1a5ca9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_mclaren, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 11 04:42:54 compute-0 heuristic_mclaren[229055]: 167 167
Oct 11 04:42:54 compute-0 systemd[1]: libpod-c4d5c59661bceec740d68c5767b1c4c4191f407a68a9dce0e1168ccae1a5ca9b.scope: Deactivated successfully.
Oct 11 04:42:54 compute-0 podman[229014]: 2025-10-11 04:42:54.519071266 +0000 UTC m=+0.144019499 container died c4d5c59661bceec740d68c5767b1c4c4191f407a68a9dce0e1168ccae1a5ca9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_mclaren, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:42:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-f189284276046a82af4919e7918288e049a648d82a56617cfadee217a6609262-merged.mount: Deactivated successfully.
Oct 11 04:42:54 compute-0 podman[229014]: 2025-10-11 04:42:54.574647473 +0000 UTC m=+0.199595696 container remove c4d5c59661bceec740d68c5767b1c4c4191f407a68a9dce0e1168ccae1a5ca9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_mclaren, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 11 04:42:54 compute-0 systemd[1]: libpod-conmon-c4d5c59661bceec740d68c5767b1c4c4191f407a68a9dce0e1168ccae1a5ca9b.scope: Deactivated successfully.
Oct 11 04:42:54 compute-0 podman[229130]: 2025-10-11 04:42:54.790308635 +0000 UTC m=+0.053260120 container create c15d525426228f76c91908416ffff717d1a1d6fb2bafbc9ea811220489557551 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_shockley, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:42:54 compute-0 systemd[1]: Started libpod-conmon-c15d525426228f76c91908416ffff717d1a1d6fb2bafbc9ea811220489557551.scope.
Oct 11 04:42:54 compute-0 podman[229130]: 2025-10-11 04:42:54.767522988 +0000 UTC m=+0.030474503 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:42:54 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:42:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce2ed313d876cb9fed6a3e9fe30867e8ac95c246bc478aff4ef4320348fde0e1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:42:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce2ed313d876cb9fed6a3e9fe30867e8ac95c246bc478aff4ef4320348fde0e1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:42:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce2ed313d876cb9fed6a3e9fe30867e8ac95c246bc478aff4ef4320348fde0e1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:42:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce2ed313d876cb9fed6a3e9fe30867e8ac95c246bc478aff4ef4320348fde0e1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:42:54 compute-0 podman[229130]: 2025-10-11 04:42:54.925491189 +0000 UTC m=+0.188442704 container init c15d525426228f76c91908416ffff717d1a1d6fb2bafbc9ea811220489557551 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_shockley, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:42:54 compute-0 podman[229130]: 2025-10-11 04:42:54.937752829 +0000 UTC m=+0.200704344 container start c15d525426228f76c91908416ffff717d1a1d6fb2bafbc9ea811220489557551 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_shockley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:42:54 compute-0 podman[229130]: 2025-10-11 04:42:54.942384047 +0000 UTC m=+0.205335572 container attach c15d525426228f76c91908416ffff717d1a1d6fb2bafbc9ea811220489557551 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_shockley, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 11 04:42:54 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:42:55 compute-0 sudo[229225]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atiomoanxcemquxwdczlruavpeyqknga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157774.6037014-313-151763937249785/AnsiballZ_file.py'
Oct 11 04:42:55 compute-0 sudo[229225]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:42:55 compute-0 python3.9[229227]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/iscsid state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:42:55 compute-0 sudo[229225]: pam_unix(sudo:session): session closed for user root
Oct 11 04:42:55 compute-0 ceph-mon[74243]: pgmap v614: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:42:55 compute-0 sshd-session[226488]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 04:42:55 compute-0 sshd-session[226488]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=221.159.21.170
Oct 11 04:42:55 compute-0 sudo[229397]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvopnyyluzxdbetfhptznrcglaeqdzfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157775.5230114-321-77637941884695/AnsiballZ_stat.py'
Oct 11 04:42:55 compute-0 sudo[229397]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:42:55 compute-0 elated_shockley[229173]: {
Oct 11 04:42:55 compute-0 elated_shockley[229173]:     "235849fc-4683-43e5-9b6a-a0d6f8d1cee8": {
Oct 11 04:42:55 compute-0 elated_shockley[229173]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:42:55 compute-0 elated_shockley[229173]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:42:55 compute-0 elated_shockley[229173]:         "osd_id": 1,
Oct 11 04:42:55 compute-0 elated_shockley[229173]:         "osd_uuid": "235849fc-4683-43e5-9b6a-a0d6f8d1cee8",
Oct 11 04:42:55 compute-0 elated_shockley[229173]:         "type": "bluestore"
Oct 11 04:42:55 compute-0 elated_shockley[229173]:     },
Oct 11 04:42:55 compute-0 elated_shockley[229173]:     "29ed28f5-c2da-4c6f-bb64-dc7391248f4a": {
Oct 11 04:42:55 compute-0 elated_shockley[229173]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:42:55 compute-0 elated_shockley[229173]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:42:55 compute-0 elated_shockley[229173]:         "osd_id": 0,
Oct 11 04:42:55 compute-0 elated_shockley[229173]:         "osd_uuid": "29ed28f5-c2da-4c6f-bb64-dc7391248f4a",
Oct 11 04:42:55 compute-0 elated_shockley[229173]:         "type": "bluestore"
Oct 11 04:42:55 compute-0 elated_shockley[229173]:     },
Oct 11 04:42:55 compute-0 elated_shockley[229173]:     "30486846-2cc7-4787-8e36-0fb42ef328c5": {
Oct 11 04:42:55 compute-0 elated_shockley[229173]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:42:55 compute-0 elated_shockley[229173]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:42:55 compute-0 elated_shockley[229173]:         "osd_id": 2,
Oct 11 04:42:55 compute-0 elated_shockley[229173]:         "osd_uuid": "30486846-2cc7-4787-8e36-0fb42ef328c5",
Oct 11 04:42:55 compute-0 elated_shockley[229173]:         "type": "bluestore"
Oct 11 04:42:55 compute-0 elated_shockley[229173]:     }
Oct 11 04:42:55 compute-0 elated_shockley[229173]: }
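This second JSON blob, printed by the elated_shockley container, is the output of the `ceph-volume ... raw list --format json` invocation visible in the sudo line at 04:42:54; unlike the LVM listing it is keyed by OSD fsid and reports the device-mapper path and the bluestore type. A hedged sketch (illustrative helper names) of cross-checking that the two listings agree on the osd_id -> device mapping:

    def index_lvm_by_fsid(lvm_listing):
        # lvm listing: {"0": [{..., "tags": {"ceph.osd_fsid": ...}}], ...}
        by_fsid = {}
        for osd_id, lvs in lvm_listing.items():
            for lv in lvs:
                fsid = lv["tags"]["ceph.osd_fsid"]
                by_fsid[fsid] = {"osd_id": int(osd_id), "lv_path": lv["lv_path"]}
        return by_fsid

    def cross_check(lvm_listing, raw_listing):
        # raw listing: {"<osd_fsid>": {"osd_id": 1, "device": "/dev/mapper/...",
        #                              "type": "bluestore", ...}, ...}
        lvm = index_lvm_by_fsid(lvm_listing)
        for fsid, raw in raw_listing.items():
            entry = lvm.get(fsid)
            if entry is None:
                print(f"{fsid}: present in raw list only ({raw['device']})")
            elif entry["osd_id"] != raw["osd_id"]:
                print(f"{fsid}: osd_id mismatch lvm={entry['osd_id']} raw={raw['osd_id']}")
            else:
                print(f"osd.{raw['osd_id']}: {entry['lv_path']} / {raw['device']} "
                      f"(bluestore={raw['type'] == 'bluestore'})")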
Oct 11 04:42:56 compute-0 systemd[1]: libpod-c15d525426228f76c91908416ffff717d1a1d6fb2bafbc9ea811220489557551.scope: Deactivated successfully.
Oct 11 04:42:56 compute-0 systemd[1]: libpod-c15d525426228f76c91908416ffff717d1a1d6fb2bafbc9ea811220489557551.scope: Consumed 1.100s CPU time.
Oct 11 04:42:56 compute-0 podman[229130]: 2025-10-11 04:42:56.036087876 +0000 UTC m=+1.299039391 container died c15d525426228f76c91908416ffff717d1a1d6fb2bafbc9ea811220489557551 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_shockley, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Oct 11 04:42:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-ce2ed313d876cb9fed6a3e9fe30867e8ac95c246bc478aff4ef4320348fde0e1-merged.mount: Deactivated successfully.
Oct 11 04:42:56 compute-0 ceph-mgr[74542]: [balancer INFO root] Optimize plan auto_2025-10-11_04:42:56
Oct 11 04:42:56 compute-0 ceph-mgr[74542]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:42:56 compute-0 ceph-mgr[74542]: [balancer INFO root] do_upmap
Oct 11 04:42:56 compute-0 ceph-mgr[74542]: [balancer INFO root] pools ['default.rgw.log', 'images', 'cephfs.cephfs.data', 'backups', 'default.rgw.control', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.meta', 'vms', '.rgw.root', '.mgr']
Oct 11 04:42:56 compute-0 ceph-mgr[74542]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:42:56 compute-0 podman[229130]: 2025-10-11 04:42:56.099110122 +0000 UTC m=+1.362061607 container remove c15d525426228f76c91908416ffff717d1a1d6fb2bafbc9ea811220489557551 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_shockley, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 11 04:42:56 compute-0 systemd[1]: libpod-conmon-c15d525426228f76c91908416ffff717d1a1d6fb2bafbc9ea811220489557551.scope: Deactivated successfully.
Oct 11 04:42:56 compute-0 sudo[228900]: pam_unix(sudo:session): session closed for user root
Oct 11 04:42:56 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:42:56 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:42:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:42:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:42:56 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:42:56 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:42:56 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev 0e520f52-e564-4daf-9d83-f256f26891f3 does not exist
Oct 11 04:42:56 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev c66c6705-714b-4cdd-92f5-3299a948693e does not exist
Oct 11 04:42:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:42:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:42:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:42:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:42:56 compute-0 sudo[229397]: pam_unix(sudo:session): session closed for user root
Oct 11 04:42:56 compute-0 sudo[229420]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:42:56 compute-0 sudo[229420]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:42:56 compute-0 sudo[229420]: pam_unix(sudo:session): session closed for user root
Oct 11 04:42:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:42:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:42:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:42:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:42:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:42:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:42:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:42:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:42:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:42:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:42:56 compute-0 sudo[229468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 04:42:56 compute-0 sudo[229468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:42:56 compute-0 sudo[229468]: pam_unix(sudo:session): session closed for user root
Oct 11 04:42:56 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v615: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:42:56 compute-0 sudo[229590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-enkmlabwtepmuqfaqkbhavymvshenmbt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157775.5230114-321-77637941884695/AnsiballZ_copy.py'
Oct 11 04:42:56 compute-0 sudo[229590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:42:56 compute-0 sudo[229590]: pam_unix(sudo:session): session closed for user root
Oct 11 04:42:57 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:42:57 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:42:57 compute-0 ceph-mon[74243]: pgmap v615: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:42:57 compute-0 sshd-session[226488]: Failed password for invalid user deploy from 221.159.21.170 port 47212 ssh2
Oct 11 04:42:57 compute-0 sudo[229742]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzraulyrsvvmdsripldjjfjxybrdfwwd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157777.1990693-338-278314193068773/AnsiballZ_container_config_data.py'
Oct 11 04:42:57 compute-0 sudo[229742]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:42:57 compute-0 python3.9[229744]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/iscsid config_pattern=*.json debug=False
Oct 11 04:42:57 compute-0 sudo[229742]: pam_unix(sudo:session): session closed for user root
Oct 11 04:42:58 compute-0 sshd-session[226488]: Connection closed by invalid user deploy 221.159.21.170 port 47212 [preauth]
Oct 11 04:42:58 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v616: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:42:58 compute-0 sudo[229896]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fteyvgwherpwnpuzqjfohftetmmiwink ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157778.2443216-347-277349677196884/AnsiballZ_container_config_hash.py'
Oct 11 04:42:58 compute-0 sudo[229896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:42:58 compute-0 python3.9[229898]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 11 04:42:59 compute-0 sudo[229896]: pam_unix(sudo:session): session closed for user root
Oct 11 04:42:59 compute-0 ceph-mon[74243]: pgmap v616: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:42:59 compute-0 sudo[230048]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hucfyztfkbbnsjnkzgnbzitrmtyhbvdg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157779.3217733-356-68315819159402/AnsiballZ_podman_container_info.py'
Oct 11 04:42:59 compute-0 sudo[230048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:42:59 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:43:00 compute-0 python3.9[230050]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 11 04:43:00 compute-0 sudo[230048]: pam_unix(sudo:session): session closed for user root
Oct 11 04:43:00 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v617: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:43:01 compute-0 unix_chkpwd[230153]: password check failed for user (root)
Oct 11 04:43:01 compute-0 sshd-session[229834]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=221.159.21.170  user=root
Oct 11 04:43:01 compute-0 ceph-mon[74243]: pgmap v617: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:43:01 compute-0 sudo[230227]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-peaswgjeuukdsrwaoznkcaltksnvwnmz ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1760157781.0883062-369-16836389164909/AnsiballZ_edpm_container_manage.py'
Oct 11 04:43:01 compute-0 sudo[230227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:43:02 compute-0 python3[230229]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/iscsid config_id=iscsid config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 11 04:43:02 compute-0 podman[230264]: 2025-10-11 04:43:02.360435424 +0000 UTC m=+0.076755454 container create 4f5a84ad32cb899a70b226fbd9c8f419e9440a5ac99b188805a8bf8e9253a2d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.schema-version=1.0, container_name=iscsid, org.label-schema.vendor=CentOS)
Oct 11 04:43:02 compute-0 podman[230264]: 2025-10-11 04:43:02.32314539 +0000 UTC m=+0.039465490 image pull 5773abc4300b61c01f3353a0b9239f9a404bb272790b280574e4c56f72edaa72 quay.io/podified-antelope-centos9/openstack-iscsid:current-podified
Oct 11 04:43:02 compute-0 python3[230229]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name iscsid --conmon-pidfile /run/iscsid.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=iscsid --label container_name=iscsid --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run:/run --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:z --volume /etc/target:/etc/target:z --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /var/lib/openstack/healthchecks/iscsid:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-iscsid:current-podified
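The PODMAN-CONTAINER-DEBUG line shows how edpm_container_manage expands the iscsid startup-config JSON (environment, healthcheck, net, privileged, restart, volumes) into the `podman create` command that follows. A rough sketch of that mapping, assuming a config dict with the shape logged above; this is not the module's actual code, only the translation it implies (volume list abridged):

    def podman_create_args(name, cfg):
        # cfg mirrors the config_data dict from the log entry above.
        args = ["podman", "create", "--name", name]
        for key, value in cfg.get("environment", {}).items():
            args += ["--env", f"{key}={value}"]
        if "healthcheck" in cfg:
            args += ["--healthcheck-command", cfg["healthcheck"]["test"]]
        if cfg.get("net"):
            args += ["--network", cfg["net"]]
        if cfg.get("privileged"):
            args += ["--privileged=True"]
        for volume in cfg.get("volumes", []):
            args += ["--volume", volume]
        args.append(cfg["image"])
        return args

    iscsid_cfg = {
        "environment": {"KOLLA_CONFIG_STRATEGY": "COPY_ALWAYS"},
        "healthcheck": {"test": "/openstack/healthcheck"},
        "image": "quay.io/podified-antelope-centos9/openstack-iscsid:current-podified",
        "net": "host",
        "privileged": True,
        "volumes": ["/etc/hosts:/etc/hosts:ro", "/dev:/dev", "/run:/run"],  # abridged
    }
    print(" ".join(podman_create_args("iscsid", iscsid_cfg)))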
Oct 11 04:43:02 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v618: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:43:02 compute-0 sudo[230227]: pam_unix(sudo:session): session closed for user root
Oct 11 04:43:03 compute-0 sudo[230452]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-coughwlgigjjdlrxthnlxmsahgehheag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157782.7892473-377-102691116840833/AnsiballZ_stat.py'
Oct 11 04:43:03 compute-0 sudo[230452]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:43:03 compute-0 python3.9[230454]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:43:03 compute-0 sshd-session[229834]: Failed password for root from 221.159.21.170 port 49974 ssh2
Oct 11 04:43:03 compute-0 sudo[230452]: pam_unix(sudo:session): session closed for user root
Oct 11 04:43:03 compute-0 ceph-mon[74243]: pgmap v618: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:43:03 compute-0 sshd-session[229834]: Connection closed by authenticating user root 221.159.21.170 port 49974 [preauth]
Oct 11 04:43:04 compute-0 sudo[230606]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qyfjhzpqfqokdkwzqldwqjiilroawrhq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157783.742211-386-156748947856591/AnsiballZ_file.py'
Oct 11 04:43:04 compute-0 sudo[230606]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:43:04 compute-0 python3.9[230609]: ansible-file Invoked with path=/etc/systemd/system/edpm_iscsid.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:43:04 compute-0 sudo[230606]: pam_unix(sudo:session): session closed for user root
Oct 11 04:43:04 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v619: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:43:04 compute-0 sudo[230684]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vowfvzbpxfdjeikgnbdeufqjxrolqrso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157783.742211-386-156748947856591/AnsiballZ_stat.py'
Oct 11 04:43:04 compute-0 sudo[230684]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:43:04 compute-0 python3.9[230686]: ansible-stat Invoked with path=/etc/systemd/system/edpm_iscsid_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:43:04 compute-0 sudo[230684]: pam_unix(sudo:session): session closed for user root
Oct 11 04:43:04 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:43:04 compute-0 ceph-mon[74243]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Oct 11 04:43:04 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:43:04.967974) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 04:43:04 compute-0 ceph-mon[74243]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Oct 11 04:43:04 compute-0 ceph-mon[74243]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760157784968032, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 1737, "num_deletes": 250, "total_data_size": 2919222, "memory_usage": 2952088, "flush_reason": "Manual Compaction"}
Oct 11 04:43:04 compute-0 ceph-mon[74243]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Oct 11 04:43:04 compute-0 ceph-mon[74243]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760157784975641, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 1649684, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 11702, "largest_seqno": 13438, "table_properties": {"data_size": 1643965, "index_size": 2858, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14277, "raw_average_key_size": 20, "raw_value_size": 1631370, "raw_average_value_size": 2297, "num_data_blocks": 132, "num_entries": 710, "num_filter_entries": 710, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760157588, "oldest_key_time": 1760157588, "file_creation_time": 1760157784, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7a520806-b9ee-4391-a2e1-17ca2b78e946", "db_session_id": "LQX577T3ABHDCRGGY4EM", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:43:04 compute-0 ceph-mon[74243]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 7700 microseconds, and 4040 cpu microseconds.
Oct 11 04:43:04 compute-0 ceph-mon[74243]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 04:43:04 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:43:04.975685) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 1649684 bytes OK
Oct 11 04:43:04 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:43:04.975702) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Oct 11 04:43:04 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:43:04.977102) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Oct 11 04:43:04 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:43:04.977114) EVENT_LOG_v1 {"time_micros": 1760157784977110, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 04:43:04 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:43:04.977128) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 04:43:04 compute-0 ceph-mon[74243]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 2911829, prev total WAL file size 2911829, number of live WAL files 2.
Oct 11 04:43:04 compute-0 ceph-mon[74243]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:43:04 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:43:04.977820) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323530' seq:72057594037927935, type:22 .. '6D67727374617400353031' seq:0, type:0; will stop at (end)
Oct 11 04:43:04 compute-0 ceph-mon[74243]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 04:43:04 compute-0 ceph-mon[74243]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(1611KB)], [29(7789KB)]
Oct 11 04:43:04 compute-0 ceph-mon[74243]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760157784977847, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 9626334, "oldest_snapshot_seqno": -1}
Oct 11 04:43:05 compute-0 ceph-mon[74243]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 3968 keys, 7549174 bytes, temperature: kUnknown
Oct 11 04:43:05 compute-0 ceph-mon[74243]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760157785020490, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 7549174, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7520871, "index_size": 17313, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9925, "raw_key_size": 94450, "raw_average_key_size": 23, "raw_value_size": 7447525, "raw_average_value_size": 1876, "num_data_blocks": 755, "num_entries": 3968, "num_filter_entries": 3968, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760156706, "oldest_key_time": 0, "file_creation_time": 1760157784, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7a520806-b9ee-4391-a2e1-17ca2b78e946", "db_session_id": "LQX577T3ABHDCRGGY4EM", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:43:05 compute-0 ceph-mon[74243]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 04:43:05 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:43:05.021406) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 7549174 bytes
Oct 11 04:43:05 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:43:05.022996) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 225.3 rd, 176.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 7.6 +0.0 blob) out(7.2 +0.0 blob), read-write-amplify(10.4) write-amplify(4.6) OK, records in: 4387, records dropped: 419 output_compression: NoCompression
Oct 11 04:43:05 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:43:05.023032) EVENT_LOG_v1 {"time_micros": 1760157785023015, "job": 12, "event": "compaction_finished", "compaction_time_micros": 42732, "compaction_time_cpu_micros": 16187, "output_level": 6, "num_output_files": 1, "total_output_size": 7549174, "num_input_records": 4387, "num_output_records": 3968, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 04:43:05 compute-0 ceph-mon[74243]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:43:05 compute-0 ceph-mon[74243]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760157785023850, "job": 12, "event": "table_file_deletion", "file_number": 31}
Oct 11 04:43:05 compute-0 ceph-mon[74243]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:43:05 compute-0 ceph-mon[74243]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760157785027085, "job": 12, "event": "table_file_deletion", "file_number": 29}
Oct 11 04:43:05 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:43:04.977764) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:43:05 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:43:05.027171) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:43:05 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:43:05.027178) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:43:05 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:43:05.027181) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:43:05 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:43:05.027184) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:43:05 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:43:05.027187) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
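The compaction summary at 04:43:05 reports read-write-amplify(10.4) and write-amplify(4.6); both factors follow directly from the byte counts in the surrounding EVENT_LOG_v1 records (flushed L0 table 000031.sst = 1,649,684 bytes, job 12 input_data_size = 9,626,334 bytes, compacted L6 table 000032.sst = 7,549,174 bytes). A quick check of that arithmetic:

    l0_input = 1_649_684        # flushed L0 table 000031.sst
    total_input = 9_626_334     # L0 + L6 input to compaction job 12
    output = 7_549_174          # compacted L6 table 000032.sst

    write_amplify = output / l0_input                        # ~4.58 -> logged as 4.6
    read_write_amplify = (total_input + output) / l0_input   # ~10.41 -> logged as 10.4
    print(round(write_amplify, 1), round(read_write_amplify, 1))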
Oct 11 04:43:05 compute-0 ceph-mon[74243]: pgmap v619: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:43:05 compute-0 sudo[230835]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwxhwewtlpnexcqegqdjucvoglggoqui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157784.985495-386-175083206133645/AnsiballZ_copy.py'
Oct 11 04:43:05 compute-0 sudo[230835]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:43:05 compute-0 python3.9[230837]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760157784.985495-386-175083206133645/source dest=/etc/systemd/system/edpm_iscsid.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:43:05 compute-0 sudo[230835]: pam_unix(sudo:session): session closed for user root
Oct 11 04:43:06 compute-0 sudo[230911]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhmkhokcfvuortdgkabtwnaoggzuuqvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157784.985495-386-175083206133645/AnsiballZ_systemd.py'
Oct 11 04:43:06 compute-0 sudo[230911]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:43:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:43:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:43:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:43:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:43:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:43:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:43:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:43:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:43:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:43:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:43:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:43:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:43:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 11 04:43:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:43:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:43:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:43:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:43:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:43:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:43:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:43:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:43:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:43:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 04:43:06 compute-0 python3.9[230913]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 11 04:43:06 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v620: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:43:06 compute-0 systemd[1]: Reloading.
Oct 11 04:43:06 compute-0 systemd-sysv-generator[230942]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:43:06 compute-0 systemd-rc-local-generator[230939]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:43:06 compute-0 sudo[230911]: pam_unix(sudo:session): session closed for user root
Oct 11 04:43:07 compute-0 sudo[231022]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gixtwbipktybvbyaqrsxgcitusrpvlsx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157784.985495-386-175083206133645/AnsiballZ_systemd.py'
Oct 11 04:43:07 compute-0 sudo[231022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:43:07 compute-0 ceph-mon[74243]: pgmap v620: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:43:07 compute-0 python3.9[231024]: ansible-systemd Invoked with state=restarted name=edpm_iscsid.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:43:07 compute-0 systemd[1]: Reloading.
Oct 11 04:43:07 compute-0 systemd-rc-local-generator[231052]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:43:07 compute-0 systemd-sysv-generator[231056]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:43:07 compute-0 sshd-session[230607]: Invalid user minecraft from 221.159.21.170 port 50952
Oct 11 04:43:08 compute-0 systemd[1]: Starting iscsid container...
Oct 11 04:43:08 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:43:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dd13b17fa4dbb28d52d76b0b0c1903f19211f63afac21ffde5712559f7b9097/merged/etc/target supports timestamps until 2038 (0x7fffffff)
Oct 11 04:43:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dd13b17fa4dbb28d52d76b0b0c1903f19211f63afac21ffde5712559f7b9097/merged/etc/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 11 04:43:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dd13b17fa4dbb28d52d76b0b0c1903f19211f63afac21ffde5712559f7b9097/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 11 04:43:08 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 4f5a84ad32cb899a70b226fbd9c8f419e9440a5ac99b188805a8bf8e9253a2d0.
Oct 11 04:43:08 compute-0 podman[231063]: 2025-10-11 04:43:08.170815777 +0000 UTC m=+0.144993443 container init 4f5a84ad32cb899a70b226fbd9c8f419e9440a5ac99b188805a8bf8e9253a2d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 04:43:08 compute-0 iscsid[231079]: + sudo -E kolla_set_configs
Oct 11 04:43:08 compute-0 podman[231063]: 2025-10-11 04:43:08.200789606 +0000 UTC m=+0.174967262 container start 4f5a84ad32cb899a70b226fbd9c8f419e9440a5ac99b188805a8bf8e9253a2d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 11 04:43:08 compute-0 sudo[231085]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 11 04:43:08 compute-0 podman[231063]: iscsid
Oct 11 04:43:08 compute-0 systemd[1]: Started iscsid container.
Oct 11 04:43:08 compute-0 systemd[1]: Created slice User Slice of UID 0.
Oct 11 04:43:08 compute-0 systemd[1]: Starting User Runtime Directory /run/user/0...
Oct 11 04:43:08 compute-0 systemd[1]: Finished User Runtime Directory /run/user/0.
Oct 11 04:43:08 compute-0 sudo[231022]: pam_unix(sudo:session): session closed for user root
Oct 11 04:43:08 compute-0 systemd[1]: Starting User Manager for UID 0...
Oct 11 04:43:08 compute-0 systemd[231098]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Oct 11 04:43:08 compute-0 podman[231086]: 2025-10-11 04:43:08.322233842 +0000 UTC m=+0.101959073 container health_status 4f5a84ad32cb899a70b226fbd9c8f419e9440a5ac99b188805a8bf8e9253a2d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251009, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:43:08 compute-0 systemd[1]: 4f5a84ad32cb899a70b226fbd9c8f419e9440a5ac99b188805a8bf8e9253a2d0-efbba597114e7b7.service: Main process exited, code=exited, status=1/FAILURE
Oct 11 04:43:08 compute-0 systemd[1]: 4f5a84ad32cb899a70b226fbd9c8f419e9440a5ac99b188805a8bf8e9253a2d0-efbba597114e7b7.service: Failed with result 'exit-code'.
Oct 11 04:43:08 compute-0 sshd-session[230607]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 04:43:08 compute-0 sshd-session[230607]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=221.159.21.170
Oct 11 04:43:08 compute-0 systemd[231098]: Queued start job for default target Main User Target.
Oct 11 04:43:08 compute-0 systemd[231098]: Created slice User Application Slice.
Oct 11 04:43:08 compute-0 systemd[231098]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Oct 11 04:43:08 compute-0 systemd[231098]: Started Daily Cleanup of User's Temporary Directories.
Oct 11 04:43:08 compute-0 systemd[231098]: Reached target Paths.
Oct 11 04:43:08 compute-0 systemd[231098]: Reached target Timers.
Oct 11 04:43:08 compute-0 systemd[231098]: Starting D-Bus User Message Bus Socket...
Oct 11 04:43:08 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v621: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:43:08 compute-0 systemd[231098]: Starting Create User's Volatile Files and Directories...
Oct 11 04:43:08 compute-0 systemd[231098]: Finished Create User's Volatile Files and Directories.
Oct 11 04:43:08 compute-0 systemd[231098]: Listening on D-Bus User Message Bus Socket.
Oct 11 04:43:08 compute-0 systemd[231098]: Reached target Sockets.
Oct 11 04:43:08 compute-0 systemd[231098]: Reached target Basic System.
Oct 11 04:43:08 compute-0 systemd[231098]: Reached target Main User Target.
Oct 11 04:43:08 compute-0 systemd[231098]: Startup finished in 156ms.
Oct 11 04:43:08 compute-0 systemd[1]: Started User Manager for UID 0.
Oct 11 04:43:08 compute-0 systemd[1]: Started Session c3 of User root.
Oct 11 04:43:08 compute-0 sudo[231085]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 11 04:43:08 compute-0 iscsid[231079]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 11 04:43:08 compute-0 iscsid[231079]: INFO:__main__:Validating config file
Oct 11 04:43:08 compute-0 iscsid[231079]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 11 04:43:08 compute-0 iscsid[231079]: INFO:__main__:Writing out command to execute
Oct 11 04:43:08 compute-0 sudo[231085]: pam_unix(sudo:session): session closed for user root
Oct 11 04:43:08 compute-0 systemd[1]: session-c3.scope: Deactivated successfully.
Oct 11 04:43:08 compute-0 iscsid[231079]: ++ cat /run_command
Oct 11 04:43:08 compute-0 iscsid[231079]: + CMD='/usr/sbin/iscsid -f'
Oct 11 04:43:08 compute-0 iscsid[231079]: + ARGS=
Oct 11 04:43:08 compute-0 iscsid[231079]: + sudo kolla_copy_cacerts
Oct 11 04:43:08 compute-0 sudo[231174]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Oct 11 04:43:08 compute-0 systemd[1]: Started Session c4 of User root.
Oct 11 04:43:08 compute-0 sudo[231174]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 11 04:43:08 compute-0 sudo[231174]: pam_unix(sudo:session): session closed for user root
Oct 11 04:43:08 compute-0 iscsid[231079]: + [[ ! -n '' ]]
Oct 11 04:43:08 compute-0 iscsid[231079]: + . kolla_extend_start
Oct 11 04:43:08 compute-0 iscsid[231079]: ++ [[ ! -f /etc/iscsi/initiatorname.iscsi ]]
Oct 11 04:43:08 compute-0 iscsid[231079]: + echo 'Running command: '\''/usr/sbin/iscsid -f'\'''
Oct 11 04:43:08 compute-0 iscsid[231079]: Running command: '/usr/sbin/iscsid -f'
Oct 11 04:43:08 compute-0 iscsid[231079]: + umask 0022
Oct 11 04:43:08 compute-0 iscsid[231079]: + exec /usr/sbin/iscsid -f
Oct 11 04:43:08 compute-0 systemd[1]: session-c4.scope: Deactivated successfully.
Oct 11 04:43:08 compute-0 kernel: Loading iSCSI transport class v2.0-870.
Oct 11 04:43:08 compute-0 podman[231176]: 2025-10-11 04:43:08.724054077 +0000 UTC m=+0.140533220 container health_status 086abfbe8dbe9e4742c5d0e8540a7653c1c0a5c00ecda3b6e948040a718938cb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct 11 04:43:09 compute-0 python3.9[231312]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.iscsid_restart_required follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:43:09 compute-0 ceph-mon[74243]: pgmap v621: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:43:09 compute-0 sudo[231462]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxwackcdmhzbxkwwmroerebpfxkgkxmg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157789.319769-423-244108901103529/AnsiballZ_file.py'
Oct 11 04:43:09 compute-0 sudo[231462]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:43:09 compute-0 python3.9[231464]: ansible-ansible.builtin.file Invoked with path=/etc/iscsi/.iscsid_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:43:09 compute-0 sudo[231462]: pam_unix(sudo:session): session closed for user root
Oct 11 04:43:09 compute-0 sshd-session[230607]: Failed password for invalid user minecraft from 221.159.21.170 port 50952 ssh2
Oct 11 04:43:09 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:43:10 compute-0 sshd-session[230607]: Connection closed by invalid user minecraft 221.159.21.170 port 50952 [preauth]
Oct 11 04:43:10 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v622: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:43:10 compute-0 sudo[231616]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhhihwisogyistwgztqnzakrgynsnfpr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157790.472223-434-118085978149158/AnsiballZ_service_facts.py'
Oct 11 04:43:10 compute-0 sudo[231616]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:43:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:43:11.005 161813 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:43:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:43:11.007 161813 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:43:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:43:11.007 161813 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:43:11 compute-0 python3.9[231618]: ansible-ansible.builtin.service_facts Invoked
Oct 11 04:43:11 compute-0 network[231635]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 11 04:43:11 compute-0 network[231636]: 'network-scripts' will be removed from distribution in near future.
Oct 11 04:43:11 compute-0 network[231637]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 11 04:43:11 compute-0 ceph-mon[74243]: pgmap v622: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:43:12 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v623: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:43:13 compute-0 ceph-mon[74243]: pgmap v623: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:43:14 compute-0 podman[231688]: 2025-10-11 04:43:14.437830422 +0000 UTC m=+0.086275376 container health_status 7d738271425699243ade5f883e5c9759d51b96fd0ce562173990a66d2fbaffdf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 11 04:43:14 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v624: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:43:14 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:43:15 compute-0 ceph-mon[74243]: pgmap v624: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:43:15 compute-0 unix_chkpwd[231737]: password check failed for user (root)
Oct 11 04:43:15 compute-0 sshd-session[231541]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=221.159.21.170  user=root
Oct 11 04:43:16 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v625: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:43:17 compute-0 ceph-mon[74243]: pgmap v625: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:43:17 compute-0 sshd-session[231541]: Failed password for root from 221.159.21.170 port 52160 ssh2
Oct 11 04:43:18 compute-0 sudo[231616]: pam_unix(sudo:session): session closed for user root
Oct 11 04:43:18 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v626: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:43:18 compute-0 sshd-session[231541]: Connection closed by authenticating user root 221.159.21.170 port 52160 [preauth]
Oct 11 04:43:18 compute-0 systemd[1]: Stopping User Manager for UID 0...
Oct 11 04:43:18 compute-0 systemd[231098]: Activating special unit Exit the Session...
Oct 11 04:43:18 compute-0 systemd[231098]: Stopped target Main User Target.
Oct 11 04:43:18 compute-0 systemd[231098]: Stopped target Basic System.
Oct 11 04:43:18 compute-0 systemd[231098]: Stopped target Paths.
Oct 11 04:43:18 compute-0 systemd[231098]: Stopped target Sockets.
Oct 11 04:43:18 compute-0 systemd[231098]: Stopped target Timers.
Oct 11 04:43:18 compute-0 systemd[231098]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 11 04:43:18 compute-0 systemd[231098]: Closed D-Bus User Message Bus Socket.
Oct 11 04:43:18 compute-0 systemd[231098]: Stopped Create User's Volatile Files and Directories.
Oct 11 04:43:18 compute-0 systemd[231098]: Removed slice User Application Slice.
Oct 11 04:43:18 compute-0 systemd[231098]: Reached target Shutdown.
Oct 11 04:43:18 compute-0 systemd[231098]: Finished Exit the Session.
Oct 11 04:43:18 compute-0 systemd[231098]: Reached target Exit the Session.
Oct 11 04:43:18 compute-0 systemd[1]: user@0.service: Deactivated successfully.
Oct 11 04:43:18 compute-0 systemd[1]: Stopped User Manager for UID 0.
Oct 11 04:43:18 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/0...
Oct 11 04:43:18 compute-0 systemd[1]: run-user-0.mount: Deactivated successfully.
Oct 11 04:43:18 compute-0 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Oct 11 04:43:18 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/0.
Oct 11 04:43:18 compute-0 systemd[1]: Removed slice User Slice of UID 0.
Oct 11 04:43:18 compute-0 sudo[231934]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-esxailatufekmdykweygbocrqbrtishz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157798.5529559-444-220245556694752/AnsiballZ_file.py'
Oct 11 04:43:18 compute-0 sudo[231934]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:43:19 compute-0 python3.9[231936]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 11 04:43:19 compute-0 sudo[231934]: pam_unix(sudo:session): session closed for user root
Oct 11 04:43:19 compute-0 ceph-mon[74243]: pgmap v626: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:43:19 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:43:20 compute-0 sudo[232086]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mljyfjsryrgjanggpspalfqsstvstwml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157799.482617-452-57215985823698/AnsiballZ_modprobe.py'
Oct 11 04:43:20 compute-0 sudo[232086]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:43:20 compute-0 python3.9[232088]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Oct 11 04:43:20 compute-0 sudo[232086]: pam_unix(sudo:session): session closed for user root
Oct 11 04:43:20 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v627: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:43:20 compute-0 sudo[232242]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhrgeowtoxwwygpczhcfkooehatgextf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157800.537435-460-190656139988418/AnsiballZ_stat.py'
Oct 11 04:43:20 compute-0 sudo[232242]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:43:21 compute-0 python3.9[232244]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:43:21 compute-0 sudo[232242]: pam_unix(sudo:session): session closed for user root
Oct 11 04:43:21 compute-0 sshd-session[231870]: Invalid user devuser from 221.159.21.170 port 53620
Oct 11 04:43:21 compute-0 ceph-mon[74243]: pgmap v627: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:43:21 compute-0 sudo[232365]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrhxqxwfcviygtbjwowsmduawamynxcx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157800.537435-460-190656139988418/AnsiballZ_copy.py'
Oct 11 04:43:21 compute-0 sudo[232365]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:43:21 compute-0 sshd-session[231870]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 04:43:21 compute-0 sshd-session[231870]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=221.159.21.170
Oct 11 04:43:21 compute-0 python3.9[232367]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760157800.537435-460-190656139988418/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:43:21 compute-0 sudo[232365]: pam_unix(sudo:session): session closed for user root
Oct 11 04:43:22 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v628: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:43:22 compute-0 sudo[232517]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-notuerqwmnujtbzrigfgtkkpimsozhtg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157802.1436307-476-234071893438663/AnsiballZ_lineinfile.py'
Oct 11 04:43:22 compute-0 sudo[232517]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:43:22 compute-0 python3.9[232519]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:43:22 compute-0 sudo[232517]: pam_unix(sudo:session): session closed for user root
Oct 11 04:43:23 compute-0 sudo[232669]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tntusjecgypzhpprwhwbdpalowuvaxoy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157802.910438-484-105301287360591/AnsiballZ_systemd.py'
Oct 11 04:43:23 compute-0 sudo[232669]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:43:23 compute-0 sshd-session[231870]: Failed password for invalid user devuser from 221.159.21.170 port 53620 ssh2
Oct 11 04:43:23 compute-0 ceph-mon[74243]: pgmap v628: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:43:23 compute-0 python3.9[232671]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 11 04:43:23 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 11 04:43:23 compute-0 systemd[1]: Stopped Load Kernel Modules.
Oct 11 04:43:23 compute-0 systemd[1]: Stopping Load Kernel Modules...
Oct 11 04:43:23 compute-0 systemd[1]: Starting Load Kernel Modules...
Oct 11 04:43:23 compute-0 systemd[1]: Finished Load Kernel Modules.
Oct 11 04:43:23 compute-0 sudo[232669]: pam_unix(sudo:session): session closed for user root
Oct 11 04:43:24 compute-0 sudo[232825]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wsnrbppwsfnkcokbwgfrnajafulfiurj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157803.9529703-492-132901693104010/AnsiballZ_file.py'
Oct 11 04:43:24 compute-0 sudo[232825]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:43:24 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v629: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:43:24 compute-0 python3.9[232827]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:43:24 compute-0 sudo[232825]: pam_unix(sudo:session): session closed for user root
Oct 11 04:43:24 compute-0 sshd-session[231870]: Connection closed by invalid user devuser 221.159.21.170 port 53620 [preauth]
Oct 11 04:43:24 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:43:25 compute-0 sudo[232979]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlkcrzbdyqoeltlvisazzyenrxbvwhzl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157804.831883-501-215035263939232/AnsiballZ_stat.py'
Oct 11 04:43:25 compute-0 sudo[232979]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:43:25 compute-0 python3.9[232981]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:43:25 compute-0 sudo[232979]: pam_unix(sudo:session): session closed for user root
Oct 11 04:43:25 compute-0 systemd[1]: virtnodedevd.service: Deactivated successfully.
Oct 11 04:43:25 compute-0 ceph-mon[74243]: pgmap v629: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:43:26 compute-0 sudo[233132]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcrrvalhiqpleiugmjjfoonvsvjxcuin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157805.6739748-510-112471311041670/AnsiballZ_stat.py'
Oct 11 04:43:26 compute-0 sudo[233132]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:43:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:43:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:43:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:43:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:43:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:43:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:43:26 compute-0 python3.9[233134]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:43:26 compute-0 sudo[233132]: pam_unix(sudo:session): session closed for user root
Oct 11 04:43:26 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v630: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:43:26 compute-0 ceph-mon[74243]: pgmap v630: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:43:26 compute-0 sshd-session[232858]: Invalid user odoo from 221.159.21.170 port 54674
Oct 11 04:43:26 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Oct 11 04:43:26 compute-0 sudo[233285]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygfvepxyoytpvxedhqnsaokzcuwypxkl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157806.4587777-518-22018260382280/AnsiballZ_stat.py'
Oct 11 04:43:26 compute-0 sudo[233285]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:43:27 compute-0 python3.9[233287]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:43:27 compute-0 sudo[233285]: pam_unix(sudo:session): session closed for user root
Oct 11 04:43:27 compute-0 sshd-session[232858]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 04:43:27 compute-0 sshd-session[232858]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=221.159.21.170
Oct 11 04:43:27 compute-0 sudo[233408]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfszapuysymrkjuydtqgozwwrfmojnut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157806.4587777-518-22018260382280/AnsiballZ_copy.py'
Oct 11 04:43:27 compute-0 sudo[233408]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:43:27 compute-0 python3.9[233410]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760157806.4587777-518-22018260382280/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:43:27 compute-0 sudo[233408]: pam_unix(sudo:session): session closed for user root
Oct 11 04:43:28 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v631: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:43:28 compute-0 sudo[233560]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmqhfaumuxzytylowmirgcxdxttidcct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157808.0363076-533-38575405480554/AnsiballZ_command.py'
Oct 11 04:43:28 compute-0 sudo[233560]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:43:28 compute-0 python3.9[233562]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:43:28 compute-0 sudo[233560]: pam_unix(sudo:session): session closed for user root
Oct 11 04:43:29 compute-0 sshd-session[232858]: Failed password for invalid user odoo from 221.159.21.170 port 54674 ssh2
Oct 11 04:43:29 compute-0 sudo[233713]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkcwafadahesobmpmyzkdsliuxoqfqqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157809.0244927-541-50314812065322/AnsiballZ_lineinfile.py'
Oct 11 04:43:29 compute-0 sudo[233713]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:43:29 compute-0 ceph-mon[74243]: pgmap v631: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:43:29 compute-0 python3.9[233715]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:43:29 compute-0 sudo[233713]: pam_unix(sudo:session): session closed for user root
Oct 11 04:43:29 compute-0 sshd-session[232858]: Connection closed by invalid user odoo 221.159.21.170 port 54674 [preauth]
Oct 11 04:43:29 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:43:30 compute-0 sudo[233867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnulgdcpiblbxhqtuwjzljniofkqneku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157809.862294-549-92190122872108/AnsiballZ_replace.py'
Oct 11 04:43:30 compute-0 sudo[233867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:43:30 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v632: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:43:30 compute-0 python3.9[233869]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:43:30 compute-0 sudo[233867]: pam_unix(sudo:session): session closed for user root
Oct 11 04:43:31 compute-0 sudo[234019]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrxogpcmalkxgnfkzizpgrfygcsmktmm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157810.7962375-557-144539685039446/AnsiballZ_replace.py'
Oct 11 04:43:31 compute-0 sudo[234019]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:43:31 compute-0 python3.9[234021]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:43:31 compute-0 sudo[234019]: pam_unix(sudo:session): session closed for user root
Oct 11 04:43:31 compute-0 ceph-mon[74243]: pgmap v632: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:43:32 compute-0 sudo[234171]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekitfcxtrmkpnrravwhxpadnmfvnhokm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157811.6411734-566-65827594476842/AnsiballZ_lineinfile.py'
Oct 11 04:43:32 compute-0 sudo[234171]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:43:32 compute-0 python3.9[234173]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:43:32 compute-0 sudo[234171]: pam_unix(sudo:session): session closed for user root
Oct 11 04:43:32 compute-0 unix_chkpwd[234198]: password check failed for user (root)
Oct 11 04:43:32 compute-0 sshd-session[233792]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=221.159.21.170  user=root
Oct 11 04:43:32 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v633: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:43:32 compute-0 sudo[234324]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpligmovwnwiymfkjerupbtmgkslvsed ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157812.5119898-566-15093542616358/AnsiballZ_lineinfile.py'
Oct 11 04:43:32 compute-0 sudo[234324]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:43:33 compute-0 python3.9[234326]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:43:33 compute-0 sudo[234324]: pam_unix(sudo:session): session closed for user root
Oct 11 04:43:33 compute-0 ceph-mon[74243]: pgmap v633: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:43:33 compute-0 sudo[234476]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atjdbrowrxvhhamouopgxiexbooihdec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157813.2864907-566-271257329893038/AnsiballZ_lineinfile.py'
Oct 11 04:43:33 compute-0 sudo[234476]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:43:33 compute-0 python3.9[234478]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:43:33 compute-0 sudo[234476]: pam_unix(sudo:session): session closed for user root
Oct 11 04:43:33 compute-0 sshd-session[233792]: Failed password for root from 221.159.21.170 port 55566 ssh2
Oct 11 04:43:34 compute-0 sudo[234628]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prlkjspiotpxcerivbydnqdjghywubrl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157814.0153584-566-108510701902245/AnsiballZ_lineinfile.py'
Oct 11 04:43:34 compute-0 sudo[234628]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:43:34 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v634: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:43:34 compute-0 python3.9[234630]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:43:34 compute-0 sudo[234628]: pam_unix(sudo:session): session closed for user root
Oct 11 04:43:34 compute-0 sshd-session[233792]: Connection closed by authenticating user root 221.159.21.170 port 55566 [preauth]
Oct 11 04:43:34 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:43:35 compute-0 sudo[234780]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygevkajmapzevrbbpzxnocfjfvrtphev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157814.7033129-595-14286526542194/AnsiballZ_stat.py'
Oct 11 04:43:35 compute-0 sudo[234780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:43:35 compute-0 python3.9[234782]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:43:35 compute-0 sudo[234780]: pam_unix(sudo:session): session closed for user root
Oct 11 04:43:35 compute-0 ceph-mon[74243]: pgmap v634: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:43:35 compute-0 sudo[234936]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpfsrjckufrwrqzizolqpzcyeinrsevs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157815.4752443-603-187364453643297/AnsiballZ_file.py'
Oct 11 04:43:35 compute-0 sudo[234936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:43:35 compute-0 python3.9[234938]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:43:36 compute-0 sudo[234936]: pam_unix(sudo:session): session closed for user root
Oct 11 04:43:36 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v635: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:43:36 compute-0 sudo[235088]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlillmcudnrvsdfqknanvgqioeomyunl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157816.2866776-612-205241283203976/AnsiballZ_file.py'
Oct 11 04:43:36 compute-0 sudo[235088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:43:36 compute-0 python3.9[235090]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:43:36 compute-0 sudo[235088]: pam_unix(sudo:session): session closed for user root
Oct 11 04:43:37 compute-0 sudo[235240]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgryvpotoxhbohypxeqvdhvlpnurxryo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157817.0787914-620-164654949145181/AnsiballZ_stat.py'
Oct 11 04:43:37 compute-0 sudo[235240]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:43:37 compute-0 ceph-mon[74243]: pgmap v635: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:43:37 compute-0 python3.9[235242]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:43:37 compute-0 sudo[235240]: pam_unix(sudo:session): session closed for user root
Oct 11 04:43:37 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Oct 11 04:43:37 compute-0 systemd[1]: virtqemud.service: Deactivated successfully.
Oct 11 04:43:37 compute-0 sudo[235320]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhhorewprvythfjtbdjrpkzssrfcsmqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157817.0787914-620-164654949145181/AnsiballZ_file.py'
Oct 11 04:43:37 compute-0 sudo[235320]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:43:38 compute-0 python3.9[235322]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:43:38 compute-0 sudo[235320]: pam_unix(sudo:session): session closed for user root
Oct 11 04:43:38 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v636: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:43:38 compute-0 podman[235446]: 2025-10-11 04:43:38.774377871 +0000 UTC m=+0.068977918 container health_status 4f5a84ad32cb899a70b226fbd9c8f419e9440a5ac99b188805a8bf8e9253a2d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 11 04:43:38 compute-0 sudo[235488]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igublpniwljruexpxoraldjapghaxplq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157818.363534-620-127935458124679/AnsiballZ_stat.py'
Oct 11 04:43:38 compute-0 sudo[235488]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:43:38 compute-0 podman[235492]: 2025-10-11 04:43:38.965538602 +0000 UTC m=+0.143290160 container health_status 086abfbe8dbe9e4742c5d0e8540a7653c1c0a5c00ecda3b6e948040a718938cb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 11 04:43:39 compute-0 python3.9[235493]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:43:39 compute-0 sudo[235488]: pam_unix(sudo:session): session closed for user root
Oct 11 04:43:39 compute-0 sudo[235594]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jozhktctokhujkkuvkqdcxqreuibifkh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157818.363534-620-127935458124679/AnsiballZ_file.py'
Oct 11 04:43:39 compute-0 sudo[235594]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:43:39 compute-0 ceph-mon[74243]: pgmap v636: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:43:39 compute-0 python3.9[235596]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:43:39 compute-0 sudo[235594]: pam_unix(sudo:session): session closed for user root
Oct 11 04:43:39 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:43:40 compute-0 sudo[235746]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yloxqfjixsvdvdnfhttmpkyynexwxzsa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157819.8528125-643-224996703807195/AnsiballZ_file.py'
Oct 11 04:43:40 compute-0 sudo[235746]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:43:40 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v637: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:43:40 compute-0 python3.9[235748]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:43:40 compute-0 sudo[235746]: pam_unix(sudo:session): session closed for user root
Oct 11 04:43:40 compute-0 sudo[235898]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwoeoevpvrrqzbpxrdoqxmwshluxndkm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157820.653642-651-254468400665549/AnsiballZ_stat.py'
Oct 11 04:43:40 compute-0 sudo[235898]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:43:41 compute-0 python3.9[235900]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:43:41 compute-0 sudo[235898]: pam_unix(sudo:session): session closed for user root
Oct 11 04:43:41 compute-0 sudo[235976]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wcciupfewiauszotjjcyftuscbfgvwai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157820.653642-651-254468400665549/AnsiballZ_file.py'
Oct 11 04:43:41 compute-0 sudo[235976]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:43:41 compute-0 ceph-mon[74243]: pgmap v637: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:43:41 compute-0 python3.9[235978]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:43:41 compute-0 sudo[235976]: pam_unix(sudo:session): session closed for user root
Oct 11 04:43:42 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v638: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:43:42 compute-0 sudo[236128]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgkaplnbnegrqalpssmowbdcmidarfgu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157822.1366875-663-39598208907643/AnsiballZ_stat.py'
Oct 11 04:43:42 compute-0 sudo[236128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:43:42 compute-0 python3.9[236130]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:43:42 compute-0 sudo[236128]: pam_unix(sudo:session): session closed for user root
Oct 11 04:43:43 compute-0 sudo[236206]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fogaznirleshkidsojfevisdrnncxolc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157822.1366875-663-39598208907643/AnsiballZ_file.py'
Oct 11 04:43:43 compute-0 sudo[236206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:43:43 compute-0 python3.9[236208]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:43:43 compute-0 sudo[236206]: pam_unix(sudo:session): session closed for user root
Oct 11 04:43:43 compute-0 ceph-mon[74243]: pgmap v638: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:43:43 compute-0 sudo[236358]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xisnjcabbglgvjucjwhdjyqlxftsklqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157823.520317-675-11226457454893/AnsiballZ_systemd.py'
Oct 11 04:43:43 compute-0 sudo[236358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:43:44 compute-0 python3.9[236360]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:43:44 compute-0 systemd[1]: Reloading.
Oct 11 04:43:44 compute-0 systemd-rc-local-generator[236389]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:43:44 compute-0 systemd-sysv-generator[236392]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:43:44 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v639: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:43:44 compute-0 sudo[236358]: pam_unix(sudo:session): session closed for user root
Oct 11 04:43:44 compute-0 podman[236398]: 2025-10-11 04:43:44.666975704 +0000 UTC m=+0.082394468 container health_status 7d738271425699243ade5f883e5c9759d51b96fd0ce562173990a66d2fbaffdf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:43:44 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:43:45 compute-0 sudo[236567]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmxcqansbmghvjqfohusxxukarxddpdw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157824.8728118-683-130895067762541/AnsiballZ_stat.py'
Oct 11 04:43:45 compute-0 sudo[236567]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:43:45 compute-0 python3.9[236569]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:43:45 compute-0 sudo[236567]: pam_unix(sudo:session): session closed for user root
Oct 11 04:43:45 compute-0 ceph-mon[74243]: pgmap v639: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:43:45 compute-0 sudo[236645]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxoctuhoiavbyvwjqakesoaivkfgqmiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157824.8728118-683-130895067762541/AnsiballZ_file.py'
Oct 11 04:43:45 compute-0 sudo[236645]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:43:46 compute-0 python3.9[236647]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:43:46 compute-0 sudo[236645]: pam_unix(sudo:session): session closed for user root
Oct 11 04:43:46 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v640: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:43:46 compute-0 sudo[236797]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrqptpryyjqctrvayklnqgvqxyqkshwr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157826.2667437-695-95970897155287/AnsiballZ_stat.py'
Oct 11 04:43:46 compute-0 sudo[236797]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:43:46 compute-0 python3.9[236799]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:43:46 compute-0 sudo[236797]: pam_unix(sudo:session): session closed for user root
Oct 11 04:43:47 compute-0 sudo[236875]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdkomnijhehrzvvqetrobliaafvuauxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157826.2667437-695-95970897155287/AnsiballZ_file.py'
Oct 11 04:43:47 compute-0 sudo[236875]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:43:47 compute-0 python3.9[236877]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:43:47 compute-0 sudo[236875]: pam_unix(sudo:session): session closed for user root
Oct 11 04:43:47 compute-0 ceph-mon[74243]: pgmap v640: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:43:48 compute-0 sudo[237027]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mowzectdlnstfircrduljonstswxdcby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157827.6541183-707-32213079838763/AnsiballZ_systemd.py'
Oct 11 04:43:48 compute-0 sudo[237027]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:43:48 compute-0 python3.9[237029]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:43:48 compute-0 systemd[1]: Reloading.
Oct 11 04:43:48 compute-0 systemd-sysv-generator[237061]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:43:48 compute-0 systemd-rc-local-generator[237057]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:43:48 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v641: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:43:48 compute-0 systemd[1]: Starting Create netns directory...
Oct 11 04:43:48 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 11 04:43:48 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 11 04:43:48 compute-0 systemd[1]: Finished Create netns directory.
Oct 11 04:43:48 compute-0 sudo[237027]: pam_unix(sudo:session): session closed for user root
Oct 11 04:43:49 compute-0 sudo[237219]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbgkslvmagqovonalrpysrwxpgsmqzap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157829.1368577-717-161132908325478/AnsiballZ_file.py'
Oct 11 04:43:49 compute-0 sudo[237219]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:43:49 compute-0 ceph-mon[74243]: pgmap v641: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:43:49 compute-0 python3.9[237221]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:43:49 compute-0 sudo[237219]: pam_unix(sudo:session): session closed for user root
Oct 11 04:43:49 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:43:50 compute-0 sudo[237371]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzplsywldrehvdcpyvpvjfowkqjqvjne ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157829.9057875-725-105636156500061/AnsiballZ_stat.py'
Oct 11 04:43:50 compute-0 sudo[237371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:43:50 compute-0 python3.9[237373]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:43:50 compute-0 sudo[237371]: pam_unix(sudo:session): session closed for user root
Oct 11 04:43:50 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v642: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:43:50 compute-0 ceph-mon[74243]: pgmap v642: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:43:50 compute-0 sudo[237494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-filuonvfvicpinqunmjxaizmgcwxokew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157829.9057875-725-105636156500061/AnsiballZ_copy.py'
Oct 11 04:43:50 compute-0 sudo[237494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:43:51 compute-0 python3.9[237496]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760157829.9057875-725-105636156500061/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:43:51 compute-0 sudo[237494]: pam_unix(sudo:session): session closed for user root
Oct 11 04:43:51 compute-0 sudo[237646]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jekdgdaelahokqqyzbygvmbmnqopufno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157831.564747-742-121967389416358/AnsiballZ_file.py'
Oct 11 04:43:51 compute-0 sudo[237646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:43:52 compute-0 python3.9[237648]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:43:52 compute-0 sudo[237646]: pam_unix(sudo:session): session closed for user root
Oct 11 04:43:52 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v643: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:43:52 compute-0 sudo[237798]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxdttkgnchkfrdgvbkghtkvtnxrobnmx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157832.3502316-750-31059859903322/AnsiballZ_stat.py'
Oct 11 04:43:52 compute-0 sudo[237798]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:43:52 compute-0 python3.9[237800]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:43:52 compute-0 sudo[237798]: pam_unix(sudo:session): session closed for user root
Oct 11 04:43:53 compute-0 sudo[237921]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xckqbhzlsjmmdzjfsdhwxrnvgeufbrfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157832.3502316-750-31059859903322/AnsiballZ_copy.py'
Oct 11 04:43:53 compute-0 sudo[237921]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:43:53 compute-0 ceph-mon[74243]: pgmap v643: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:43:53 compute-0 python3.9[237923]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760157832.3502316-750-31059859903322/.source.json _original_basename=.vpascrh0 follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:43:53 compute-0 sudo[237921]: pam_unix(sudo:session): session closed for user root
Oct 11 04:43:54 compute-0 sudo[238073]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oainvwhwjngryonjycvfjfnzhwsmhonc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157833.7578814-765-41650289171751/AnsiballZ_file.py'
Oct 11 04:43:54 compute-0 sudo[238073]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:43:54 compute-0 python3.9[238075]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:43:54 compute-0 sudo[238073]: pam_unix(sudo:session): session closed for user root
Oct 11 04:43:54 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v644: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:43:54 compute-0 sudo[238225]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qsyxdadhysrfahdhczjkihrtvqsvqlbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157834.502549-773-106712153635515/AnsiballZ_stat.py'
Oct 11 04:43:54 compute-0 sudo[238225]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:43:54 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:43:55 compute-0 sudo[238225]: pam_unix(sudo:session): session closed for user root
Oct 11 04:43:55 compute-0 sudo[238348]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvdhkhglhnvgwslrkhrxqnvzabfhmlvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157834.502549-773-106712153635515/AnsiballZ_copy.py'
Oct 11 04:43:55 compute-0 sudo[238348]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:43:55 compute-0 ceph-mon[74243]: pgmap v644: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:43:55 compute-0 sudo[238348]: pam_unix(sudo:session): session closed for user root
Oct 11 04:43:56 compute-0 ceph-mgr[74542]: [balancer INFO root] Optimize plan auto_2025-10-11_04:43:56
Oct 11 04:43:56 compute-0 ceph-mgr[74542]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:43:56 compute-0 ceph-mgr[74542]: [balancer INFO root] do_upmap
Oct 11 04:43:56 compute-0 ceph-mgr[74542]: [balancer INFO root] pools ['volumes', '.rgw.root', 'vms', 'default.rgw.control', 'images', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.log', 'backups', 'default.rgw.meta']
Oct 11 04:43:56 compute-0 ceph-mgr[74542]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:43:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:43:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:43:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:43:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:43:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:43:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:43:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:43:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:43:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:43:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:43:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:43:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:43:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:43:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:43:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:43:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:43:56 compute-0 sudo[238450]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:43:56 compute-0 sudo[238450]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:43:56 compute-0 sudo[238450]: pam_unix(sudo:session): session closed for user root
Oct 11 04:43:56 compute-0 sudo[238487]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:43:56 compute-0 sudo[238487]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:43:56 compute-0 sudo[238487]: pam_unix(sudo:session): session closed for user root
Oct 11 04:43:56 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v645: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:43:56 compute-0 sudo[238562]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgahkktoeuclqxwbjmxetgfristqqiko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157836.1108963-790-143281165131841/AnsiballZ_container_config_data.py'
Oct 11 04:43:56 compute-0 sudo[238562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:43:56 compute-0 sudo[238547]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:43:56 compute-0 sudo[238547]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:43:56 compute-0 sudo[238547]: pam_unix(sudo:session): session closed for user root
Oct 11 04:43:56 compute-0 sudo[238578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 04:43:56 compute-0 sudo[238578]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:43:56 compute-0 python3.9[238575]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Oct 11 04:43:56 compute-0 sudo[238562]: pam_unix(sudo:session): session closed for user root
Oct 11 04:43:57 compute-0 sudo[238578]: pam_unix(sudo:session): session closed for user root
Oct 11 04:43:57 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:43:57 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:43:57 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:43:57 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:43:57 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:43:57 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:43:57 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev 1a947512-5ffe-4293-9c07-0aaade7a220f does not exist
Oct 11 04:43:57 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:43:57 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:43:57 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev f8953a03-edc0-4e98-a256-330e3a9cfb71 does not exist
Oct 11 04:43:57 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev d87ac9fc-31be-4706-b87f-aef3e3a95387 does not exist
Oct 11 04:43:57 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:43:57 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:43:57 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:43:57 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:43:57 compute-0 sudo[238733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:43:57 compute-0 sudo[238733]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:43:57 compute-0 sudo[238733]: pam_unix(sudo:session): session closed for user root
Oct 11 04:43:57 compute-0 sudo[238758]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:43:57 compute-0 sudo[238758]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:43:57 compute-0 sudo[238758]: pam_unix(sudo:session): session closed for user root
Oct 11 04:43:57 compute-0 sudo[238807]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:43:57 compute-0 sudo[238807]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:43:57 compute-0 sudo[238807]: pam_unix(sudo:session): session closed for user root
Oct 11 04:43:57 compute-0 sudo[238857]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgsakvwegudpelmfiuibvffpnkdytnyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157836.9855115-799-214196369379150/AnsiballZ_container_config_hash.py'
Oct 11 04:43:57 compute-0 sudo[238857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:43:57 compute-0 sudo[238859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 04:43:57 compute-0 sudo[238859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:43:57 compute-0 python3.9[238866]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 11 04:43:57 compute-0 sudo[238857]: pam_unix(sudo:session): session closed for user root
Oct 11 04:43:57 compute-0 ceph-mon[74243]: pgmap v645: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:43:57 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:43:57 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:43:57 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:43:57 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:43:57 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:43:57 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:43:57 compute-0 podman[238950]: 2025-10-11 04:43:57.710442141 +0000 UTC m=+0.058191186 container create 9a6968156645cb212004344666e40bd8c19d2be0ec558edb44dafb55b2d3aa5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_brahmagupta, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 11 04:43:57 compute-0 systemd[1]: Started libpod-conmon-9a6968156645cb212004344666e40bd8c19d2be0ec558edb44dafb55b2d3aa5a.scope.
Oct 11 04:43:57 compute-0 podman[238950]: 2025-10-11 04:43:57.680786159 +0000 UTC m=+0.028535234 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:43:57 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:43:57 compute-0 podman[238950]: 2025-10-11 04:43:57.808566187 +0000 UTC m=+0.156315242 container init 9a6968156645cb212004344666e40bd8c19d2be0ec558edb44dafb55b2d3aa5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 11 04:43:57 compute-0 podman[238950]: 2025-10-11 04:43:57.819156115 +0000 UTC m=+0.166905130 container start 9a6968156645cb212004344666e40bd8c19d2be0ec558edb44dafb55b2d3aa5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_brahmagupta, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:43:57 compute-0 podman[238950]: 2025-10-11 04:43:57.82289821 +0000 UTC m=+0.170647225 container attach 9a6968156645cb212004344666e40bd8c19d2be0ec558edb44dafb55b2d3aa5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_brahmagupta, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 11 04:43:57 compute-0 practical_brahmagupta[238979]: 167 167
Oct 11 04:43:57 compute-0 systemd[1]: libpod-9a6968156645cb212004344666e40bd8c19d2be0ec558edb44dafb55b2d3aa5a.scope: Deactivated successfully.
Oct 11 04:43:57 compute-0 podman[238950]: 2025-10-11 04:43:57.827286951 +0000 UTC m=+0.175035996 container died 9a6968156645cb212004344666e40bd8c19d2be0ec558edb44dafb55b2d3aa5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_brahmagupta, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:43:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-8bac7a6ca53a54e3caf8c706dc60a5a7ad431d1d59601e0dfe82884d03fdd4de-merged.mount: Deactivated successfully.
Oct 11 04:43:57 compute-0 podman[238950]: 2025-10-11 04:43:57.889905857 +0000 UTC m=+0.237654892 container remove 9a6968156645cb212004344666e40bd8c19d2be0ec558edb44dafb55b2d3aa5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 11 04:43:57 compute-0 systemd[1]: libpod-conmon-9a6968156645cb212004344666e40bd8c19d2be0ec558edb44dafb55b2d3aa5a.scope: Deactivated successfully.
Oct 11 04:43:58 compute-0 sudo[239128]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtiyztodrzvlfllhfhltdgnzybseplhj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157837.790082-808-265811731640/AnsiballZ_podman_container_info.py'
Oct 11 04:43:58 compute-0 sudo[239128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:43:58 compute-0 podman[239089]: 2025-10-11 04:43:58.127429425 +0000 UTC m=+0.069569784 container create 99ad5cbc26d9aee62f5ee21cf561de66b268a9a7c13c14472ed05bfc9f6db659 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_lovelace, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 11 04:43:58 compute-0 systemd[1]: Started libpod-conmon-99ad5cbc26d9aee62f5ee21cf561de66b268a9a7c13c14472ed05bfc9f6db659.scope.
Oct 11 04:43:58 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:43:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d926e4dfa54f6d87fdbfe79b698aac01b7749fb29e30cb8350b389520daecdd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:43:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d926e4dfa54f6d87fdbfe79b698aac01b7749fb29e30cb8350b389520daecdd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:43:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d926e4dfa54f6d87fdbfe79b698aac01b7749fb29e30cb8350b389520daecdd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:43:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d926e4dfa54f6d87fdbfe79b698aac01b7749fb29e30cb8350b389520daecdd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:43:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d926e4dfa54f6d87fdbfe79b698aac01b7749fb29e30cb8350b389520daecdd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:43:58 compute-0 podman[239089]: 2025-10-11 04:43:58.105478179 +0000 UTC m=+0.047618598 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:43:58 compute-0 podman[239089]: 2025-10-11 04:43:58.208499019 +0000 UTC m=+0.150639408 container init 99ad5cbc26d9aee62f5ee21cf561de66b268a9a7c13c14472ed05bfc9f6db659 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:43:58 compute-0 podman[239089]: 2025-10-11 04:43:58.217277821 +0000 UTC m=+0.159418190 container start 99ad5cbc26d9aee62f5ee21cf561de66b268a9a7c13c14472ed05bfc9f6db659 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 11 04:43:58 compute-0 podman[239089]: 2025-10-11 04:43:58.220854672 +0000 UTC m=+0.162995041 container attach 99ad5cbc26d9aee62f5ee21cf561de66b268a9a7c13c14472ed05bfc9f6db659 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_lovelace, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 11 04:43:58 compute-0 python3.9[239130]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 11 04:43:58 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v646: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:43:58 compute-0 sudo[239128]: pam_unix(sudo:session): session closed for user root
Oct 11 04:43:59 compute-0 pedantic_lovelace[239135]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:43:59 compute-0 pedantic_lovelace[239135]: --> relative data size: 1.0
Oct 11 04:43:59 compute-0 pedantic_lovelace[239135]: --> All data devices are unavailable
Oct 11 04:43:59 compute-0 systemd[1]: libpod-99ad5cbc26d9aee62f5ee21cf561de66b268a9a7c13c14472ed05bfc9f6db659.scope: Deactivated successfully.
Oct 11 04:43:59 compute-0 podman[239089]: 2025-10-11 04:43:59.152135754 +0000 UTC m=+1.094276153 container died 99ad5cbc26d9aee62f5ee21cf561de66b268a9a7c13c14472ed05bfc9f6db659 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_lovelace, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:43:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-9d926e4dfa54f6d87fdbfe79b698aac01b7749fb29e30cb8350b389520daecdd-merged.mount: Deactivated successfully.
Oct 11 04:43:59 compute-0 podman[239089]: 2025-10-11 04:43:59.221947983 +0000 UTC m=+1.164088362 container remove 99ad5cbc26d9aee62f5ee21cf561de66b268a9a7c13c14472ed05bfc9f6db659 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 11 04:43:59 compute-0 systemd[1]: libpod-conmon-99ad5cbc26d9aee62f5ee21cf561de66b268a9a7c13c14472ed05bfc9f6db659.scope: Deactivated successfully.
Oct 11 04:43:59 compute-0 sudo[238859]: pam_unix(sudo:session): session closed for user root
Oct 11 04:43:59 compute-0 sudo[239226]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:43:59 compute-0 sudo[239226]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:43:59 compute-0 sudo[239226]: pam_unix(sudo:session): session closed for user root
Oct 11 04:43:59 compute-0 sudo[239279]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:43:59 compute-0 sudo[239279]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:43:59 compute-0 sudo[239279]: pam_unix(sudo:session): session closed for user root
Oct 11 04:43:59 compute-0 sudo[239331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:43:59 compute-0 sudo[239331]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:43:59 compute-0 sudo[239331]: pam_unix(sudo:session): session closed for user root
Oct 11 04:43:59 compute-0 ceph-mon[74243]: pgmap v646: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:43:59 compute-0 sudo[239375]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -- lvm list --format json
Oct 11 04:43:59 compute-0 sudo[239375]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:43:59 compute-0 sudo[239450]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dyczkgfmukllwwubdqjcecjnvqggpazj ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1760157839.3220415-821-275214955173488/AnsiballZ_edpm_container_manage.py'
Oct 11 04:43:59 compute-0 sudo[239450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:43:59 compute-0 python3[239452]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 11 04:43:59 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:44:00 compute-0 podman[239503]: 2025-10-11 04:44:00.004121529 +0000 UTC m=+0.045542605 container create 0bf52cf6c32dd76fcbc41d43a1bd581b08746a50b13bad6fdad9fd949ad741b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_darwin, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 11 04:44:00 compute-0 systemd[1]: Started libpod-conmon-0bf52cf6c32dd76fcbc41d43a1bd581b08746a50b13bad6fdad9fd949ad741b3.scope.
Oct 11 04:44:00 compute-0 podman[239503]: 2025-10-11 04:43:59.984263606 +0000 UTC m=+0.025684692 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:44:00 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:44:00 compute-0 podman[239503]: 2025-10-11 04:44:00.100114791 +0000 UTC m=+0.141535877 container init 0bf52cf6c32dd76fcbc41d43a1bd581b08746a50b13bad6fdad9fd949ad741b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_darwin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 11 04:44:00 compute-0 podman[239503]: 2025-10-11 04:44:00.106296457 +0000 UTC m=+0.147717523 container start 0bf52cf6c32dd76fcbc41d43a1bd581b08746a50b13bad6fdad9fd949ad741b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_darwin, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:44:00 compute-0 podman[239503]: 2025-10-11 04:44:00.109313044 +0000 UTC m=+0.150734110 container attach 0bf52cf6c32dd76fcbc41d43a1bd581b08746a50b13bad6fdad9fd949ad741b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_darwin, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:44:00 compute-0 zealous_darwin[239529]: 167 167
Oct 11 04:44:00 compute-0 systemd[1]: libpod-0bf52cf6c32dd76fcbc41d43a1bd581b08746a50b13bad6fdad9fd949ad741b3.scope: Deactivated successfully.
Oct 11 04:44:00 compute-0 conmon[239529]: conmon 0bf52cf6c32dd76fcbc4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0bf52cf6c32dd76fcbc41d43a1bd581b08746a50b13bad6fdad9fd949ad741b3.scope/container/memory.events
Oct 11 04:44:00 compute-0 podman[239503]: 2025-10-11 04:44:00.113461849 +0000 UTC m=+0.154882925 container died 0bf52cf6c32dd76fcbc41d43a1bd581b08746a50b13bad6fdad9fd949ad741b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 11 04:44:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-173d9f9713f6bfad70e38f4bf111f76744ec778cc650c5f7daee423c32dd8730-merged.mount: Deactivated successfully.
Oct 11 04:44:00 compute-0 podman[239503]: 2025-10-11 04:44:00.152551209 +0000 UTC m=+0.193972315 container remove 0bf52cf6c32dd76fcbc41d43a1bd581b08746a50b13bad6fdad9fd949ad741b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_darwin, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3)
Oct 11 04:44:00 compute-0 systemd[1]: libpod-conmon-0bf52cf6c32dd76fcbc41d43a1bd581b08746a50b13bad6fdad9fd949ad741b3.scope: Deactivated successfully.
Oct 11 04:44:00 compute-0 podman[239556]: 2025-10-11 04:44:00.384948517 +0000 UTC m=+0.057819346 container create 502b2746b44b15b6ac9947615e01f033d281cb7a46e6bb5d7fc01e2213ef1c1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_heisenberg, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:44:00 compute-0 systemd[1]: Started libpod-conmon-502b2746b44b15b6ac9947615e01f033d281cb7a46e6bb5d7fc01e2213ef1c1a.scope.
Oct 11 04:44:00 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:44:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a72a5e42d0f116032770a1620dc43563006d668ac9a5334fee027411de78b47/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:44:00 compute-0 podman[239556]: 2025-10-11 04:44:00.364854598 +0000 UTC m=+0.037725457 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:44:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a72a5e42d0f116032770a1620dc43563006d668ac9a5334fee027411de78b47/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:44:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a72a5e42d0f116032770a1620dc43563006d668ac9a5334fee027411de78b47/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:44:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a72a5e42d0f116032770a1620dc43563006d668ac9a5334fee027411de78b47/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:44:00 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v647: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:44:00 compute-0 podman[239556]: 2025-10-11 04:44:00.478882586 +0000 UTC m=+0.151753496 container init 502b2746b44b15b6ac9947615e01f033d281cb7a46e6bb5d7fc01e2213ef1c1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 11 04:44:00 compute-0 podman[239556]: 2025-10-11 04:44:00.486881069 +0000 UTC m=+0.159751928 container start 502b2746b44b15b6ac9947615e01f033d281cb7a46e6bb5d7fc01e2213ef1c1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_heisenberg, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:44:00 compute-0 podman[239556]: 2025-10-11 04:44:00.491202389 +0000 UTC m=+0.164073288 container attach 502b2746b44b15b6ac9947615e01f033d281cb7a46e6bb5d7fc01e2213ef1c1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_heisenberg, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 11 04:44:01 compute-0 podman[239510]: 2025-10-11 04:44:01.241936928 +0000 UTC m=+1.265661716 image pull afce23cfe475a7c4b16d233ab936a7b07069ccb13842b1c95ba43e4b3f92adfb quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Oct 11 04:44:01 compute-0 optimistic_heisenberg[239573]: {
Oct 11 04:44:01 compute-0 optimistic_heisenberg[239573]:     "0": [
Oct 11 04:44:01 compute-0 optimistic_heisenberg[239573]:         {
Oct 11 04:44:01 compute-0 optimistic_heisenberg[239573]:             "devices": [
Oct 11 04:44:01 compute-0 optimistic_heisenberg[239573]:                 "/dev/loop3"
Oct 11 04:44:01 compute-0 optimistic_heisenberg[239573]:             ],
Oct 11 04:44:01 compute-0 optimistic_heisenberg[239573]:             "lv_name": "ceph_lv0",
Oct 11 04:44:01 compute-0 optimistic_heisenberg[239573]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:44:01 compute-0 optimistic_heisenberg[239573]:             "lv_size": "21470642176",
Oct 11 04:44:01 compute-0 optimistic_heisenberg[239573]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29ed28f5-c2da-4c6f-bb64-dc7391248f4a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:44:01 compute-0 optimistic_heisenberg[239573]:             "lv_uuid": "SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf",
Oct 11 04:44:01 compute-0 optimistic_heisenberg[239573]:             "name": "ceph_lv0",
Oct 11 04:44:01 compute-0 optimistic_heisenberg[239573]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:44:01 compute-0 optimistic_heisenberg[239573]:             "tags": {
Oct 11 04:44:01 compute-0 optimistic_heisenberg[239573]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:44:01 compute-0 optimistic_heisenberg[239573]:                 "ceph.block_uuid": "SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf",
Oct 11 04:44:01 compute-0 optimistic_heisenberg[239573]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:44:01 compute-0 optimistic_heisenberg[239573]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:44:01 compute-0 optimistic_heisenberg[239573]:                 "ceph.cluster_name": "ceph",
Oct 11 04:44:01 compute-0 optimistic_heisenberg[239573]:                 "ceph.crush_device_class": "",
Oct 11 04:44:01 compute-0 optimistic_heisenberg[239573]:                 "ceph.encrypted": "0",
Oct 11 04:44:01 compute-0 optimistic_heisenberg[239573]:                 "ceph.osd_fsid": "29ed28f5-c2da-4c6f-bb64-dc7391248f4a",
Oct 11 04:44:01 compute-0 optimistic_heisenberg[239573]:                 "ceph.osd_id": "0",
Oct 11 04:44:01 compute-0 optimistic_heisenberg[239573]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:44:01 compute-0 optimistic_heisenberg[239573]:                 "ceph.type": "block",
Oct 11 04:44:01 compute-0 optimistic_heisenberg[239573]:                 "ceph.vdo": "0"
Oct 11 04:44:01 compute-0 optimistic_heisenberg[239573]:             },
Oct 11 04:44:01 compute-0 optimistic_heisenberg[239573]:             "type": "block",
Oct 11 04:44:01 compute-0 optimistic_heisenberg[239573]:             "vg_name": "ceph_vg0"
Oct 11 04:44:01 compute-0 optimistic_heisenberg[239573]:         }
Oct 11 04:44:01 compute-0 optimistic_heisenberg[239573]:     ],
Oct 11 04:44:01 compute-0 optimistic_heisenberg[239573]:     "1": [
Oct 11 04:44:01 compute-0 optimistic_heisenberg[239573]:         {
Oct 11 04:44:01 compute-0 optimistic_heisenberg[239573]:             "devices": [
Oct 11 04:44:01 compute-0 optimistic_heisenberg[239573]:                 "/dev/loop4"
Oct 11 04:44:01 compute-0 optimistic_heisenberg[239573]:             ],
Oct 11 04:44:01 compute-0 optimistic_heisenberg[239573]:             "lv_name": "ceph_lv1",
Oct 11 04:44:01 compute-0 optimistic_heisenberg[239573]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:44:01 compute-0 optimistic_heisenberg[239573]:             "lv_size": "21470642176",
Oct 11 04:44:01 compute-0 optimistic_heisenberg[239573]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=235849fc-4683-43e5-9b6a-a0d6f8d1cee8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:44:01 compute-0 optimistic_heisenberg[239573]:             "lv_uuid": "N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol",
Oct 11 04:44:01 compute-0 optimistic_heisenberg[239573]:             "name": "ceph_lv1",
Oct 11 04:44:01 compute-0 optimistic_heisenberg[239573]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:44:01 compute-0 optimistic_heisenberg[239573]:             "tags": {
Oct 11 04:44:01 compute-0 optimistic_heisenberg[239573]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:44:01 compute-0 optimistic_heisenberg[239573]:                 "ceph.block_uuid": "N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol",
Oct 11 04:44:01 compute-0 optimistic_heisenberg[239573]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:44:01 compute-0 optimistic_heisenberg[239573]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:44:01 compute-0 optimistic_heisenberg[239573]:                 "ceph.cluster_name": "ceph",
Oct 11 04:44:01 compute-0 optimistic_heisenberg[239573]:                 "ceph.crush_device_class": "",
Oct 11 04:44:01 compute-0 optimistic_heisenberg[239573]:                 "ceph.encrypted": "0",
Oct 11 04:44:01 compute-0 optimistic_heisenberg[239573]:                 "ceph.osd_fsid": "235849fc-4683-43e5-9b6a-a0d6f8d1cee8",
Oct 11 04:44:01 compute-0 optimistic_heisenberg[239573]:                 "ceph.osd_id": "1",
Oct 11 04:44:01 compute-0 optimistic_heisenberg[239573]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:44:01 compute-0 optimistic_heisenberg[239573]:                 "ceph.type": "block",
Oct 11 04:44:01 compute-0 optimistic_heisenberg[239573]:                 "ceph.vdo": "0"
Oct 11 04:44:01 compute-0 optimistic_heisenberg[239573]:             },
Oct 11 04:44:01 compute-0 optimistic_heisenberg[239573]:             "type": "block",
Oct 11 04:44:01 compute-0 optimistic_heisenberg[239573]:             "vg_name": "ceph_vg1"
Oct 11 04:44:01 compute-0 optimistic_heisenberg[239573]:         }
Oct 11 04:44:01 compute-0 optimistic_heisenberg[239573]:     ],
Oct 11 04:44:01 compute-0 optimistic_heisenberg[239573]:     "2": [
Oct 11 04:44:01 compute-0 optimistic_heisenberg[239573]:         {
Oct 11 04:44:01 compute-0 optimistic_heisenberg[239573]:             "devices": [
Oct 11 04:44:01 compute-0 optimistic_heisenberg[239573]:                 "/dev/loop5"
Oct 11 04:44:01 compute-0 optimistic_heisenberg[239573]:             ],
Oct 11 04:44:01 compute-0 optimistic_heisenberg[239573]:             "lv_name": "ceph_lv2",
Oct 11 04:44:01 compute-0 optimistic_heisenberg[239573]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:44:01 compute-0 optimistic_heisenberg[239573]:             "lv_size": "21470642176",
Oct 11 04:44:01 compute-0 optimistic_heisenberg[239573]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=30486846-2cc7-4787-8e36-0fb42ef328c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:44:01 compute-0 optimistic_heisenberg[239573]:             "lv_uuid": "HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a",
Oct 11 04:44:01 compute-0 optimistic_heisenberg[239573]:             "name": "ceph_lv2",
Oct 11 04:44:01 compute-0 optimistic_heisenberg[239573]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:44:01 compute-0 optimistic_heisenberg[239573]:             "tags": {
Oct 11 04:44:01 compute-0 optimistic_heisenberg[239573]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:44:01 compute-0 optimistic_heisenberg[239573]:                 "ceph.block_uuid": "HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a",
Oct 11 04:44:01 compute-0 optimistic_heisenberg[239573]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:44:01 compute-0 optimistic_heisenberg[239573]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:44:01 compute-0 optimistic_heisenberg[239573]:                 "ceph.cluster_name": "ceph",
Oct 11 04:44:01 compute-0 optimistic_heisenberg[239573]:                 "ceph.crush_device_class": "",
Oct 11 04:44:01 compute-0 optimistic_heisenberg[239573]:                 "ceph.encrypted": "0",
Oct 11 04:44:01 compute-0 optimistic_heisenberg[239573]:                 "ceph.osd_fsid": "30486846-2cc7-4787-8e36-0fb42ef328c5",
Oct 11 04:44:01 compute-0 optimistic_heisenberg[239573]:                 "ceph.osd_id": "2",
Oct 11 04:44:01 compute-0 optimistic_heisenberg[239573]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:44:01 compute-0 optimistic_heisenberg[239573]:                 "ceph.type": "block",
Oct 11 04:44:01 compute-0 optimistic_heisenberg[239573]:                 "ceph.vdo": "0"
Oct 11 04:44:01 compute-0 optimistic_heisenberg[239573]:             },
Oct 11 04:44:01 compute-0 optimistic_heisenberg[239573]:             "type": "block",
Oct 11 04:44:01 compute-0 optimistic_heisenberg[239573]:             "vg_name": "ceph_vg2"
Oct 11 04:44:01 compute-0 optimistic_heisenberg[239573]:         }
Oct 11 04:44:01 compute-0 optimistic_heisenberg[239573]:     ]
Oct 11 04:44:01 compute-0 optimistic_heisenberg[239573]: }
Oct 11 04:44:01 compute-0 systemd[1]: libpod-502b2746b44b15b6ac9947615e01f033d281cb7a46e6bb5d7fc01e2213ef1c1a.scope: Deactivated successfully.
Oct 11 04:44:01 compute-0 podman[239556]: 2025-10-11 04:44:01.327385623 +0000 UTC m=+1.000256512 container died 502b2746b44b15b6ac9947615e01f033d281cb7a46e6bb5d7fc01e2213ef1c1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:44:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-1a72a5e42d0f116032770a1620dc43563006d668ac9a5334fee027411de78b47-merged.mount: Deactivated successfully.
Oct 11 04:44:01 compute-0 podman[239556]: 2025-10-11 04:44:01.391973479 +0000 UTC m=+1.064844298 container remove 502b2746b44b15b6ac9947615e01f033d281cb7a46e6bb5d7fc01e2213ef1c1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_heisenberg, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:44:01 compute-0 systemd[1]: libpod-conmon-502b2746b44b15b6ac9947615e01f033d281cb7a46e6bb5d7fc01e2213ef1c1a.scope: Deactivated successfully.
Oct 11 04:44:01 compute-0 sudo[239375]: pam_unix(sudo:session): session closed for user root
Oct 11 04:44:01 compute-0 podman[239633]: 2025-10-11 04:44:01.422188005 +0000 UTC m=+0.058120464 container create 981a12d55b51d26e636fa4bc8b08d383168dc6afe664f12473554b2b0c589967 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251009, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 04:44:01 compute-0 podman[239633]: 2025-10-11 04:44:01.391533708 +0000 UTC m=+0.027466257 image pull afce23cfe475a7c4b16d233ab936a7b07069ccb13842b1c95ba43e4b3f92adfb quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Oct 11 04:44:01 compute-0 python3[239452]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Oct 11 04:44:01 compute-0 sudo[239650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:44:01 compute-0 sudo[239650]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:44:01 compute-0 sudo[239650]: pam_unix(sudo:session): session closed for user root
Oct 11 04:44:01 compute-0 sudo[239689]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:44:01 compute-0 sudo[239689]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:44:01 compute-0 sudo[239689]: pam_unix(sudo:session): session closed for user root
Oct 11 04:44:01 compute-0 ceph-mon[74243]: pgmap v647: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:44:01 compute-0 sudo[239450]: pam_unix(sudo:session): session closed for user root
Oct 11 04:44:01 compute-0 sudo[239726]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:44:01 compute-0 sudo[239726]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:44:01 compute-0 sudo[239726]: pam_unix(sudo:session): session closed for user root
Oct 11 04:44:01 compute-0 sudo[239756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -- raw list --format json
Oct 11 04:44:01 compute-0 sudo[239756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:44:02 compute-0 podman[239914]: 2025-10-11 04:44:02.022875903 +0000 UTC m=+0.043345339 container create 1cf5a240b7baa6e724fabc4406a5a7f1a965fb7eb888f17cd15d6870144b7678 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_pike, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:44:02 compute-0 systemd[1]: Started libpod-conmon-1cf5a240b7baa6e724fabc4406a5a7f1a965fb7eb888f17cd15d6870144b7678.scope.
Oct 11 04:44:02 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:44:02 compute-0 podman[239914]: 2025-10-11 04:44:02.001407499 +0000 UTC m=+0.021876965 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:44:02 compute-0 podman[239914]: 2025-10-11 04:44:02.107699982 +0000 UTC m=+0.128169488 container init 1cf5a240b7baa6e724fabc4406a5a7f1a965fb7eb888f17cd15d6870144b7678 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_pike, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 11 04:44:02 compute-0 podman[239914]: 2025-10-11 04:44:02.114637588 +0000 UTC m=+0.135107014 container start 1cf5a240b7baa6e724fabc4406a5a7f1a965fb7eb888f17cd15d6870144b7678 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_pike, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 11 04:44:02 compute-0 podman[239914]: 2025-10-11 04:44:02.118486855 +0000 UTC m=+0.138956381 container attach 1cf5a240b7baa6e724fabc4406a5a7f1a965fb7eb888f17cd15d6870144b7678 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_pike, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:44:02 compute-0 interesting_pike[239930]: 167 167
Oct 11 04:44:02 compute-0 systemd[1]: libpod-1cf5a240b7baa6e724fabc4406a5a7f1a965fb7eb888f17cd15d6870144b7678.scope: Deactivated successfully.
Oct 11 04:44:02 compute-0 podman[239914]: 2025-10-11 04:44:02.120196078 +0000 UTC m=+0.140665534 container died 1cf5a240b7baa6e724fabc4406a5a7f1a965fb7eb888f17cd15d6870144b7678 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_pike, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:44:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-6fb03798545ba543457e79c49094b69fc6a299253702efadbed172e7f385ac1f-merged.mount: Deactivated successfully.
Oct 11 04:44:02 compute-0 podman[239914]: 2025-10-11 04:44:02.169025384 +0000 UTC m=+0.189494810 container remove 1cf5a240b7baa6e724fabc4406a5a7f1a965fb7eb888f17cd15d6870144b7678 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_pike, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 11 04:44:02 compute-0 systemd[1]: libpod-conmon-1cf5a240b7baa6e724fabc4406a5a7f1a965fb7eb888f17cd15d6870144b7678.scope: Deactivated successfully.
Oct 11 04:44:02 compute-0 sudo[240002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cllenmdevzrtmmhhttcoszyisxeknipy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157841.7885592-829-101758184284427/AnsiballZ_stat.py'
Oct 11 04:44:02 compute-0 sudo[240002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:44:02 compute-0 podman[240007]: 2025-10-11 04:44:02.343101615 +0000 UTC m=+0.048560692 container create 081359ad6a8f652d03f8d7ef28b349b6b38167fcb2433b486f04bc5836252092 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_hopper, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 11 04:44:02 compute-0 systemd[1]: Started libpod-conmon-081359ad6a8f652d03f8d7ef28b349b6b38167fcb2433b486f04bc5836252092.scope.
Oct 11 04:44:02 compute-0 podman[240007]: 2025-10-11 04:44:02.319924457 +0000 UTC m=+0.025383634 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:44:02 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:44:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c885876748d1fa408c1e922e15d6abfa2bfebdb232b033fd1f80d3313d8979b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:44:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c885876748d1fa408c1e922e15d6abfa2bfebdb232b033fd1f80d3313d8979b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:44:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c885876748d1fa408c1e922e15d6abfa2bfebdb232b033fd1f80d3313d8979b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:44:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c885876748d1fa408c1e922e15d6abfa2bfebdb232b033fd1f80d3313d8979b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:44:02 compute-0 podman[240007]: 2025-10-11 04:44:02.442181105 +0000 UTC m=+0.147640222 container init 081359ad6a8f652d03f8d7ef28b349b6b38167fcb2433b486f04bc5836252092 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_hopper, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 11 04:44:02 compute-0 podman[240007]: 2025-10-11 04:44:02.454810455 +0000 UTC m=+0.160269522 container start 081359ad6a8f652d03f8d7ef28b349b6b38167fcb2433b486f04bc5836252092 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_hopper, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 11 04:44:02 compute-0 podman[240007]: 2025-10-11 04:44:02.459502903 +0000 UTC m=+0.164961970 container attach 081359ad6a8f652d03f8d7ef28b349b6b38167fcb2433b486f04bc5836252092 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_hopper, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 11 04:44:02 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v648: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:44:02 compute-0 python3.9[240020]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:44:02 compute-0 sudo[240002]: pam_unix(sudo:session): session closed for user root
Oct 11 04:44:03 compute-0 sudo[240183]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cuerrdfgurxhnsakhwyawgmpqqvbhdmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157842.768137-838-24914221901043/AnsiballZ_file.py'
Oct 11 04:44:03 compute-0 sudo[240183]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:44:03 compute-0 python3.9[240187]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:44:03 compute-0 sudo[240183]: pam_unix(sudo:session): session closed for user root
Oct 11 04:44:03 compute-0 nervous_hopper[240025]: {
Oct 11 04:44:03 compute-0 nervous_hopper[240025]:     "235849fc-4683-43e5-9b6a-a0d6f8d1cee8": {
Oct 11 04:44:03 compute-0 nervous_hopper[240025]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:44:03 compute-0 nervous_hopper[240025]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:44:03 compute-0 nervous_hopper[240025]:         "osd_id": 1,
Oct 11 04:44:03 compute-0 nervous_hopper[240025]:         "osd_uuid": "235849fc-4683-43e5-9b6a-a0d6f8d1cee8",
Oct 11 04:44:03 compute-0 nervous_hopper[240025]:         "type": "bluestore"
Oct 11 04:44:03 compute-0 nervous_hopper[240025]:     },
Oct 11 04:44:03 compute-0 nervous_hopper[240025]:     "29ed28f5-c2da-4c6f-bb64-dc7391248f4a": {
Oct 11 04:44:03 compute-0 nervous_hopper[240025]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:44:03 compute-0 nervous_hopper[240025]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:44:03 compute-0 nervous_hopper[240025]:         "osd_id": 0,
Oct 11 04:44:03 compute-0 nervous_hopper[240025]:         "osd_uuid": "29ed28f5-c2da-4c6f-bb64-dc7391248f4a",
Oct 11 04:44:03 compute-0 nervous_hopper[240025]:         "type": "bluestore"
Oct 11 04:44:03 compute-0 nervous_hopper[240025]:     },
Oct 11 04:44:03 compute-0 nervous_hopper[240025]:     "30486846-2cc7-4787-8e36-0fb42ef328c5": {
Oct 11 04:44:03 compute-0 nervous_hopper[240025]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:44:03 compute-0 nervous_hopper[240025]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:44:03 compute-0 nervous_hopper[240025]:         "osd_id": 2,
Oct 11 04:44:03 compute-0 nervous_hopper[240025]:         "osd_uuid": "30486846-2cc7-4787-8e36-0fb42ef328c5",
Oct 11 04:44:03 compute-0 nervous_hopper[240025]:         "type": "bluestore"
Oct 11 04:44:03 compute-0 nervous_hopper[240025]:     }
Oct 11 04:44:03 compute-0 nervous_hopper[240025]: }
Oct 11 04:44:03 compute-0 systemd[1]: libpod-081359ad6a8f652d03f8d7ef28b349b6b38167fcb2433b486f04bc5836252092.scope: Deactivated successfully.
Oct 11 04:44:03 compute-0 podman[240007]: 2025-10-11 04:44:03.4505067 +0000 UTC m=+1.155965807 container died 081359ad6a8f652d03f8d7ef28b349b6b38167fcb2433b486f04bc5836252092 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_hopper, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 11 04:44:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-2c885876748d1fa408c1e922e15d6abfa2bfebdb232b033fd1f80d3313d8979b-merged.mount: Deactivated successfully.
Oct 11 04:44:03 compute-0 podman[240007]: 2025-10-11 04:44:03.516378558 +0000 UTC m=+1.221837615 container remove 081359ad6a8f652d03f8d7ef28b349b6b38167fcb2433b486f04bc5836252092 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_hopper, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:44:03 compute-0 systemd[1]: libpod-conmon-081359ad6a8f652d03f8d7ef28b349b6b38167fcb2433b486f04bc5836252092.scope: Deactivated successfully.
Oct 11 04:44:03 compute-0 sudo[239756]: pam_unix(sudo:session): session closed for user root
Oct 11 04:44:03 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:44:03 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:44:03 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:44:03 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:44:03 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev 8372e2d3-f5bb-494a-883c-037d69b0c5f6 does not exist
Oct 11 04:44:03 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev 7098c3ab-69c9-461d-89a4-41a939133e49 does not exist
Oct 11 04:44:03 compute-0 ceph-mon[74243]: pgmap v648: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:44:03 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:44:03 compute-0 sudo[240274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:44:03 compute-0 sudo[240320]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrijacdbelvschoribemcsxflnehpwqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157842.768137-838-24914221901043/AnsiballZ_stat.py'
Oct 11 04:44:03 compute-0 sudo[240320]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:44:03 compute-0 sudo[240274]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:44:03 compute-0 sudo[240274]: pam_unix(sudo:session): session closed for user root
Oct 11 04:44:03 compute-0 sudo[240325]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 04:44:03 compute-0 sudo[240325]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:44:03 compute-0 sudo[240325]: pam_unix(sudo:session): session closed for user root
Oct 11 04:44:03 compute-0 python3.9[240323]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:44:03 compute-0 sudo[240320]: pam_unix(sudo:session): session closed for user root
Oct 11 04:44:04 compute-0 sudo[240498]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbhoveaykbiafysjhfnkaoxtckvajtyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157843.887553-838-184517563237941/AnsiballZ_copy.py'
Oct 11 04:44:04 compute-0 sudo[240498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:44:04 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v649: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:44:04 compute-0 python3.9[240500]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760157843.887553-838-184517563237941/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:44:04 compute-0 sudo[240498]: pam_unix(sudo:session): session closed for user root
Oct 11 04:44:04 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:44:04 compute-0 sudo[240574]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ssdcqjzglulqsmtyrzwxqhavqsznrtel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157843.887553-838-184517563237941/AnsiballZ_systemd.py'
Oct 11 04:44:04 compute-0 sudo[240574]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:44:04 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:44:05 compute-0 python3.9[240576]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 11 04:44:05 compute-0 systemd[1]: Reloading.
Oct 11 04:44:05 compute-0 systemd-sysv-generator[240605]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:44:05 compute-0 systemd-rc-local-generator[240600]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:44:05 compute-0 ceph-mon[74243]: pgmap v649: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:44:05 compute-0 sudo[240574]: pam_unix(sudo:session): session closed for user root
Oct 11 04:44:05 compute-0 sudo[240685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfofekulsblxhqkrzijcsfanursoiefi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157843.887553-838-184517563237941/AnsiballZ_systemd.py'
Oct 11 04:44:05 compute-0 sudo[240685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:44:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:44:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:44:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:44:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:44:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:44:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:44:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:44:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:44:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:44:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:44:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:44:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:44:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 11 04:44:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:44:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:44:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:44:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:44:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:44:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:44:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:44:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:44:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:44:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 04:44:06 compute-0 python3.9[240687]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:44:06 compute-0 systemd[1]: Reloading.
Oct 11 04:44:06 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v650: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:44:06 compute-0 systemd-rc-local-generator[240717]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:44:06 compute-0 systemd-sysv-generator[240721]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:44:06 compute-0 systemd[1]: Starting multipathd container...
Oct 11 04:44:06 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:44:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d5f30ebed31274341bd8d9ee55af19e2630ee5938ecb16855e41ca0c8d5c8fe/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 11 04:44:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d5f30ebed31274341bd8d9ee55af19e2630ee5938ecb16855e41ca0c8d5c8fe/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 11 04:44:06 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 981a12d55b51d26e636fa4bc8b08d383168dc6afe664f12473554b2b0c589967.
Oct 11 04:44:06 compute-0 podman[240726]: 2025-10-11 04:44:06.925542017 +0000 UTC m=+0.155690875 container init 981a12d55b51d26e636fa4bc8b08d383168dc6afe664f12473554b2b0c589967 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 11 04:44:06 compute-0 multipathd[240741]: + sudo -E kolla_set_configs
Oct 11 04:44:06 compute-0 podman[240726]: 2025-10-11 04:44:06.963256693 +0000 UTC m=+0.193405511 container start 981a12d55b51d26e636fa4bc8b08d383168dc6afe664f12473554b2b0c589967 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 11 04:44:06 compute-0 podman[240726]: multipathd
Oct 11 04:44:06 compute-0 sudo[240747]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 11 04:44:06 compute-0 sudo[240747]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 11 04:44:06 compute-0 sudo[240747]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 11 04:44:06 compute-0 systemd[1]: Started multipathd container.
Oct 11 04:44:07 compute-0 sudo[240685]: pam_unix(sudo:session): session closed for user root
Oct 11 04:44:07 compute-0 multipathd[240741]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 11 04:44:07 compute-0 multipathd[240741]: INFO:__main__:Validating config file
Oct 11 04:44:07 compute-0 multipathd[240741]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 11 04:44:07 compute-0 multipathd[240741]: INFO:__main__:Writing out command to execute
Oct 11 04:44:07 compute-0 sudo[240747]: pam_unix(sudo:session): session closed for user root
Oct 11 04:44:07 compute-0 multipathd[240741]: ++ cat /run_command
Oct 11 04:44:07 compute-0 multipathd[240741]: + CMD='/usr/sbin/multipathd -d'
Oct 11 04:44:07 compute-0 multipathd[240741]: + ARGS=
Oct 11 04:44:07 compute-0 multipathd[240741]: + sudo kolla_copy_cacerts
Oct 11 04:44:07 compute-0 podman[240748]: 2025-10-11 04:44:07.074884541 +0000 UTC m=+0.091106480 container health_status 981a12d55b51d26e636fa4bc8b08d383168dc6afe664f12473554b2b0c589967 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 11 04:44:07 compute-0 sudo[240769]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Oct 11 04:44:07 compute-0 systemd[1]: 981a12d55b51d26e636fa4bc8b08d383168dc6afe664f12473554b2b0c589967-21ea533a06d150a.service: Main process exited, code=exited, status=1/FAILURE
Oct 11 04:44:07 compute-0 systemd[1]: 981a12d55b51d26e636fa4bc8b08d383168dc6afe664f12473554b2b0c589967-21ea533a06d150a.service: Failed with result 'exit-code'.
Oct 11 04:44:07 compute-0 sudo[240769]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 11 04:44:07 compute-0 sudo[240769]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 11 04:44:07 compute-0 sudo[240769]: pam_unix(sudo:session): session closed for user root
Oct 11 04:44:07 compute-0 multipathd[240741]: + [[ ! -n '' ]]
Oct 11 04:44:07 compute-0 multipathd[240741]: + . kolla_extend_start
Oct 11 04:44:07 compute-0 multipathd[240741]: Running command: '/usr/sbin/multipathd -d'
Oct 11 04:44:07 compute-0 multipathd[240741]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Oct 11 04:44:07 compute-0 multipathd[240741]: + umask 0022
Oct 11 04:44:07 compute-0 multipathd[240741]: + exec /usr/sbin/multipathd -d
Oct 11 04:44:07 compute-0 multipathd[240741]: 4027.361651 | --------start up--------
Oct 11 04:44:07 compute-0 multipathd[240741]: 4027.361673 | read /etc/multipath.conf
Oct 11 04:44:07 compute-0 multipathd[240741]: 4027.367911 | path checkers start up
Oct 11 04:44:07 compute-0 ceph-mon[74243]: pgmap v650: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:44:07 compute-0 python3.9[240928]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:44:08 compute-0 sudo[241080]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxdjvlkvxtbcuhteasmmjorozssduwnj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157848.0683317-874-154222938864243/AnsiballZ_command.py'
Oct 11 04:44:08 compute-0 sudo[241080]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:44:08 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v651: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:44:08 compute-0 python3.9[241082]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:44:08 compute-0 sudo[241080]: pam_unix(sudo:session): session closed for user root
Oct 11 04:44:09 compute-0 podman[241220]: 2025-10-11 04:44:09.441213129 +0000 UTC m=+0.081727211 container health_status 4f5a84ad32cb899a70b226fbd9c8f419e9440a5ac99b188805a8bf8e9253a2d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, managed_by=edpm_ansible)
Oct 11 04:44:09 compute-0 sudo[241275]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqoyjtrtgvjnxqhjylsximezhxddrngd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157849.0413222-882-179929598663029/AnsiballZ_systemd.py'
Oct 11 04:44:09 compute-0 sudo[241275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:44:09 compute-0 podman[241215]: 2025-10-11 04:44:09.469357902 +0000 UTC m=+0.117302843 container health_status 086abfbe8dbe9e4742c5d0e8540a7653c1c0a5c00ecda3b6e948040a718938cb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009)
Oct 11 04:44:09 compute-0 ceph-mon[74243]: pgmap v651: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:44:09 compute-0 python3.9[241289]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 11 04:44:09 compute-0 systemd[1]: Stopping multipathd container...
Oct 11 04:44:09 compute-0 multipathd[240741]: 4030.139824 | exit (signal)
Oct 11 04:44:09 compute-0 multipathd[240741]: 4030.139980 | --------shut down-------
Oct 11 04:44:09 compute-0 systemd[1]: libpod-981a12d55b51d26e636fa4bc8b08d383168dc6afe664f12473554b2b0c589967.scope: Deactivated successfully.
Oct 11 04:44:09 compute-0 podman[241295]: 2025-10-11 04:44:09.922565254 +0000 UTC m=+0.069193984 container died 981a12d55b51d26e636fa4bc8b08d383168dc6afe664f12473554b2b0c589967 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251009, tcib_managed=true)
Oct 11 04:44:09 compute-0 systemd[1]: 981a12d55b51d26e636fa4bc8b08d383168dc6afe664f12473554b2b0c589967-21ea533a06d150a.timer: Deactivated successfully.
Oct 11 04:44:09 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 981a12d55b51d26e636fa4bc8b08d383168dc6afe664f12473554b2b0c589967.
Oct 11 04:44:09 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-981a12d55b51d26e636fa4bc8b08d383168dc6afe664f12473554b2b0c589967-userdata-shm.mount: Deactivated successfully.
Oct 11 04:44:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-5d5f30ebed31274341bd8d9ee55af19e2630ee5938ecb16855e41ca0c8d5c8fe-merged.mount: Deactivated successfully.
Oct 11 04:44:09 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:44:10 compute-0 podman[241295]: 2025-10-11 04:44:10.061790641 +0000 UTC m=+0.208419411 container cleanup 981a12d55b51d26e636fa4bc8b08d383168dc6afe664f12473554b2b0c589967 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 11 04:44:10 compute-0 podman[241295]: multipathd
Oct 11 04:44:10 compute-0 podman[241324]: multipathd
Oct 11 04:44:10 compute-0 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Oct 11 04:44:10 compute-0 systemd[1]: Stopped multipathd container.
Oct 11 04:44:10 compute-0 systemd[1]: Starting multipathd container...
Oct 11 04:44:10 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:44:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d5f30ebed31274341bd8d9ee55af19e2630ee5938ecb16855e41ca0c8d5c8fe/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 11 04:44:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d5f30ebed31274341bd8d9ee55af19e2630ee5938ecb16855e41ca0c8d5c8fe/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 11 04:44:10 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 981a12d55b51d26e636fa4bc8b08d383168dc6afe664f12473554b2b0c589967.
Oct 11 04:44:10 compute-0 podman[241337]: 2025-10-11 04:44:10.332758126 +0000 UTC m=+0.136950930 container init 981a12d55b51d26e636fa4bc8b08d383168dc6afe664f12473554b2b0c589967 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 11 04:44:10 compute-0 multipathd[241352]: + sudo -E kolla_set_configs
Oct 11 04:44:10 compute-0 podman[241337]: 2025-10-11 04:44:10.36805359 +0000 UTC m=+0.172246324 container start 981a12d55b51d26e636fa4bc8b08d383168dc6afe664f12473554b2b0c589967 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:44:10 compute-0 podman[241337]: multipathd
Oct 11 04:44:10 compute-0 sudo[241358]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 11 04:44:10 compute-0 sudo[241358]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 11 04:44:10 compute-0 sudo[241358]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 11 04:44:10 compute-0 systemd[1]: Started multipathd container.
Oct 11 04:44:10 compute-0 sudo[241275]: pam_unix(sudo:session): session closed for user root
Oct 11 04:44:10 compute-0 multipathd[241352]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 11 04:44:10 compute-0 multipathd[241352]: INFO:__main__:Validating config file
Oct 11 04:44:10 compute-0 multipathd[241352]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 11 04:44:10 compute-0 multipathd[241352]: INFO:__main__:Writing out command to execute
Oct 11 04:44:10 compute-0 sudo[241358]: pam_unix(sudo:session): session closed for user root
Oct 11 04:44:10 compute-0 multipathd[241352]: ++ cat /run_command
Oct 11 04:44:10 compute-0 multipathd[241352]: + CMD='/usr/sbin/multipathd -d'
Oct 11 04:44:10 compute-0 multipathd[241352]: + ARGS=
Oct 11 04:44:10 compute-0 multipathd[241352]: + sudo kolla_copy_cacerts
Oct 11 04:44:10 compute-0 sudo[241380]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Oct 11 04:44:10 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v652: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:44:10 compute-0 sudo[241380]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 11 04:44:10 compute-0 sudo[241380]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 11 04:44:10 compute-0 sudo[241380]: pam_unix(sudo:session): session closed for user root
Oct 11 04:44:10 compute-0 multipathd[241352]: + [[ ! -n '' ]]
Oct 11 04:44:10 compute-0 multipathd[241352]: + . kolla_extend_start
Oct 11 04:44:10 compute-0 multipathd[241352]: Running command: '/usr/sbin/multipathd -d'
Oct 11 04:44:10 compute-0 multipathd[241352]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Oct 11 04:44:10 compute-0 multipathd[241352]: + umask 0022
Oct 11 04:44:10 compute-0 multipathd[241352]: + exec /usr/sbin/multipathd -d
Oct 11 04:44:10 compute-0 podman[241359]: 2025-10-11 04:44:10.480496609 +0000 UTC m=+0.090559455 container health_status 981a12d55b51d26e636fa4bc8b08d383168dc6afe664f12473554b2b0c589967 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251009, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 04:44:10 compute-0 systemd[1]: 981a12d55b51d26e636fa4bc8b08d383168dc6afe664f12473554b2b0c589967-370e522a05a3e43b.service: Main process exited, code=exited, status=1/FAILURE
Oct 11 04:44:10 compute-0 systemd[1]: 981a12d55b51d26e636fa4bc8b08d383168dc6afe664f12473554b2b0c589967-370e522a05a3e43b.service: Failed with result 'exit-code'.
Oct 11 04:44:10 compute-0 multipathd[241352]: 4030.746452 | --------start up--------
Oct 11 04:44:10 compute-0 multipathd[241352]: 4030.746473 | read /etc/multipath.conf
Oct 11 04:44:10 compute-0 multipathd[241352]: 4030.753034 | path checkers start up
Oct 11 04:44:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:44:11.006 161813 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:44:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:44:11.007 161813 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:44:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:44:11.007 161813 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:44:11 compute-0 sudo[241539]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvxtpgqzacqydlpctvyfigzgkohkyyex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157850.6416566-890-158699616636576/AnsiballZ_file.py'
Oct 11 04:44:11 compute-0 sudo[241539]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:44:11 compute-0 python3.9[241541]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:44:11 compute-0 sudo[241539]: pam_unix(sudo:session): session closed for user root
Oct 11 04:44:11 compute-0 ceph-mon[74243]: pgmap v652: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:44:12 compute-0 sudo[241691]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wodbetxkntxuioelgwtolqiemibshflh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157851.8612752-902-249458108862989/AnsiballZ_file.py'
Oct 11 04:44:12 compute-0 sudo[241691]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:44:12 compute-0 python3.9[241693]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 11 04:44:12 compute-0 sudo[241691]: pam_unix(sudo:session): session closed for user root
Oct 11 04:44:12 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v653: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:44:12 compute-0 sudo[241843]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vogvtmonjxgzhzcvtscastkdjqcptgru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157852.6659389-910-176014544138396/AnsiballZ_modprobe.py'
Oct 11 04:44:12 compute-0 sudo[241843]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:44:13 compute-0 python3.9[241845]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Oct 11 04:44:13 compute-0 kernel: Key type psk registered
Oct 11 04:44:13 compute-0 sudo[241843]: pam_unix(sudo:session): session closed for user root
Oct 11 04:44:13 compute-0 ceph-mon[74243]: pgmap v653: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:44:13 compute-0 sudo[242007]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhjulqwtzkbfsnafgegswynnunwboeiz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157853.5122204-918-201327806798213/AnsiballZ_stat.py'
Oct 11 04:44:13 compute-0 sudo[242007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:44:14 compute-0 python3.9[242009]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:44:14 compute-0 sudo[242007]: pam_unix(sudo:session): session closed for user root
Oct 11 04:44:14 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v654: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:44:14 compute-0 sudo[242130]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwpbwpktpdllsvvnzovcroqecnqkgeqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157853.5122204-918-201327806798213/AnsiballZ_copy.py'
Oct 11 04:44:14 compute-0 sudo[242130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:44:14 compute-0 python3.9[242132]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760157853.5122204-918-201327806798213/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:44:14 compute-0 sudo[242130]: pam_unix(sudo:session): session closed for user root
Oct 11 04:44:14 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:44:15 compute-0 podman[242216]: 2025-10-11 04:44:15.412091078 +0000 UTC m=+0.062609908 container health_status 7d738271425699243ade5f883e5c9759d51b96fd0ce562173990a66d2fbaffdf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team)
Oct 11 04:44:15 compute-0 sudo[242301]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkbkjcdcmunxotxbimkrlznkoszwnnpg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157855.1559646-934-145174939963536/AnsiballZ_lineinfile.py'
Oct 11 04:44:15 compute-0 sudo[242301]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:44:15 compute-0 ceph-mon[74243]: pgmap v654: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:44:15 compute-0 python3.9[242303]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:44:15 compute-0 sudo[242301]: pam_unix(sudo:session): session closed for user root
Oct 11 04:44:16 compute-0 sudo[242453]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ventxnrxentievwywppqrdtmhlgafxdt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157856.0148335-942-27980902988035/AnsiballZ_systemd.py'
Oct 11 04:44:16 compute-0 sudo[242453]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:44:16 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v655: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:44:16 compute-0 python3.9[242455]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 11 04:44:16 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 11 04:44:16 compute-0 systemd[1]: Stopped Load Kernel Modules.
Oct 11 04:44:16 compute-0 systemd[1]: Stopping Load Kernel Modules...
Oct 11 04:44:16 compute-0 systemd[1]: Starting Load Kernel Modules...
Oct 11 04:44:16 compute-0 systemd[1]: Finished Load Kernel Modules.
Oct 11 04:44:16 compute-0 sudo[242453]: pam_unix(sudo:session): session closed for user root
Oct 11 04:44:17 compute-0 sudo[242609]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozwzrmjpcqhedinasfhwjsbptxohtqhz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157857.1419833-950-34968964142611/AnsiballZ_setup.py'
Oct 11 04:44:17 compute-0 sudo[242609]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:44:17 compute-0 ceph-mon[74243]: pgmap v655: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:44:17 compute-0 python3.9[242611]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 11 04:44:18 compute-0 sudo[242609]: pam_unix(sudo:session): session closed for user root
Oct 11 04:44:18 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v656: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:44:18 compute-0 ceph-mon[74243]: pgmap v656: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:44:18 compute-0 sudo[242693]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkroqqexaanwgzjqafogztakulvxanxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157857.1419833-950-34968964142611/AnsiballZ_dnf.py'
Oct 11 04:44:18 compute-0 sudo[242693]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:44:18 compute-0 python3.9[242695]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 11 04:44:19 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:44:20 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v657: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:44:21 compute-0 ceph-mon[74243]: pgmap v657: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:44:22 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v658: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:44:23 compute-0 ceph-mon[74243]: pgmap v658: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:44:24 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v659: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:44:24 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:44:25 compute-0 systemd[1]: Reloading.
Oct 11 04:44:25 compute-0 ceph-mon[74243]: pgmap v659: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:44:25 compute-0 systemd-sysv-generator[242728]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:44:25 compute-0 systemd-rc-local-generator[242722]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:44:25 compute-0 systemd[1]: Reloading.
Oct 11 04:44:25 compute-0 systemd-rc-local-generator[242758]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:44:25 compute-0 systemd-sysv-generator[242766]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:44:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:44:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:44:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:44:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:44:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:44:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:44:26 compute-0 systemd-logind[801]: Watching system buttons on /dev/input/event0 (Power Button)
Oct 11 04:44:26 compute-0 systemd-logind[801]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Oct 11 04:44:26 compute-0 lvm[242808]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Oct 11 04:44:26 compute-0 lvm[242808]: VG ceph_vg1 finished
Oct 11 04:44:26 compute-0 lvm[242809]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Oct 11 04:44:26 compute-0 lvm[242809]: VG ceph_vg2 finished
Oct 11 04:44:26 compute-0 lvm[242810]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 11 04:44:26 compute-0 lvm[242810]: VG ceph_vg0 finished
Oct 11 04:44:26 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v660: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:44:26 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 11 04:44:26 compute-0 systemd[1]: Starting man-db-cache-update.service...
Oct 11 04:44:26 compute-0 systemd[1]: Reloading.
Oct 11 04:44:26 compute-0 systemd-rc-local-generator[242862]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:44:26 compute-0 systemd-sysv-generator[242866]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:44:26 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 11 04:44:27 compute-0 ceph-mon[74243]: pgmap v660: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:44:27 compute-0 sudo[242693]: pam_unix(sudo:session): session closed for user root
Oct 11 04:44:28 compute-0 sudo[244150]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpukydqtwpsghextvukfagyzgxhoyzdr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157867.784488-962-163626462907420/AnsiballZ_file.py'
Oct 11 04:44:28 compute-0 sudo[244150]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:44:28 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 11 04:44:28 compute-0 systemd[1]: Finished man-db-cache-update.service.
Oct 11 04:44:28 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.804s CPU time.
Oct 11 04:44:28 compute-0 systemd[1]: run-r92b36d42c0b1482b8f5e7ecf14733578.service: Deactivated successfully.
Oct 11 04:44:28 compute-0 python3.9[244152]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.iscsid_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:44:28 compute-0 sudo[244150]: pam_unix(sudo:session): session closed for user root
Oct 11 04:44:28 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v661: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:44:29 compute-0 python3.9[244303]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 04:44:29 compute-0 ceph-mon[74243]: pgmap v661: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:44:29 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:44:30 compute-0 sudo[244457]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkkvfagcthjgrnkrxjtdpgoudapqofmq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157869.7151115-980-206795584347356/AnsiballZ_file.py'
Oct 11 04:44:30 compute-0 sudo[244457]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:44:30 compute-0 python3.9[244459]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:44:30 compute-0 sudo[244457]: pam_unix(sudo:session): session closed for user root
Oct 11 04:44:30 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v662: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:44:31 compute-0 sudo[244609]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzqixmvimtdksfrgfasjhgyaryxscuja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157870.8198032-991-192903231913092/AnsiballZ_systemd_service.py'
Oct 11 04:44:31 compute-0 sudo[244609]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:44:31 compute-0 ceph-mon[74243]: pgmap v662: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:44:31 compute-0 python3.9[244611]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 11 04:44:31 compute-0 systemd[1]: Reloading.
Oct 11 04:44:32 compute-0 systemd-rc-local-generator[244637]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:44:32 compute-0 systemd-sysv-generator[244641]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:44:32 compute-0 sudo[244609]: pam_unix(sudo:session): session closed for user root
Oct 11 04:44:32 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v663: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:44:33 compute-0 python3.9[244795]: ansible-ansible.builtin.service_facts Invoked
Oct 11 04:44:33 compute-0 network[244812]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 11 04:44:33 compute-0 network[244813]: 'network-scripts' will be removed from distribution in near future.
Oct 11 04:44:33 compute-0 network[244814]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 11 04:44:33 compute-0 ceph-mon[74243]: pgmap v663: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:44:34 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v664: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:44:34 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:44:35 compute-0 ceph-mon[74243]: pgmap v664: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:44:36 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v665: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:44:37 compute-0 ceph-mon[74243]: pgmap v665: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:44:38 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v666: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:44:39 compute-0 ceph-mon[74243]: pgmap v666: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:44:39 compute-0 sudo[245111]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hezqqiohyjafwtlkhvfhonplyhqkjnsc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157879.4708908-1010-192931509540712/AnsiballZ_systemd_service.py'
Oct 11 04:44:39 compute-0 sudo[245111]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:44:39 compute-0 podman[245065]: 2025-10-11 04:44:39.929211479 +0000 UTC m=+0.098900897 container health_status 4f5a84ad32cb899a70b226fbd9c8f419e9440a5ac99b188805a8bf8e9253a2d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=iscsid, io.buildah.version=1.41.3)
Oct 11 04:44:39 compute-0 podman[245064]: 2025-10-11 04:44:39.974689401 +0000 UTC m=+0.145053616 container health_status 086abfbe8dbe9e4742c5d0e8540a7653c1c0a5c00ecda3b6e948040a718938cb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2)
Oct 11 04:44:39 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:44:40 compute-0 python3.9[245124]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:44:40 compute-0 sudo[245111]: pam_unix(sudo:session): session closed for user root
Oct 11 04:44:40 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v667: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:44:40 compute-0 sudo[245298]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztlmwbcpyqrnmmhxkawfuzmawfzuewib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157880.3737454-1010-36861380269862/AnsiballZ_systemd_service.py'
Oct 11 04:44:40 compute-0 sudo[245298]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:44:40 compute-0 podman[245260]: 2025-10-11 04:44:40.708451291 +0000 UTC m=+0.072821116 container health_status 981a12d55b51d26e636fa4bc8b08d383168dc6afe664f12473554b2b0c589967 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 11 04:44:40 compute-0 python3.9[245309]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:44:41 compute-0 sudo[245298]: pam_unix(sudo:session): session closed for user root
Oct 11 04:44:41 compute-0 sudo[245460]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkfzxdikxtvrsdrblfkjrvsdcfczspjd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157881.128991-1010-20363788101382/AnsiballZ_systemd_service.py'
Oct 11 04:44:41 compute-0 sudo[245460]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:44:41 compute-0 ceph-mon[74243]: pgmap v667: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:44:41 compute-0 python3.9[245462]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:44:41 compute-0 sudo[245460]: pam_unix(sudo:session): session closed for user root
Oct 11 04:44:42 compute-0 sudo[245613]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnzhionhvkodaqznvnblkzumwuoskned ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157881.92952-1010-211701569736986/AnsiballZ_systemd_service.py'
Oct 11 04:44:42 compute-0 sudo[245613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:44:42 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v668: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:44:42 compute-0 python3.9[245615]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:44:42 compute-0 sudo[245613]: pam_unix(sudo:session): session closed for user root
Oct 11 04:44:42 compute-0 ceph-mon[74243]: pgmap v668: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:44:43 compute-0 sudo[245766]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqqbxrpwthozhmkfceybvuoupqsvqwaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157882.7812088-1010-90545289390856/AnsiballZ_systemd_service.py'
Oct 11 04:44:43 compute-0 sudo[245766]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:44:43 compute-0 python3.9[245768]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:44:43 compute-0 sudo[245766]: pam_unix(sudo:session): session closed for user root
Oct 11 04:44:43 compute-0 sudo[245919]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edofvlgcfcbpkotcvtxomtuclwgvhetl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157883.5707793-1010-244069891868633/AnsiballZ_systemd_service.py'
Oct 11 04:44:43 compute-0 sudo[245919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:44:44 compute-0 python3.9[245921]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:44:44 compute-0 sudo[245919]: pam_unix(sudo:session): session closed for user root
Oct 11 04:44:44 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v669: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:44:44 compute-0 sudo[246072]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxbrtvxkttxydtxtrvbknihfaeatufuy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157884.4883487-1010-138895416429218/AnsiballZ_systemd_service.py'
Oct 11 04:44:44 compute-0 sudo[246072]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:44:44 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:44:45 compute-0 python3.9[246074]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:44:45 compute-0 sudo[246072]: pam_unix(sudo:session): session closed for user root
Oct 11 04:44:45 compute-0 ceph-mon[74243]: pgmap v669: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:44:45 compute-0 sudo[246235]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpekhgradrgxsamfdmjyjjbxdajfdmyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157885.4253573-1010-177964917928041/AnsiballZ_systemd_service.py'
Oct 11 04:44:45 compute-0 sudo[246235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:44:45 compute-0 podman[246199]: 2025-10-11 04:44:45.815291607 +0000 UTC m=+0.075515564 container health_status 7d738271425699243ade5f883e5c9759d51b96fd0ce562173990a66d2fbaffdf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 04:44:46 compute-0 python3.9[246239]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:44:46 compute-0 sudo[246235]: pam_unix(sudo:session): session closed for user root
Oct 11 04:44:46 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v670: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:44:46 compute-0 sudo[246398]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmpswvicifybvklgllkjxdteurwyysrq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157886.473296-1069-160933042595541/AnsiballZ_file.py'
Oct 11 04:44:46 compute-0 sudo[246398]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:44:47 compute-0 python3.9[246400]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:44:47 compute-0 sudo[246398]: pam_unix(sudo:session): session closed for user root
Oct 11 04:44:47 compute-0 sudo[246550]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzviwfrajybjedftkuxdinoyyeyizsan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157887.2017257-1069-55984857577403/AnsiballZ_file.py'
Oct 11 04:44:47 compute-0 sudo[246550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:44:47 compute-0 ceph-mon[74243]: pgmap v670: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:44:47 compute-0 python3.9[246552]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:44:47 compute-0 sudo[246550]: pam_unix(sudo:session): session closed for user root
Oct 11 04:44:48 compute-0 sudo[246702]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqcqisbogntbairghrrvvvthbklkkqgp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157887.9200802-1069-228302838894127/AnsiballZ_file.py'
Oct 11 04:44:48 compute-0 sudo[246702]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:44:48 compute-0 python3.9[246704]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:44:48 compute-0 sudo[246702]: pam_unix(sudo:session): session closed for user root
Oct 11 04:44:48 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v671: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:44:48 compute-0 sudo[246854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovwbgzmsjybtienhtawacovbpoafeupu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157888.598109-1069-34285143832579/AnsiballZ_file.py'
Oct 11 04:44:48 compute-0 sudo[246854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:44:49 compute-0 python3.9[246856]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:44:49 compute-0 sudo[246854]: pam_unix(sudo:session): session closed for user root
Oct 11 04:44:49 compute-0 ceph-mon[74243]: pgmap v671: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:44:49 compute-0 sudo[247006]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jpdeeriznozrtklbvrknqwrwolxfvwxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157889.321814-1069-76566042552956/AnsiballZ_file.py'
Oct 11 04:44:49 compute-0 sudo[247006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:44:49 compute-0 python3.9[247008]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:44:49 compute-0 sudo[247006]: pam_unix(sudo:session): session closed for user root
Oct 11 04:44:49 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:44:50 compute-0 sudo[247158]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgklmmtoqdadmrutalkszbptklxtardv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157889.9878201-1069-48653525927608/AnsiballZ_file.py'
Oct 11 04:44:50 compute-0 sudo[247158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:44:50 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v672: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:44:50 compute-0 python3.9[247160]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:44:50 compute-0 sudo[247158]: pam_unix(sudo:session): session closed for user root
Oct 11 04:44:51 compute-0 sudo[247310]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ccegllmzgbqhvuudfemyapxbzwcamriy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157890.6945126-1069-141629921435521/AnsiballZ_file.py'
Oct 11 04:44:51 compute-0 sudo[247310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:44:51 compute-0 python3.9[247312]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:44:51 compute-0 sudo[247310]: pam_unix(sudo:session): session closed for user root
Oct 11 04:44:51 compute-0 ceph-mon[74243]: pgmap v672: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:44:51 compute-0 sudo[247462]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eibnrsvuejrnytsapvaabbcxyygkpwkd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157891.3948398-1069-65963967193062/AnsiballZ_file.py'
Oct 11 04:44:51 compute-0 sudo[247462]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:44:51 compute-0 python3.9[247464]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:44:51 compute-0 sudo[247462]: pam_unix(sudo:session): session closed for user root
Oct 11 04:44:52 compute-0 sudo[247614]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emorhfwgbwmsvfbiyxwhlazdpqooypbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157892.1291595-1126-152800837836407/AnsiballZ_file.py'
Oct 11 04:44:52 compute-0 sudo[247614]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:44:52 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v673: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:44:52 compute-0 python3.9[247616]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:44:52 compute-0 sudo[247614]: pam_unix(sudo:session): session closed for user root
Oct 11 04:44:53 compute-0 sudo[247766]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztjqwyhjatijnadxrickcqfpfaugofcc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157892.7428908-1126-225573766932300/AnsiballZ_file.py'
Oct 11 04:44:53 compute-0 sudo[247766]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:44:53 compute-0 python3.9[247768]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:44:53 compute-0 sudo[247766]: pam_unix(sudo:session): session closed for user root
Oct 11 04:44:53 compute-0 ceph-mon[74243]: pgmap v673: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:44:53 compute-0 sudo[247918]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oinlmzrihihsyfysdakfeekcdmmxvfjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157893.512547-1126-81030015904500/AnsiballZ_file.py'
Oct 11 04:44:53 compute-0 sudo[247918]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:44:53 compute-0 python3.9[247920]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:44:54 compute-0 sudo[247918]: pam_unix(sudo:session): session closed for user root
Oct 11 04:44:54 compute-0 sudo[248070]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wohvmutelciyazckzgffpqnuuychxxyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157894.1338022-1126-97443575337267/AnsiballZ_file.py'
Oct 11 04:44:54 compute-0 sudo[248070]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:44:54 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v674: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:44:54 compute-0 python3.9[248072]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:44:54 compute-0 sudo[248070]: pam_unix(sudo:session): session closed for user root
Oct 11 04:44:54 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:44:55 compute-0 sudo[248222]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ouvvkctyxfwxfcuaofrzhogwnjbisaeo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157894.8898373-1126-77135350797888/AnsiballZ_file.py'
Oct 11 04:44:55 compute-0 sudo[248222]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:44:55 compute-0 python3.9[248224]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:44:55 compute-0 sudo[248222]: pam_unix(sudo:session): session closed for user root
Oct 11 04:44:55 compute-0 ceph-mon[74243]: pgmap v674: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:44:56 compute-0 sudo[248374]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvxqoegrksteahppnyxzdnulgdyspnky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157895.6555617-1126-21624016024309/AnsiballZ_file.py'
Oct 11 04:44:56 compute-0 sudo[248374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:44:56 compute-0 ceph-mgr[74542]: [balancer INFO root] Optimize plan auto_2025-10-11_04:44:56
Oct 11 04:44:56 compute-0 ceph-mgr[74542]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:44:56 compute-0 ceph-mgr[74542]: [balancer INFO root] do_upmap
Oct 11 04:44:56 compute-0 ceph-mgr[74542]: [balancer INFO root] pools ['default.rgw.log', 'volumes', 'default.rgw.meta', 'default.rgw.control', 'vms', '.rgw.root', 'images', 'cephfs.cephfs.meta', 'backups', '.mgr', 'cephfs.cephfs.data']
Oct 11 04:44:56 compute-0 ceph-mgr[74542]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:44:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:44:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:44:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:44:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:44:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:44:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:44:56 compute-0 python3.9[248376]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:44:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:44:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:44:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:44:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:44:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:44:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:44:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:44:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:44:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:44:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:44:56 compute-0 sudo[248374]: pam_unix(sudo:session): session closed for user root
Oct 11 04:44:56 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v675: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:44:56 compute-0 sudo[248526]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hawtoclkenmekwyrfyyavrbrhwvnnxge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157896.4313624-1126-177829559615513/AnsiballZ_file.py'
Oct 11 04:44:56 compute-0 sudo[248526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:44:57 compute-0 python3.9[248528]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:44:57 compute-0 sudo[248526]: pam_unix(sudo:session): session closed for user root
Oct 11 04:44:57 compute-0 sudo[248678]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gddnnqafoielwynmajyrbockijedgfcd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157897.2360015-1126-184344082428143/AnsiballZ_file.py'
Oct 11 04:44:57 compute-0 ceph-mon[74243]: pgmap v675: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:44:57 compute-0 sudo[248678]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:44:57 compute-0 python3.9[248680]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:44:57 compute-0 sudo[248678]: pam_unix(sudo:session): session closed for user root
Oct 11 04:44:58 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v676: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:44:58 compute-0 sudo[248830]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjlbfvfjpiogwnnavngwufkujzgfqglr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157898.179562-1184-75551650863836/AnsiballZ_command.py'
Oct 11 04:44:58 compute-0 sudo[248830]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:44:58 compute-0 python3.9[248832]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:44:58 compute-0 sudo[248830]: pam_unix(sudo:session): session closed for user root
Oct 11 04:44:59 compute-0 ceph-mon[74243]: pgmap v676: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:44:59 compute-0 python3.9[248984]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 11 04:44:59 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:44:59 compute-0 ceph-mon[74243]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Oct 11 04:44:59 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:44:59.990039) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 04:44:59 compute-0 ceph-mon[74243]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Oct 11 04:44:59 compute-0 ceph-mon[74243]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760157899990128, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 1358, "num_deletes": 506, "total_data_size": 1666083, "memory_usage": 1696672, "flush_reason": "Manual Compaction"}
Oct 11 04:44:59 compute-0 ceph-mon[74243]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Oct 11 04:45:00 compute-0 ceph-mon[74243]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760157900002512, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 1650182, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13439, "largest_seqno": 14796, "table_properties": {"data_size": 1644183, "index_size": 2818, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 14848, "raw_average_key_size": 17, "raw_value_size": 1630314, "raw_average_value_size": 1973, "num_data_blocks": 129, "num_entries": 826, "num_filter_entries": 826, "num_deletions": 506, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760157785, "oldest_key_time": 1760157785, "file_creation_time": 1760157899, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7a520806-b9ee-4391-a2e1-17ca2b78e946", "db_session_id": "LQX577T3ABHDCRGGY4EM", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:45:00 compute-0 ceph-mon[74243]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 12541 microseconds, and 8293 cpu microseconds.
Oct 11 04:45:00 compute-0 ceph-mon[74243]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 04:45:00 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:45:00.002589) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 1650182 bytes OK
Oct 11 04:45:00 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:45:00.002615) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Oct 11 04:45:00 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:45:00.004572) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Oct 11 04:45:00 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:45:00.004593) EVENT_LOG_v1 {"time_micros": 1760157900004586, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 04:45:00 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:45:00.004617) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 04:45:00 compute-0 ceph-mon[74243]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 1658975, prev total WAL file size 1658975, number of live WAL files 2.
Oct 11 04:45:00 compute-0 ceph-mon[74243]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:45:00 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:45:00.005603) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323532' seq:0, type:0; will stop at (end)
Oct 11 04:45:00 compute-0 ceph-mon[74243]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 04:45:00 compute-0 ceph-mon[74243]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(1611KB)], [32(7372KB)]
Oct 11 04:45:00 compute-0 ceph-mon[74243]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760157900005660, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 9199356, "oldest_snapshot_seqno": -1}
Oct 11 04:45:00 compute-0 ceph-mon[74243]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 3769 keys, 7225071 bytes, temperature: kUnknown
Oct 11 04:45:00 compute-0 ceph-mon[74243]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760157900064114, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 7225071, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7197989, "index_size": 16540, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9477, "raw_key_size": 92353, "raw_average_key_size": 24, "raw_value_size": 7127846, "raw_average_value_size": 1891, "num_data_blocks": 702, "num_entries": 3769, "num_filter_entries": 3769, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760156706, "oldest_key_time": 0, "file_creation_time": 1760157900, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7a520806-b9ee-4391-a2e1-17ca2b78e946", "db_session_id": "LQX577T3ABHDCRGGY4EM", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:45:00 compute-0 ceph-mon[74243]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 04:45:00 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:45:00.064673) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 7225071 bytes
Oct 11 04:45:00 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:45:00.066694) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 157.0 rd, 123.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 7.2 +0.0 blob) out(6.9 +0.0 blob), read-write-amplify(10.0) write-amplify(4.4) OK, records in: 4794, records dropped: 1025 output_compression: NoCompression
Oct 11 04:45:00 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:45:00.066729) EVENT_LOG_v1 {"time_micros": 1760157900066711, "job": 14, "event": "compaction_finished", "compaction_time_micros": 58610, "compaction_time_cpu_micros": 32092, "output_level": 6, "num_output_files": 1, "total_output_size": 7225071, "num_input_records": 4794, "num_output_records": 3769, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 04:45:00 compute-0 ceph-mon[74243]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:45:00 compute-0 ceph-mon[74243]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760157900067836, "job": 14, "event": "table_file_deletion", "file_number": 34}
Oct 11 04:45:00 compute-0 ceph-mon[74243]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:45:00 compute-0 ceph-mon[74243]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760157900071599, "job": 14, "event": "table_file_deletion", "file_number": 32}
Oct 11 04:45:00 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:45:00.005486) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:45:00 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:45:00.071751) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:45:00 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:45:00.071760) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:45:00 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:45:00.071762) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:45:00 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:45:00.071766) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:45:00 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:45:00.071769) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:45:00 compute-0 sudo[249134]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzxcdinhvidfhfotqcudgyuatgmhowzg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157900.0515378-1202-135237441977373/AnsiballZ_systemd_service.py'
Oct 11 04:45:00 compute-0 sudo[249134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:45:00 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v677: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:45:00 compute-0 unix_chkpwd[249137]: password check failed for user (root)
Oct 11 04:45:00 compute-0 sshd-session[234783]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=221.159.21.170  user=root
Oct 11 04:45:00 compute-0 python3.9[249136]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 11 04:45:00 compute-0 systemd[1]: Reloading.
Oct 11 04:45:00 compute-0 systemd-rc-local-generator[249165]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:45:00 compute-0 systemd-sysv-generator[249168]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:45:01 compute-0 ceph-mon[74243]: pgmap v677: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:45:01 compute-0 sudo[249134]: pam_unix(sudo:session): session closed for user root
Oct 11 04:45:01 compute-0 sudo[249322]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efzfzngrqdcdcvsywfbdgxcugkzejqwz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157901.4104638-1210-121845669349991/AnsiballZ_command.py'
Oct 11 04:45:01 compute-0 sudo[249322]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:45:01 compute-0 python3.9[249324]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:45:01 compute-0 sudo[249322]: pam_unix(sudo:session): session closed for user root
Oct 11 04:45:02 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v678: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:45:02 compute-0 sudo[249475]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-refpnmyundrcpxheabugclhssfxrqiio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157902.162359-1210-60717805253284/AnsiballZ_command.py'
Oct 11 04:45:02 compute-0 sudo[249475]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:45:02 compute-0 python3.9[249477]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:45:02 compute-0 sshd-session[234783]: Failed password for root from 221.159.21.170 port 56456 ssh2
Oct 11 04:45:02 compute-0 sudo[249475]: pam_unix(sudo:session): session closed for user root
Oct 11 04:45:03 compute-0 sudo[249628]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbrajrgmnkumatbhlmlcgckplojlaecu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157902.903527-1210-150569467366604/AnsiballZ_command.py'
Oct 11 04:45:03 compute-0 sudo[249628]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:45:03 compute-0 python3.9[249630]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:45:03 compute-0 sudo[249628]: pam_unix(sudo:session): session closed for user root
Oct 11 04:45:03 compute-0 ceph-mon[74243]: pgmap v678: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:45:03 compute-0 sudo[249755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:45:03 compute-0 sudo[249755]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:45:03 compute-0 sudo[249755]: pam_unix(sudo:session): session closed for user root
Oct 11 04:45:03 compute-0 sudo[249806]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrgsepfattraetjldmejvkzpapxrvgox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157903.4866016-1210-30016336209039/AnsiballZ_command.py'
Oct 11 04:45:03 compute-0 sudo[249806]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:45:03 compute-0 sudo[249807]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:45:03 compute-0 sudo[249807]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:45:03 compute-0 sudo[249807]: pam_unix(sudo:session): session closed for user root
Oct 11 04:45:03 compute-0 sudo[249834]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:45:03 compute-0 sudo[249834]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:45:03 compute-0 sudo[249834]: pam_unix(sudo:session): session closed for user root
Oct 11 04:45:03 compute-0 sudo[249859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 04:45:03 compute-0 sudo[249859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:45:03 compute-0 python3.9[249815]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:45:03 compute-0 sudo[249806]: pam_unix(sudo:session): session closed for user root
Oct 11 04:45:04 compute-0 sudo[250055]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phdynjqfmbvsscqwskbjyayugonswfxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157904.079447-1210-258994505367370/AnsiballZ_command.py'
Oct 11 04:45:04 compute-0 sudo[250055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:45:04 compute-0 sudo[249859]: pam_unix(sudo:session): session closed for user root
Oct 11 04:45:04 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:45:04 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:45:04 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:45:04 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:45:04 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:45:04 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:45:04 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev 7444f506-43e4-49df-a298-15c0f1423c9c does not exist
Oct 11 04:45:04 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev 11859043-0f4e-44db-b6b1-4ba7fdb0a4a7 does not exist
Oct 11 04:45:04 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev e60bf634-d50b-4d7d-8369-c1cebdff6dba does not exist
Oct 11 04:45:04 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:45:04 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:45:04 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:45:04 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:45:04 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:45:04 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:45:04 compute-0 sudo[250068]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:45:04 compute-0 sudo[250068]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:45:04 compute-0 sudo[250068]: pam_unix(sudo:session): session closed for user root
Oct 11 04:45:04 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v679: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:45:04 compute-0 python3.9[250064]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:45:04 compute-0 sudo[250093]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:45:04 compute-0 sudo[250093]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:45:04 compute-0 sudo[250093]: pam_unix(sudo:session): session closed for user root
Oct 11 04:45:04 compute-0 sudo[250055]: pam_unix(sudo:session): session closed for user root
Oct 11 04:45:04 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:45:04 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:45:04 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:45:04 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:45:04 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:45:04 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:45:04 compute-0 sudo[250119]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:45:04 compute-0 sudo[250119]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:45:04 compute-0 sudo[250119]: pam_unix(sudo:session): session closed for user root
Oct 11 04:45:04 compute-0 sudo[250156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 04:45:04 compute-0 sudo[250156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:45:04 compute-0 sudo[250372]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plaispzqdmymmjqvdhgbkqsphblbjmqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157904.6718905-1210-73026907372778/AnsiballZ_command.py'
Oct 11 04:45:04 compute-0 sudo[250372]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:45:04 compute-0 podman[250344]: 2025-10-11 04:45:04.97723762 +0000 UTC m=+0.040176539 container create bdd26addb58901f13084e15f2108de53c4e521333d5e1f6bac3f9ed8504fbf45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_chatterjee, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:45:04 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:45:05 compute-0 sshd-session[234783]: Connection closed by authenticating user root 221.159.21.170 port 56456 [preauth]
Oct 11 04:45:05 compute-0 systemd[1]: Started libpod-conmon-bdd26addb58901f13084e15f2108de53c4e521333d5e1f6bac3f9ed8504fbf45.scope.
Oct 11 04:45:05 compute-0 podman[250344]: 2025-10-11 04:45:04.958615398 +0000 UTC m=+0.021554327 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:45:05 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:45:05 compute-0 podman[250344]: 2025-10-11 04:45:05.091009402 +0000 UTC m=+0.153948341 container init bdd26addb58901f13084e15f2108de53c4e521333d5e1f6bac3f9ed8504fbf45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_chatterjee, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:45:05 compute-0 podman[250344]: 2025-10-11 04:45:05.098573484 +0000 UTC m=+0.161512373 container start bdd26addb58901f13084e15f2108de53c4e521333d5e1f6bac3f9ed8504fbf45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_chatterjee, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:45:05 compute-0 podman[250344]: 2025-10-11 04:45:05.101825106 +0000 UTC m=+0.164763995 container attach bdd26addb58901f13084e15f2108de53c4e521333d5e1f6bac3f9ed8504fbf45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_chatterjee, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:45:05 compute-0 eager_chatterjee[250378]: 167 167
Oct 11 04:45:05 compute-0 systemd[1]: libpod-bdd26addb58901f13084e15f2108de53c4e521333d5e1f6bac3f9ed8504fbf45.scope: Deactivated successfully.
Oct 11 04:45:05 compute-0 podman[250344]: 2025-10-11 04:45:05.106767152 +0000 UTC m=+0.169706061 container died bdd26addb58901f13084e15f2108de53c4e521333d5e1f6bac3f9ed8504fbf45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_chatterjee, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 11 04:45:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-fbaccab673dd605b652957974143bffe167db0f57df1061427ca69ed31f467aa-merged.mount: Deactivated successfully.
Oct 11 04:45:05 compute-0 podman[250344]: 2025-10-11 04:45:05.163257663 +0000 UTC m=+0.226196542 container remove bdd26addb58901f13084e15f2108de53c4e521333d5e1f6bac3f9ed8504fbf45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_chatterjee, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:45:05 compute-0 systemd[1]: libpod-conmon-bdd26addb58901f13084e15f2108de53c4e521333d5e1f6bac3f9ed8504fbf45.scope: Deactivated successfully.
Oct 11 04:45:05 compute-0 python3.9[250375]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:45:05 compute-0 sudo[250372]: pam_unix(sudo:session): session closed for user root
Oct 11 04:45:05 compute-0 podman[250428]: 2025-10-11 04:45:05.358310764 +0000 UTC m=+0.045292538 container create 6dd0b5bbccb7ee4532a6469aca7e943fdf98827163d74e8764df7906040c48f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_merkle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:45:05 compute-0 systemd[1]: Started libpod-conmon-6dd0b5bbccb7ee4532a6469aca7e943fdf98827163d74e8764df7906040c48f2.scope.
Oct 11 04:45:05 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:45:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f54946c133fe494e253a75999e774a66eed4d6fab5d1ec290f152437d6c4bec/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:45:05 compute-0 podman[250428]: 2025-10-11 04:45:05.33799768 +0000 UTC m=+0.024979464 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:45:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f54946c133fe494e253a75999e774a66eed4d6fab5d1ec290f152437d6c4bec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:45:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f54946c133fe494e253a75999e774a66eed4d6fab5d1ec290f152437d6c4bec/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:45:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f54946c133fe494e253a75999e774a66eed4d6fab5d1ec290f152437d6c4bec/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:45:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f54946c133fe494e253a75999e774a66eed4d6fab5d1ec290f152437d6c4bec/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:45:05 compute-0 podman[250428]: 2025-10-11 04:45:05.452435119 +0000 UTC m=+0.139416923 container init 6dd0b5bbccb7ee4532a6469aca7e943fdf98827163d74e8764df7906040c48f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_merkle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 11 04:45:05 compute-0 podman[250428]: 2025-10-11 04:45:05.463880389 +0000 UTC m=+0.150862193 container start 6dd0b5bbccb7ee4532a6469aca7e943fdf98827163d74e8764df7906040c48f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_merkle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:45:05 compute-0 podman[250428]: 2025-10-11 04:45:05.467575292 +0000 UTC m=+0.154557066 container attach 6dd0b5bbccb7ee4532a6469aca7e943fdf98827163d74e8764df7906040c48f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 11 04:45:05 compute-0 ceph-mon[74243]: pgmap v679: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:45:05 compute-0 sudo[250574]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozwmskhnscnmrzsumdfqipyevzjqbtrk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157905.3786767-1210-271815318028410/AnsiballZ_command.py'
Oct 11 04:45:05 compute-0 sudo[250574]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:45:05 compute-0 python3.9[250576]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:45:05 compute-0 sudo[250574]: pam_unix(sudo:session): session closed for user root
Oct 11 04:45:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:45:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:45:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:45:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:45:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:45:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:45:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:45:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:45:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:45:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:45:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:45:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:45:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 11 04:45:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:45:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:45:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:45:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:45:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:45:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:45:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:45:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:45:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:45:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 04:45:06 compute-0 sudo[250743]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slhplsfbwvshumquxtndzqklhkuuppos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157906.066176-1210-132527522209799/AnsiballZ_command.py'
Oct 11 04:45:06 compute-0 sudo[250743]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:45:06 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v680: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:45:06 compute-0 admiring_merkle[250466]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:45:06 compute-0 admiring_merkle[250466]: --> relative data size: 1.0
Oct 11 04:45:06 compute-0 admiring_merkle[250466]: --> All data devices are unavailable
Oct 11 04:45:06 compute-0 systemd[1]: libpod-6dd0b5bbccb7ee4532a6469aca7e943fdf98827163d74e8764df7906040c48f2.scope: Deactivated successfully.
Oct 11 04:45:06 compute-0 systemd[1]: libpod-6dd0b5bbccb7ee4532a6469aca7e943fdf98827163d74e8764df7906040c48f2.scope: Consumed 1.065s CPU time.
Oct 11 04:45:06 compute-0 podman[250428]: 2025-10-11 04:45:06.602454633 +0000 UTC m=+1.289436437 container died 6dd0b5bbccb7ee4532a6469aca7e943fdf98827163d74e8764df7906040c48f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_merkle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 11 04:45:06 compute-0 python3.9[250745]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:45:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-6f54946c133fe494e253a75999e774a66eed4d6fab5d1ec290f152437d6c4bec-merged.mount: Deactivated successfully.
Oct 11 04:45:06 compute-0 podman[250428]: 2025-10-11 04:45:06.667757357 +0000 UTC m=+1.354739121 container remove 6dd0b5bbccb7ee4532a6469aca7e943fdf98827163d74e8764df7906040c48f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_merkle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:45:06 compute-0 sudo[250743]: pam_unix(sudo:session): session closed for user root
Oct 11 04:45:06 compute-0 systemd[1]: libpod-conmon-6dd0b5bbccb7ee4532a6469aca7e943fdf98827163d74e8764df7906040c48f2.scope: Deactivated successfully.
Oct 11 04:45:06 compute-0 sudo[250156]: pam_unix(sudo:session): session closed for user root
Oct 11 04:45:06 compute-0 sudo[250787]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:45:06 compute-0 sudo[250787]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:45:06 compute-0 sudo[250787]: pam_unix(sudo:session): session closed for user root
Oct 11 04:45:06 compute-0 sudo[250818]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:45:06 compute-0 sudo[250818]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:45:06 compute-0 sudo[250818]: pam_unix(sudo:session): session closed for user root
Oct 11 04:45:06 compute-0 sudo[250843]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:45:06 compute-0 sudo[250843]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:45:06 compute-0 sudo[250843]: pam_unix(sudo:session): session closed for user root
Oct 11 04:45:07 compute-0 sudo[250868]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -- lvm list --format json
Oct 11 04:45:07 compute-0 sudo[250868]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:45:07 compute-0 podman[250933]: 2025-10-11 04:45:07.431981459 +0000 UTC m=+0.050068570 container create 9bedfe705df6ecd8b26cc491919f543e1f6b9226943c917f5a4a0e2a43e5b646 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_carver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:45:07 compute-0 systemd[1]: Started libpod-conmon-9bedfe705df6ecd8b26cc491919f543e1f6b9226943c917f5a4a0e2a43e5b646.scope.
Oct 11 04:45:07 compute-0 podman[250933]: 2025-10-11 04:45:07.405403385 +0000 UTC m=+0.023490546 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:45:07 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:45:07 compute-0 podman[250933]: 2025-10-11 04:45:07.546324695 +0000 UTC m=+0.164411846 container init 9bedfe705df6ecd8b26cc491919f543e1f6b9226943c917f5a4a0e2a43e5b646 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 11 04:45:07 compute-0 podman[250933]: 2025-10-11 04:45:07.557061707 +0000 UTC m=+0.175148808 container start 9bedfe705df6ecd8b26cc491919f543e1f6b9226943c917f5a4a0e2a43e5b646 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_carver, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 11 04:45:07 compute-0 upbeat_carver[250950]: 167 167
Oct 11 04:45:07 compute-0 podman[250933]: 2025-10-11 04:45:07.561690735 +0000 UTC m=+0.179777896 container attach 9bedfe705df6ecd8b26cc491919f543e1f6b9226943c917f5a4a0e2a43e5b646 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_carver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:45:07 compute-0 systemd[1]: libpod-9bedfe705df6ecd8b26cc491919f543e1f6b9226943c917f5a4a0e2a43e5b646.scope: Deactivated successfully.
Oct 11 04:45:07 compute-0 podman[250933]: 2025-10-11 04:45:07.563234994 +0000 UTC m=+0.181322125 container died 9bedfe705df6ecd8b26cc491919f543e1f6b9226943c917f5a4a0e2a43e5b646 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_carver, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:45:07 compute-0 ceph-mon[74243]: pgmap v680: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:45:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-c08a072c71395ae7838e0d0e5430514db03ebc776b907d9c2479044d490b1ed9-merged.mount: Deactivated successfully.
Oct 11 04:45:07 compute-0 podman[250933]: 2025-10-11 04:45:07.613934218 +0000 UTC m=+0.232021319 container remove 9bedfe705df6ecd8b26cc491919f543e1f6b9226943c917f5a4a0e2a43e5b646 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_carver, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:45:07 compute-0 systemd[1]: libpod-conmon-9bedfe705df6ecd8b26cc491919f543e1f6b9226943c917f5a4a0e2a43e5b646.scope: Deactivated successfully.
Oct 11 04:45:07 compute-0 podman[251047]: 2025-10-11 04:45:07.782307964 +0000 UTC m=+0.036883176 container create 601008f10c1f7b4336ecb04deb0c0c4f2fc3610f877917e63c80195ebd24da48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_banach, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 04:45:07 compute-0 systemd[1]: Started libpod-conmon-601008f10c1f7b4336ecb04deb0c0c4f2fc3610f877917e63c80195ebd24da48.scope.
Oct 11 04:45:07 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:45:07 compute-0 podman[251047]: 2025-10-11 04:45:07.766447022 +0000 UTC m=+0.021022254 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:45:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b08e0b411e3cb68fd8b5e33f463530c9e00c5c6bd43e5f058c8624485c9caa5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:45:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b08e0b411e3cb68fd8b5e33f463530c9e00c5c6bd43e5f058c8624485c9caa5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:45:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b08e0b411e3cb68fd8b5e33f463530c9e00c5c6bd43e5f058c8624485c9caa5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:45:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b08e0b411e3cb68fd8b5e33f463530c9e00c5c6bd43e5f058c8624485c9caa5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:45:07 compute-0 podman[251047]: 2025-10-11 04:45:07.883685942 +0000 UTC m=+0.138261224 container init 601008f10c1f7b4336ecb04deb0c0c4f2fc3610f877917e63c80195ebd24da48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_banach, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:45:07 compute-0 podman[251047]: 2025-10-11 04:45:07.89542289 +0000 UTC m=+0.149998142 container start 601008f10c1f7b4336ecb04deb0c0c4f2fc3610f877917e63c80195ebd24da48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_banach, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 11 04:45:07 compute-0 podman[251047]: 2025-10-11 04:45:07.899740719 +0000 UTC m=+0.154315941 container attach 601008f10c1f7b4336ecb04deb0c0c4f2fc3610f877917e63c80195ebd24da48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_banach, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:45:07 compute-0 sudo[251118]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oniajdaudhglchfieuasezthtdpbgunc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157907.5416205-1289-165619913497658/AnsiballZ_file.py'
Oct 11 04:45:07 compute-0 sudo[251118]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:45:08 compute-0 python3.9[251121]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:45:08 compute-0 sudo[251118]: pam_unix(sudo:session): session closed for user root
Oct 11 04:45:08 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v681: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:45:08 compute-0 silly_banach[251088]: {
Oct 11 04:45:08 compute-0 silly_banach[251088]:     "0": [
Oct 11 04:45:08 compute-0 silly_banach[251088]:         {
Oct 11 04:45:08 compute-0 silly_banach[251088]:             "devices": [
Oct 11 04:45:08 compute-0 silly_banach[251088]:                 "/dev/loop3"
Oct 11 04:45:08 compute-0 silly_banach[251088]:             ],
Oct 11 04:45:08 compute-0 silly_banach[251088]:             "lv_name": "ceph_lv0",
Oct 11 04:45:08 compute-0 silly_banach[251088]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:45:08 compute-0 silly_banach[251088]:             "lv_size": "21470642176",
Oct 11 04:45:08 compute-0 silly_banach[251088]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29ed28f5-c2da-4c6f-bb64-dc7391248f4a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:45:08 compute-0 silly_banach[251088]:             "lv_uuid": "SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf",
Oct 11 04:45:08 compute-0 silly_banach[251088]:             "name": "ceph_lv0",
Oct 11 04:45:08 compute-0 silly_banach[251088]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:45:08 compute-0 silly_banach[251088]:             "tags": {
Oct 11 04:45:08 compute-0 silly_banach[251088]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:45:08 compute-0 silly_banach[251088]:                 "ceph.block_uuid": "SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf",
Oct 11 04:45:08 compute-0 silly_banach[251088]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:45:08 compute-0 silly_banach[251088]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:45:08 compute-0 silly_banach[251088]:                 "ceph.cluster_name": "ceph",
Oct 11 04:45:08 compute-0 silly_banach[251088]:                 "ceph.crush_device_class": "",
Oct 11 04:45:08 compute-0 silly_banach[251088]:                 "ceph.encrypted": "0",
Oct 11 04:45:08 compute-0 silly_banach[251088]:                 "ceph.osd_fsid": "29ed28f5-c2da-4c6f-bb64-dc7391248f4a",
Oct 11 04:45:08 compute-0 silly_banach[251088]:                 "ceph.osd_id": "0",
Oct 11 04:45:08 compute-0 silly_banach[251088]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:45:08 compute-0 silly_banach[251088]:                 "ceph.type": "block",
Oct 11 04:45:08 compute-0 silly_banach[251088]:                 "ceph.vdo": "0"
Oct 11 04:45:08 compute-0 silly_banach[251088]:             },
Oct 11 04:45:08 compute-0 silly_banach[251088]:             "type": "block",
Oct 11 04:45:08 compute-0 silly_banach[251088]:             "vg_name": "ceph_vg0"
Oct 11 04:45:08 compute-0 silly_banach[251088]:         }
Oct 11 04:45:08 compute-0 silly_banach[251088]:     ],
Oct 11 04:45:08 compute-0 silly_banach[251088]:     "1": [
Oct 11 04:45:08 compute-0 silly_banach[251088]:         {
Oct 11 04:45:08 compute-0 silly_banach[251088]:             "devices": [
Oct 11 04:45:08 compute-0 silly_banach[251088]:                 "/dev/loop4"
Oct 11 04:45:08 compute-0 silly_banach[251088]:             ],
Oct 11 04:45:08 compute-0 silly_banach[251088]:             "lv_name": "ceph_lv1",
Oct 11 04:45:08 compute-0 silly_banach[251088]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:45:08 compute-0 silly_banach[251088]:             "lv_size": "21470642176",
Oct 11 04:45:08 compute-0 silly_banach[251088]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=235849fc-4683-43e5-9b6a-a0d6f8d1cee8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:45:08 compute-0 silly_banach[251088]:             "lv_uuid": "N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol",
Oct 11 04:45:08 compute-0 silly_banach[251088]:             "name": "ceph_lv1",
Oct 11 04:45:08 compute-0 silly_banach[251088]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:45:08 compute-0 silly_banach[251088]:             "tags": {
Oct 11 04:45:08 compute-0 silly_banach[251088]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:45:08 compute-0 silly_banach[251088]:                 "ceph.block_uuid": "N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol",
Oct 11 04:45:08 compute-0 silly_banach[251088]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:45:08 compute-0 silly_banach[251088]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:45:08 compute-0 silly_banach[251088]:                 "ceph.cluster_name": "ceph",
Oct 11 04:45:08 compute-0 silly_banach[251088]:                 "ceph.crush_device_class": "",
Oct 11 04:45:08 compute-0 silly_banach[251088]:                 "ceph.encrypted": "0",
Oct 11 04:45:08 compute-0 silly_banach[251088]:                 "ceph.osd_fsid": "235849fc-4683-43e5-9b6a-a0d6f8d1cee8",
Oct 11 04:45:08 compute-0 silly_banach[251088]:                 "ceph.osd_id": "1",
Oct 11 04:45:08 compute-0 silly_banach[251088]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:45:08 compute-0 silly_banach[251088]:                 "ceph.type": "block",
Oct 11 04:45:08 compute-0 silly_banach[251088]:                 "ceph.vdo": "0"
Oct 11 04:45:08 compute-0 silly_banach[251088]:             },
Oct 11 04:45:08 compute-0 silly_banach[251088]:             "type": "block",
Oct 11 04:45:08 compute-0 silly_banach[251088]:             "vg_name": "ceph_vg1"
Oct 11 04:45:08 compute-0 silly_banach[251088]:         }
Oct 11 04:45:08 compute-0 silly_banach[251088]:     ],
Oct 11 04:45:08 compute-0 silly_banach[251088]:     "2": [
Oct 11 04:45:08 compute-0 silly_banach[251088]:         {
Oct 11 04:45:08 compute-0 silly_banach[251088]:             "devices": [
Oct 11 04:45:08 compute-0 silly_banach[251088]:                 "/dev/loop5"
Oct 11 04:45:08 compute-0 silly_banach[251088]:             ],
Oct 11 04:45:08 compute-0 silly_banach[251088]:             "lv_name": "ceph_lv2",
Oct 11 04:45:08 compute-0 silly_banach[251088]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:45:08 compute-0 silly_banach[251088]:             "lv_size": "21470642176",
Oct 11 04:45:08 compute-0 silly_banach[251088]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=30486846-2cc7-4787-8e36-0fb42ef328c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:45:08 compute-0 silly_banach[251088]:             "lv_uuid": "HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a",
Oct 11 04:45:08 compute-0 silly_banach[251088]:             "name": "ceph_lv2",
Oct 11 04:45:08 compute-0 silly_banach[251088]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:45:08 compute-0 silly_banach[251088]:             "tags": {
Oct 11 04:45:08 compute-0 silly_banach[251088]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:45:08 compute-0 silly_banach[251088]:                 "ceph.block_uuid": "HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a",
Oct 11 04:45:08 compute-0 silly_banach[251088]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:45:08 compute-0 silly_banach[251088]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:45:08 compute-0 silly_banach[251088]:                 "ceph.cluster_name": "ceph",
Oct 11 04:45:08 compute-0 silly_banach[251088]:                 "ceph.crush_device_class": "",
Oct 11 04:45:08 compute-0 silly_banach[251088]:                 "ceph.encrypted": "0",
Oct 11 04:45:08 compute-0 silly_banach[251088]:                 "ceph.osd_fsid": "30486846-2cc7-4787-8e36-0fb42ef328c5",
Oct 11 04:45:08 compute-0 silly_banach[251088]:                 "ceph.osd_id": "2",
Oct 11 04:45:08 compute-0 silly_banach[251088]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:45:08 compute-0 silly_banach[251088]:                 "ceph.type": "block",
Oct 11 04:45:08 compute-0 silly_banach[251088]:                 "ceph.vdo": "0"
Oct 11 04:45:08 compute-0 silly_banach[251088]:             },
Oct 11 04:45:08 compute-0 silly_banach[251088]:             "type": "block",
Oct 11 04:45:08 compute-0 silly_banach[251088]:             "vg_name": "ceph_vg2"
Oct 11 04:45:08 compute-0 silly_banach[251088]:         }
Oct 11 04:45:08 compute-0 silly_banach[251088]:     ]
Oct 11 04:45:08 compute-0 silly_banach[251088]: }
Oct 11 04:45:08 compute-0 systemd[1]: libpod-601008f10c1f7b4336ecb04deb0c0c4f2fc3610f877917e63c80195ebd24da48.scope: Deactivated successfully.
Oct 11 04:45:08 compute-0 podman[251047]: 2025-10-11 04:45:08.706238871 +0000 UTC m=+0.960814113 container died 601008f10c1f7b4336ecb04deb0c0c4f2fc3610f877917e63c80195ebd24da48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_banach, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 11 04:45:08 compute-0 sudo[251275]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxhsgefesdoatktotocssnxkxaogbayz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157908.327225-1289-109456972504557/AnsiballZ_file.py'
Oct 11 04:45:08 compute-0 sudo[251275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:45:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-4b08e0b411e3cb68fd8b5e33f463530c9e00c5c6bd43e5f058c8624485c9caa5-merged.mount: Deactivated successfully.
Oct 11 04:45:08 compute-0 podman[251047]: 2025-10-11 04:45:08.790076255 +0000 UTC m=+1.044651507 container remove 601008f10c1f7b4336ecb04deb0c0c4f2fc3610f877917e63c80195ebd24da48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_banach, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:45:08 compute-0 systemd[1]: libpod-conmon-601008f10c1f7b4336ecb04deb0c0c4f2fc3610f877917e63c80195ebd24da48.scope: Deactivated successfully.
Oct 11 04:45:08 compute-0 sudo[250868]: pam_unix(sudo:session): session closed for user root
Oct 11 04:45:08 compute-0 sudo[251289]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:45:08 compute-0 sudo[251289]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:45:08 compute-0 sudo[251289]: pam_unix(sudo:session): session closed for user root
Oct 11 04:45:08 compute-0 python3.9[251284]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:45:08 compute-0 sudo[251275]: pam_unix(sudo:session): session closed for user root
Oct 11 04:45:08 compute-0 sudo[251314]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:45:08 compute-0 sudo[251314]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:45:08 compute-0 sudo[251314]: pam_unix(sudo:session): session closed for user root
Oct 11 04:45:09 compute-0 sudo[251339]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:45:09 compute-0 sudo[251339]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:45:09 compute-0 sudo[251339]: pam_unix(sudo:session): session closed for user root
Oct 11 04:45:09 compute-0 sudo[251388]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -- raw list --format json
Oct 11 04:45:09 compute-0 sudo[251388]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:45:09 compute-0 sudo[251585]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-audtenqrlubaojzrmzufwmhxvpxfoqsx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157909.1396637-1289-162687778539946/AnsiballZ_file.py'
Oct 11 04:45:09 compute-0 sudo[251585]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:45:09 compute-0 podman[251573]: 2025-10-11 04:45:09.507586353 +0000 UTC m=+0.050949412 container create 4126c363a1af6534b9ce70582f5346da9868e11f6712afad964fbdb7717789c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_goodall, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 11 04:45:09 compute-0 systemd[1]: Started libpod-conmon-4126c363a1af6534b9ce70582f5346da9868e11f6712afad964fbdb7717789c9.scope.
Oct 11 04:45:09 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:45:09 compute-0 podman[251573]: 2025-10-11 04:45:09.484791885 +0000 UTC m=+0.028154924 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:45:09 compute-0 ceph-mon[74243]: pgmap v681: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:45:09 compute-0 podman[251573]: 2025-10-11 04:45:09.589668992 +0000 UTC m=+0.133032071 container init 4126c363a1af6534b9ce70582f5346da9868e11f6712afad964fbdb7717789c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True)
Oct 11 04:45:09 compute-0 podman[251573]: 2025-10-11 04:45:09.595384747 +0000 UTC m=+0.138747806 container start 4126c363a1af6534b9ce70582f5346da9868e11f6712afad964fbdb7717789c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_goodall, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 04:45:09 compute-0 hungry_goodall[251597]: 167 167
Oct 11 04:45:09 compute-0 podman[251573]: 2025-10-11 04:45:09.599564233 +0000 UTC m=+0.142927282 container attach 4126c363a1af6534b9ce70582f5346da9868e11f6712afad964fbdb7717789c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_goodall, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:45:09 compute-0 systemd[1]: libpod-4126c363a1af6534b9ce70582f5346da9868e11f6712afad964fbdb7717789c9.scope: Deactivated successfully.
Oct 11 04:45:09 compute-0 podman[251602]: 2025-10-11 04:45:09.657649275 +0000 UTC m=+0.035664525 container died 4126c363a1af6534b9ce70582f5346da9868e11f6712afad964fbdb7717789c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_goodall, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:45:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-848044e2f355137d627e10fe2d35b00734eaf0ce8fcf7c122427458d3d7dfddd-merged.mount: Deactivated successfully.
Oct 11 04:45:09 compute-0 podman[251602]: 2025-10-11 04:45:09.699891155 +0000 UTC m=+0.077906385 container remove 4126c363a1af6534b9ce70582f5346da9868e11f6712afad964fbdb7717789c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_goodall, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 11 04:45:09 compute-0 systemd[1]: libpod-conmon-4126c363a1af6534b9ce70582f5346da9868e11f6712afad964fbdb7717789c9.scope: Deactivated successfully.
Oct 11 04:45:09 compute-0 python3.9[251594]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:45:09 compute-0 ceph-mon[74243]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 04:45:09 compute-0 ceph-mon[74243]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Cumulative writes: 3295 writes, 14K keys, 3295 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 3295 writes, 3295 syncs, 1.00 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1280 writes, 5809 keys, 1280 commit groups, 1.0 writes per commit group, ingest: 8.48 MB, 0.01 MB/s
                                           Interval WAL: 1280 writes, 1280 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    129.6      0.12              0.06         7    0.017       0      0       0.0       0.0
                                             L6      1/0    6.89 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6    185.1    152.1      0.27              0.14         6    0.045     24K   3201       0.0       0.0
                                            Sum      1/0    6.89 MB   0.0      0.0     0.0      0.0       0.1      0.0       0.0   3.6    127.5    145.1      0.39              0.19        13    0.030     24K   3201       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.7    145.0    146.5      0.24              0.11         8    0.030     17K   2472       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    185.1    152.1      0.27              0.14         6    0.045     24K   3201       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    133.9      0.12              0.06         6    0.020       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     11.5      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.015, interval 0.007
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.06 GB write, 0.05 MB/s write, 0.05 GB read, 0.04 MB/s read, 0.4 seconds
                                           Interval compaction: 0.03 GB write, 0.06 MB/s write, 0.03 GB read, 0.06 MB/s read, 0.2 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563d484a31f0#2 capacity: 308.00 MB usage: 1.51 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(99,1.29 MB,0.419726%) FilterBlock(14,75.55 KB,0.0239533%) IndexBlock(14,149.28 KB,0.047332%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 11 04:45:09 compute-0 sudo[251585]: pam_unix(sudo:session): session closed for user root
Oct 11 04:45:09 compute-0 podman[251648]: 2025-10-11 04:45:09.931981145 +0000 UTC m=+0.071387410 container create 03c060d0467d7ec4beae665554bd66a9da9b63bb98b0a298f44819909faa79e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_nobel, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True)
Oct 11 04:45:09 compute-0 systemd[1]: Started libpod-conmon-03c060d0467d7ec4beae665554bd66a9da9b63bb98b0a298f44819909faa79e9.scope.
Oct 11 04:45:09 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:45:09 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:45:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae0e7d1cba7a8b81943307c077234b0b020401c59c50ed484c3f40800d748e94/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:45:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae0e7d1cba7a8b81943307c077234b0b020401c59c50ed484c3f40800d748e94/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:45:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae0e7d1cba7a8b81943307c077234b0b020401c59c50ed484c3f40800d748e94/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:45:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae0e7d1cba7a8b81943307c077234b0b020401c59c50ed484c3f40800d748e94/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:45:10 compute-0 podman[251648]: 2025-10-11 04:45:09.90019703 +0000 UTC m=+0.039603345 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:45:10 compute-0 podman[251648]: 2025-10-11 04:45:10.011769896 +0000 UTC m=+0.151176141 container init 03c060d0467d7ec4beae665554bd66a9da9b63bb98b0a298f44819909faa79e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_nobel, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:45:10 compute-0 podman[251648]: 2025-10-11 04:45:10.027415943 +0000 UTC m=+0.166822178 container start 03c060d0467d7ec4beae665554bd66a9da9b63bb98b0a298f44819909faa79e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_nobel, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:45:10 compute-0 podman[251648]: 2025-10-11 04:45:10.033285661 +0000 UTC m=+0.172691906 container attach 03c060d0467d7ec4beae665554bd66a9da9b63bb98b0a298f44819909faa79e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_nobel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Oct 11 04:45:10 compute-0 podman[251685]: 2025-10-11 04:45:10.094062301 +0000 UTC m=+0.088389820 container health_status 4f5a84ad32cb899a70b226fbd9c8f419e9440a5ac99b188805a8bf8e9253a2d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=iscsid)
Oct 11 04:45:10 compute-0 podman[251691]: 2025-10-11 04:45:10.132285588 +0000 UTC m=+0.103645155 container health_status 086abfbe8dbe9e4742c5d0e8540a7653c1c0a5c00ecda3b6e948040a718938cb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 11 04:45:10 compute-0 sudo[251840]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfnpdnrqzwokgjbsbbluvvmhtsgphjvd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157910.0050604-1311-44219488309103/AnsiballZ_file.py'
Oct 11 04:45:10 compute-0 sudo[251840]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:45:10 compute-0 python3.9[251842]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:45:10 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v682: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:45:10 compute-0 sudo[251840]: pam_unix(sudo:session): session closed for user root
Oct 11 04:45:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:45:11.007 161813 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:45:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:45:11.008 161813 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:45:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:45:11.008 161813 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:45:11 compute-0 podman[251986]: 2025-10-11 04:45:11.017460994 +0000 UTC m=+0.068834435 container health_status 981a12d55b51d26e636fa4bc8b08d383168dc6afe664f12473554b2b0c589967 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, config_id=multipathd)
Oct 11 04:45:11 compute-0 sudo[252039]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jellwtqkyzdkjtsracatxcnfwxcbskck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157910.6411786-1311-171323656634495/AnsiballZ_file.py'
Oct 11 04:45:11 compute-0 sudo[252039]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:45:11 compute-0 suspicious_nobel[251682]: {
Oct 11 04:45:11 compute-0 suspicious_nobel[251682]:     "235849fc-4683-43e5-9b6a-a0d6f8d1cee8": {
Oct 11 04:45:11 compute-0 suspicious_nobel[251682]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:45:11 compute-0 suspicious_nobel[251682]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:45:11 compute-0 suspicious_nobel[251682]:         "osd_id": 1,
Oct 11 04:45:11 compute-0 suspicious_nobel[251682]:         "osd_uuid": "235849fc-4683-43e5-9b6a-a0d6f8d1cee8",
Oct 11 04:45:11 compute-0 suspicious_nobel[251682]:         "type": "bluestore"
Oct 11 04:45:11 compute-0 suspicious_nobel[251682]:     },
Oct 11 04:45:11 compute-0 suspicious_nobel[251682]:     "29ed28f5-c2da-4c6f-bb64-dc7391248f4a": {
Oct 11 04:45:11 compute-0 suspicious_nobel[251682]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:45:11 compute-0 suspicious_nobel[251682]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:45:11 compute-0 suspicious_nobel[251682]:         "osd_id": 0,
Oct 11 04:45:11 compute-0 suspicious_nobel[251682]:         "osd_uuid": "29ed28f5-c2da-4c6f-bb64-dc7391248f4a",
Oct 11 04:45:11 compute-0 suspicious_nobel[251682]:         "type": "bluestore"
Oct 11 04:45:11 compute-0 suspicious_nobel[251682]:     },
Oct 11 04:45:11 compute-0 suspicious_nobel[251682]:     "30486846-2cc7-4787-8e36-0fb42ef328c5": {
Oct 11 04:45:11 compute-0 suspicious_nobel[251682]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:45:11 compute-0 suspicious_nobel[251682]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:45:11 compute-0 suspicious_nobel[251682]:         "osd_id": 2,
Oct 11 04:45:11 compute-0 suspicious_nobel[251682]:         "osd_uuid": "30486846-2cc7-4787-8e36-0fb42ef328c5",
Oct 11 04:45:11 compute-0 suspicious_nobel[251682]:         "type": "bluestore"
Oct 11 04:45:11 compute-0 suspicious_nobel[251682]:     }
Oct 11 04:45:11 compute-0 suspicious_nobel[251682]: }
Oct 11 04:45:11 compute-0 systemd[1]: libpod-03c060d0467d7ec4beae665554bd66a9da9b63bb98b0a298f44819909faa79e9.scope: Deactivated successfully.
Oct 11 04:45:11 compute-0 systemd[1]: libpod-03c060d0467d7ec4beae665554bd66a9da9b63bb98b0a298f44819909faa79e9.scope: Consumed 1.048s CPU time.
Oct 11 04:45:11 compute-0 podman[251648]: 2025-10-11 04:45:11.091662764 +0000 UTC m=+1.231068999 container died 03c060d0467d7ec4beae665554bd66a9da9b63bb98b0a298f44819909faa79e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_nobel, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 11 04:45:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-ae0e7d1cba7a8b81943307c077234b0b020401c59c50ed484c3f40800d748e94-merged.mount: Deactivated successfully.
Oct 11 04:45:11 compute-0 podman[251648]: 2025-10-11 04:45:11.150045673 +0000 UTC m=+1.289451908 container remove 03c060d0467d7ec4beae665554bd66a9da9b63bb98b0a298f44819909faa79e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_nobel, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:45:11 compute-0 systemd[1]: libpod-conmon-03c060d0467d7ec4beae665554bd66a9da9b63bb98b0a298f44819909faa79e9.scope: Deactivated successfully.
Oct 11 04:45:11 compute-0 sudo[251388]: pam_unix(sudo:session): session closed for user root
Oct 11 04:45:11 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:45:11 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:45:11 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:45:11 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:45:11 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev 65a1a779-3039-48ff-ac6a-7fbb1179c466 does not exist
Oct 11 04:45:11 compute-0 python3.9[252041]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:45:11 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev 09906e71-a9b6-402c-ac61-78f8b02bd71b does not exist
Oct 11 04:45:11 compute-0 sudo[252039]: pam_unix(sudo:session): session closed for user root
Oct 11 04:45:11 compute-0 sudo[252055]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:45:11 compute-0 sudo[252055]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:45:11 compute-0 sudo[252055]: pam_unix(sudo:session): session closed for user root
Oct 11 04:45:11 compute-0 sudo[252086]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 04:45:11 compute-0 sudo[252086]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:45:11 compute-0 sudo[252086]: pam_unix(sudo:session): session closed for user root
Oct 11 04:45:11 compute-0 ceph-mon[74243]: pgmap v682: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:45:11 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:45:11 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:45:11 compute-0 sudo[252254]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmjforewmonbfyyyufwyrodnbdirxsdb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157911.444921-1311-261626572176552/AnsiballZ_file.py'
Oct 11 04:45:11 compute-0 sudo[252254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:45:12 compute-0 python3.9[252256]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:45:12 compute-0 sudo[252254]: pam_unix(sudo:session): session closed for user root
Oct 11 04:45:12 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v683: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:45:12 compute-0 sudo[252406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzptopgxdnxybcmtjxanxbmrkpijkscx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157912.2209423-1311-214985319367543/AnsiballZ_file.py'
Oct 11 04:45:12 compute-0 sudo[252406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:45:12 compute-0 python3.9[252408]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:45:12 compute-0 sudo[252406]: pam_unix(sudo:session): session closed for user root
Oct 11 04:45:13 compute-0 sudo[252558]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yimbbmticxyltgjbpgtwhfmfcltfiwuk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157912.8971817-1311-46519164965667/AnsiballZ_file.py'
Oct 11 04:45:13 compute-0 sudo[252558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:45:13 compute-0 python3.9[252560]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:45:13 compute-0 sudo[252558]: pam_unix(sudo:session): session closed for user root
Oct 11 04:45:13 compute-0 ceph-mon[74243]: pgmap v683: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:45:14 compute-0 sudo[252710]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vupnjmoopjxkngnenbrjdkbgfnzrmkyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157913.6228905-1311-184807898543093/AnsiballZ_file.py'
Oct 11 04:45:14 compute-0 sudo[252710]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:45:14 compute-0 python3.9[252712]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:45:14 compute-0 sudo[252710]: pam_unix(sudo:session): session closed for user root
Oct 11 04:45:14 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v684: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:45:14 compute-0 sudo[252862]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlncudqijclwpwywkvdmyphicsznsomz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157914.4394782-1311-153617009078028/AnsiballZ_file.py'
Oct 11 04:45:14 compute-0 sudo[252862]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:45:14 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:45:15 compute-0 python3.9[252864]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:45:15 compute-0 sudo[252862]: pam_unix(sudo:session): session closed for user root
Oct 11 04:45:15 compute-0 ceph-mon[74243]: pgmap v684: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:45:15 compute-0 sudo[253014]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkathndtvjbfdcjzctpldneenwsiaegs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157915.2810874-1311-28599383115479/AnsiballZ_file.py'
Oct 11 04:45:15 compute-0 sudo[253014]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:45:15 compute-0 python3.9[253016]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:45:15 compute-0 sudo[253014]: pam_unix(sudo:session): session closed for user root
Oct 11 04:45:16 compute-0 podman[253017]: 2025-10-11 04:45:16.007832022 +0000 UTC m=+0.072833647 container health_status 7d738271425699243ade5f883e5c9759d51b96fd0ce562173990a66d2fbaffdf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 11 04:45:16 compute-0 sudo[253184]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krwywiplbxvjijkdlxlarmydohryxklc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157916.0865102-1311-39883697975469/AnsiballZ_file.py'
Oct 11 04:45:16 compute-0 sudo[253184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:45:16 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v685: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:45:16 compute-0 python3.9[253186]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:45:16 compute-0 sudo[253184]: pam_unix(sudo:session): session closed for user root
Oct 11 04:45:17 compute-0 ceph-mon[74243]: pgmap v685: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:45:18 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v686: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:45:19 compute-0 ceph-mon[74243]: pgmap v686: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:45:19 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:45:20 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v687: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:45:21 compute-0 ceph-mon[74243]: pgmap v687: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:45:22 compute-0 sudo[253336]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxhljzdhlwjnxwzxqfwspelteyoyurcj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157921.8103726-1514-280227386157119/AnsiballZ_getent.py'
Oct 11 04:45:22 compute-0 sudo[253336]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:45:22 compute-0 python3.9[253338]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Oct 11 04:45:22 compute-0 sudo[253336]: pam_unix(sudo:session): session closed for user root
Oct 11 04:45:22 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v688: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:45:23 compute-0 sudo[253489]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnzqaxvziynfexspuqgtfonsopwjzhei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157922.6602015-1522-129507717628009/AnsiballZ_group.py'
Oct 11 04:45:23 compute-0 sudo[253489]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:45:23 compute-0 python3.9[253491]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 11 04:45:23 compute-0 groupadd[253492]: group added to /etc/group: name=nova, GID=42436
Oct 11 04:45:23 compute-0 groupadd[253492]: group added to /etc/gshadow: name=nova
Oct 11 04:45:23 compute-0 groupadd[253492]: new group: name=nova, GID=42436
Oct 11 04:45:23 compute-0 sudo[253489]: pam_unix(sudo:session): session closed for user root
Oct 11 04:45:23 compute-0 ceph-mon[74243]: pgmap v688: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:45:24 compute-0 sudo[253647]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvcgwsvpuimguknlxusqhmvbezndfssa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157923.7299027-1530-222859566045811/AnsiballZ_user.py'
Oct 11 04:45:24 compute-0 sudo[253647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:45:24 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v689: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:45:24 compute-0 python3.9[253649]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct 11 04:45:24 compute-0 useradd[253651]: new user: name=nova, UID=42436, GID=42436, home=/home/nova, shell=/bin/sh, from=/dev/pts/0
Oct 11 04:45:24 compute-0 useradd[253651]: add 'nova' to group 'libvirt'
Oct 11 04:45:24 compute-0 useradd[253651]: add 'nova' to shadow group 'libvirt'
Oct 11 04:45:24 compute-0 ceph-mon[74243]: pgmap v689: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:45:24 compute-0 sudo[253647]: pam_unix(sudo:session): session closed for user root
Oct 11 04:45:24 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:45:25 compute-0 sshd-session[253682]: Accepted publickey for zuul from 192.168.122.30 port 59588 ssh2: ECDSA SHA256:fsXfQ2jtOYwhi5zevC2AykH2PVPp1BPcbFOVNPoZIaQ
Oct 11 04:45:25 compute-0 systemd-logind[801]: New session 53 of user zuul.
Oct 11 04:45:25 compute-0 systemd[1]: Started Session 53 of User zuul.
Oct 11 04:45:25 compute-0 sshd-session[253682]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 11 04:45:25 compute-0 sshd-session[253685]: Received disconnect from 192.168.122.30 port 59588:11: disconnected by user
Oct 11 04:45:25 compute-0 sshd-session[253685]: Disconnected from user zuul 192.168.122.30 port 59588
Oct 11 04:45:25 compute-0 sshd-session[253682]: pam_unix(sshd:session): session closed for user zuul
Oct 11 04:45:25 compute-0 systemd[1]: session-53.scope: Deactivated successfully.
Oct 11 04:45:25 compute-0 systemd-logind[801]: Session 53 logged out. Waiting for processes to exit.
Oct 11 04:45:25 compute-0 systemd-logind[801]: Removed session 53.
Oct 11 04:45:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:45:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:45:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:45:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:45:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:45:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:45:26 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v690: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:45:26 compute-0 python3.9[253835]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:45:27 compute-0 python3.9[253956]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760157926.0380366-1555-55162209620188/.source.json follow=False _original_basename=config.json.j2 checksum=2c2474b5f24ef7c9ed37f49680082593e0d1100b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:45:27 compute-0 ceph-mon[74243]: pgmap v690: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:45:27 compute-0 python3.9[254106]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:45:28 compute-0 python3.9[254182]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:45:28 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v691: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:45:28 compute-0 python3.9[254332]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:45:29 compute-0 ceph-mon[74243]: pgmap v691: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:45:29 compute-0 python3.9[254453]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760157928.5876899-1555-96979010428225/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:45:29 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:45:30 compute-0 python3.9[254603]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:45:30 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v692: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:45:31 compute-0 python3.9[254724]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760157929.8053408-1555-276760976279524/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:45:31 compute-0 ceph-mon[74243]: pgmap v692: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:45:31 compute-0 python3.9[254874]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:45:32 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v693: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:45:32 compute-0 python3.9[254995]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760157931.3155613-1555-186662430506630/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:45:33 compute-0 sudo[255145]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpfiyetjfieceyxhsiluyeohlmuutwld ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157932.7654307-1624-280981122652295/AnsiballZ_file.py'
Oct 11 04:45:33 compute-0 sudo[255145]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:45:33 compute-0 python3.9[255147]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:45:33 compute-0 sudo[255145]: pam_unix(sudo:session): session closed for user root
Oct 11 04:45:33 compute-0 ceph-mon[74243]: pgmap v693: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:45:33 compute-0 sudo[255297]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rblruhcqjrbpeuhtaduvbhykwirvbllu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157933.504955-1632-140890316480323/AnsiballZ_copy.py'
Oct 11 04:45:33 compute-0 sudo[255297]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:45:33 compute-0 python3.9[255299]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:45:34 compute-0 sudo[255297]: pam_unix(sudo:session): session closed for user root
Oct 11 04:45:34 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v694: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:45:34 compute-0 sudo[255449]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywskhvadelrkvjixrqdprqufnxyqkjef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157934.228858-1640-259127547912497/AnsiballZ_stat.py'
Oct 11 04:45:34 compute-0 sudo[255449]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:45:34 compute-0 python3.9[255451]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:45:34 compute-0 sudo[255449]: pam_unix(sudo:session): session closed for user root
Oct 11 04:45:34 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:45:35 compute-0 sudo[255601]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufvsmebruihefjunxvntabqlhghfzpcj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157935.0579066-1648-266189083642118/AnsiballZ_stat.py'
Oct 11 04:45:35 compute-0 sudo[255601]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:45:35 compute-0 ceph-mon[74243]: pgmap v694: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:45:35 compute-0 python3.9[255603]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:45:35 compute-0 sudo[255601]: pam_unix(sudo:session): session closed for user root
Oct 11 04:45:36 compute-0 sudo[255724]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfmytutordmsyariphfxwpjcianknryf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157935.0579066-1648-266189083642118/AnsiballZ_copy.py'
Oct 11 04:45:36 compute-0 sudo[255724]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:45:36 compute-0 python3.9[255726]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1760157935.0579066-1648-266189083642118/.source _original_basename=.ea7kjse1 follow=False checksum=535474d9f354f3f7030c5aedb8d7ef9ea54c4228 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Oct 11 04:45:36 compute-0 sudo[255724]: pam_unix(sudo:session): session closed for user root
Oct 11 04:45:36 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v695: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:45:37 compute-0 python3.9[255878]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:45:37 compute-0 ceph-mon[74243]: pgmap v695: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:45:37 compute-0 python3.9[256030]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:45:38 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v696: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:45:38 compute-0 python3.9[256151]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760157937.4583318-1674-99036813227601/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=f022386746472553146d29f689b545df70fa8a60 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:45:38 compute-0 ceph-mon[74243]: pgmap v696: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:45:39 compute-0 python3.9[256301]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:45:39 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:45:40 compute-0 python3.9[256422]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760157938.7617497-1689-6335838927770/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:45:40 compute-0 podman[256448]: 2025-10-11 04:45:40.447681536 +0000 UTC m=+0.090810991 container health_status 4f5a84ad32cb899a70b226fbd9c8f419e9440a5ac99b188805a8bf8e9253a2d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 11 04:45:40 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v697: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:45:40 compute-0 podman[256447]: 2025-10-11 04:45:40.52164632 +0000 UTC m=+0.169864104 container health_status 086abfbe8dbe9e4742c5d0e8540a7653c1c0a5c00ecda3b6e948040a718938cb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 04:45:40 compute-0 sudo[256615]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-busmmxbfddonzoddwsdpcbzgiouosksb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157940.4090338-1706-130215652115126/AnsiballZ_container_config_data.py'
Oct 11 04:45:40 compute-0 sudo[256615]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:45:40 compute-0 python3.9[256617]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Oct 11 04:45:40 compute-0 sudo[256615]: pam_unix(sudo:session): session closed for user root
Oct 11 04:45:41 compute-0 sudo[256786]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikiuqvfbpqnkjxaagufksnxlanjuzlse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157941.14925-1715-66844428392134/AnsiballZ_container_config_hash.py'
Oct 11 04:45:41 compute-0 sudo[256786]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:45:41 compute-0 podman[256722]: 2025-10-11 04:45:41.457475819 +0000 UTC m=+0.099463691 container health_status 981a12d55b51d26e636fa4bc8b08d383168dc6afe664f12473554b2b0c589967 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, io.buildah.version=1.41.3)
Oct 11 04:45:41 compute-0 ceph-mon[74243]: pgmap v697: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:45:41 compute-0 python3.9[256789]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 11 04:45:41 compute-0 sudo[256786]: pam_unix(sudo:session): session closed for user root
Oct 11 04:45:42 compute-0 sudo[256940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbaxlgrngfsnxlypvzjxvslzseguuokf ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1760157941.9816186-1725-136404879046994/AnsiballZ_edpm_container_manage.py'
Oct 11 04:45:42 compute-0 sudo[256940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:45:42 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v698: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:45:42 compute-0 python3[256942]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Oct 11 04:45:43 compute-0 ceph-mon[74243]: pgmap v698: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:45:44 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v699: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:45:44 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:45:45 compute-0 ceph-mon[74243]: pgmap v699: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:45:46 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v700: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:45:46 compute-0 ceph-mon[74243]: pgmap v700: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:45:47 compute-0 podman[256995]: 2025-10-11 04:45:47.820132132 +0000 UTC m=+1.478497088 container health_status 7d738271425699243ade5f883e5c9759d51b96fd0ce562173990a66d2fbaffdf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 11 04:45:48 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v701: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:45:49 compute-0 ceph-mon[74243]: pgmap v701: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:45:49 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:45:50 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v702: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:45:51 compute-0 ceph-mon[74243]: pgmap v702: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:45:52 compute-0 podman[256955]: 2025-10-11 04:45:52.165696823 +0000 UTC m=+9.413585787 image pull 95311272d2962a6b8537a6d19b94bc44c5c3621a6e21a2e983fd64d147646bc9 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Oct 11 04:45:52 compute-0 podman[257059]: 2025-10-11 04:45:52.379395737 +0000 UTC m=+0.060367670 container create bcdbe5eae2ca7782143f66ad3decc6d019b5a90e321aa46ed7cdf097adac1474 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, container_name=nova_compute_init, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 04:45:52 compute-0 podman[257059]: 2025-10-11 04:45:52.354053345 +0000 UTC m=+0.035025298 image pull 95311272d2962a6b8537a6d19b94bc44c5c3621a6e21a2e983fd64d147646bc9 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Oct 11 04:45:52 compute-0 python3[256942]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Oct 11 04:45:52 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v703: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:45:52 compute-0 sudo[256940]: pam_unix(sudo:session): session closed for user root
Oct 11 04:45:52 compute-0 ceph-mon[74243]: pgmap v703: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:45:53 compute-0 sudo[257248]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aozbmmvsukcicwocjzozxkdaannhmfca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157952.7524629-1733-118517001782830/AnsiballZ_stat.py'
Oct 11 04:45:53 compute-0 sudo[257248]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:45:53 compute-0 python3.9[257250]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:45:53 compute-0 sudo[257248]: pam_unix(sudo:session): session closed for user root
Oct 11 04:45:54 compute-0 sudo[257402]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjunvfwonqrzbdygsygueuzomgnspxwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157953.8170247-1745-52191308193003/AnsiballZ_container_config_data.py'
Oct 11 04:45:54 compute-0 sudo[257402]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:45:54 compute-0 python3.9[257404]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Oct 11 04:45:54 compute-0 sudo[257402]: pam_unix(sudo:session): session closed for user root
Oct 11 04:45:54 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v704: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:45:54 compute-0 sudo[257554]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kuaggrctqlfdnoaxrsovzgkxugvowalf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157954.5888193-1754-228472689142469/AnsiballZ_container_config_hash.py'
Oct 11 04:45:54 compute-0 sudo[257554]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:45:54 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:45:55 compute-0 python3.9[257556]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 11 04:45:55 compute-0 sudo[257554]: pam_unix(sudo:session): session closed for user root
Oct 11 04:45:55 compute-0 ceph-mon[74243]: pgmap v704: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:45:55 compute-0 sudo[257706]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvgqiqhchgfaxhuthvpeiunmnkpqfxyi ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1760157955.438719-1764-264872010657463/AnsiballZ_edpm_container_manage.py'
Oct 11 04:45:55 compute-0 sudo[257706]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:45:56 compute-0 python3[257708]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Oct 11 04:45:56 compute-0 ceph-mgr[74542]: [balancer INFO root] Optimize plan auto_2025-10-11_04:45:56
Oct 11 04:45:56 compute-0 ceph-mgr[74542]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:45:56 compute-0 ceph-mgr[74542]: [balancer INFO root] do_upmap
Oct 11 04:45:56 compute-0 ceph-mgr[74542]: [balancer INFO root] pools ['images', 'backups', 'cephfs.cephfs.data', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.control', '.mgr', 'volumes', 'default.rgw.meta', '.rgw.root', 'vms']
Oct 11 04:45:56 compute-0 ceph-mgr[74542]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:45:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:45:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:45:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:45:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:45:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:45:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:45:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:45:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:45:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:45:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:45:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:45:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:45:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:45:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:45:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:45:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:45:56 compute-0 podman[257745]: 2025-10-11 04:45:56.338539159 +0000 UTC m=+0.058698908 container create 6aee11c0fb7a0814bfc733ee23f52cb5b809b26bfefaaad4d38eb786adf49195 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, container_name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:45:56 compute-0 podman[257745]: 2025-10-11 04:45:56.31095287 +0000 UTC m=+0.031112609 image pull 95311272d2962a6b8537a6d19b94bc44c5c3621a6e21a2e983fd64d147646bc9 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Oct 11 04:45:56 compute-0 python3[257708]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Oct 11 04:45:56 compute-0 sudo[257706]: pam_unix(sudo:session): session closed for user root
Oct 11 04:45:56 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v705: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:45:57 compute-0 sudo[257933]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hyrazbzworvneoqqumfbptnzcjkvyhiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157956.6827495-1772-74818041238306/AnsiballZ_stat.py'
Oct 11 04:45:57 compute-0 sudo[257933]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:45:57 compute-0 python3.9[257935]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:45:57 compute-0 sudo[257933]: pam_unix(sudo:session): session closed for user root
Oct 11 04:45:57 compute-0 ceph-mon[74243]: pgmap v705: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:45:57 compute-0 sudo[258087]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbjqarqiywlghlbbbkdnxivfdayggjqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157957.5709193-1781-133407227043814/AnsiballZ_file.py'
Oct 11 04:45:57 compute-0 sudo[258087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:45:58 compute-0 python3.9[258089]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:45:58 compute-0 sudo[258087]: pam_unix(sudo:session): session closed for user root
Oct 11 04:45:58 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v706: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:45:58 compute-0 sudo[258238]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zntfmddqvlaoauaydykeuyswanpolept ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157958.2387025-1781-239282475016537/AnsiballZ_copy.py'
Oct 11 04:45:58 compute-0 sudo[258238]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:45:58 compute-0 python3.9[258240]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760157958.2387025-1781-239282475016537/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:45:58 compute-0 sudo[258238]: pam_unix(sudo:session): session closed for user root
Oct 11 04:45:59 compute-0 sudo[258314]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-arhjuxdgnvtxlrdzoxydbzffucdaapll ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157958.2387025-1781-239282475016537/AnsiballZ_systemd.py'
Oct 11 04:45:59 compute-0 sudo[258314]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:45:59 compute-0 python3.9[258316]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 11 04:45:59 compute-0 systemd[1]: Reloading.
Oct 11 04:45:59 compute-0 ceph-mon[74243]: pgmap v706: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:45:59 compute-0 systemd-sysv-generator[258347]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:45:59 compute-0 systemd-rc-local-generator[258344]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:45:59 compute-0 sudo[258314]: pam_unix(sudo:session): session closed for user root
Oct 11 04:45:59 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:46:00 compute-0 sudo[258425]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxprbvmmjldlukoikvdysdkzmxtssvqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157958.2387025-1781-239282475016537/AnsiballZ_systemd.py'
Oct 11 04:46:00 compute-0 sudo[258425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:46:00 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v707: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:46:00 compute-0 python3.9[258427]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:46:01 compute-0 ceph-mon[74243]: pgmap v707: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:46:01 compute-0 systemd[1]: Reloading.
Oct 11 04:46:01 compute-0 systemd-sysv-generator[258456]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:46:01 compute-0 systemd-rc-local-generator[258452]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:46:02 compute-0 systemd[1]: Starting nova_compute container...
Oct 11 04:46:02 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:46:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48ddaeaf7fe4ec8e47b61b47a0a38d9fe24032f087eba5abd94e2e059a6569d4/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Oct 11 04:46:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48ddaeaf7fe4ec8e47b61b47a0a38d9fe24032f087eba5abd94e2e059a6569d4/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 11 04:46:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48ddaeaf7fe4ec8e47b61b47a0a38d9fe24032f087eba5abd94e2e059a6569d4/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct 11 04:46:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48ddaeaf7fe4ec8e47b61b47a0a38d9fe24032f087eba5abd94e2e059a6569d4/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Oct 11 04:46:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48ddaeaf7fe4ec8e47b61b47a0a38d9fe24032f087eba5abd94e2e059a6569d4/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 11 04:46:02 compute-0 podman[258467]: 2025-10-11 04:46:02.314798123 +0000 UTC m=+0.130905048 container init 6aee11c0fb7a0814bfc733ee23f52cb5b809b26bfefaaad4d38eb786adf49195 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 04:46:02 compute-0 podman[258467]: 2025-10-11 04:46:02.327956576 +0000 UTC m=+0.144063451 container start 6aee11c0fb7a0814bfc733ee23f52cb5b809b26bfefaaad4d38eb786adf49195 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:46:02 compute-0 podman[258467]: nova_compute
Oct 11 04:46:02 compute-0 nova_compute[258482]: + sudo -E kolla_set_configs
Oct 11 04:46:02 compute-0 systemd[1]: Started nova_compute container.
Oct 11 04:46:02 compute-0 sudo[258425]: pam_unix(sudo:session): session closed for user root
Oct 11 04:46:02 compute-0 nova_compute[258482]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 11 04:46:02 compute-0 nova_compute[258482]: INFO:__main__:Validating config file
Oct 11 04:46:02 compute-0 nova_compute[258482]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 11 04:46:02 compute-0 nova_compute[258482]: INFO:__main__:Copying service configuration files
Oct 11 04:46:02 compute-0 nova_compute[258482]: INFO:__main__:Deleting /etc/nova/nova.conf
Oct 11 04:46:02 compute-0 nova_compute[258482]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Oct 11 04:46:02 compute-0 nova_compute[258482]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Oct 11 04:46:02 compute-0 nova_compute[258482]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Oct 11 04:46:02 compute-0 nova_compute[258482]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Oct 11 04:46:02 compute-0 nova_compute[258482]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 11 04:46:02 compute-0 nova_compute[258482]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 11 04:46:02 compute-0 nova_compute[258482]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 11 04:46:02 compute-0 nova_compute[258482]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 11 04:46:02 compute-0 nova_compute[258482]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Oct 11 04:46:02 compute-0 nova_compute[258482]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Oct 11 04:46:02 compute-0 nova_compute[258482]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 11 04:46:02 compute-0 nova_compute[258482]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 11 04:46:02 compute-0 nova_compute[258482]: INFO:__main__:Deleting /etc/ceph
Oct 11 04:46:02 compute-0 nova_compute[258482]: INFO:__main__:Creating directory /etc/ceph
Oct 11 04:46:02 compute-0 nova_compute[258482]: INFO:__main__:Setting permission for /etc/ceph
Oct 11 04:46:02 compute-0 nova_compute[258482]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Oct 11 04:46:02 compute-0 nova_compute[258482]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct 11 04:46:02 compute-0 nova_compute[258482]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Oct 11 04:46:02 compute-0 nova_compute[258482]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct 11 04:46:02 compute-0 nova_compute[258482]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Oct 11 04:46:02 compute-0 nova_compute[258482]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 11 04:46:02 compute-0 nova_compute[258482]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Oct 11 04:46:02 compute-0 nova_compute[258482]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 11 04:46:02 compute-0 nova_compute[258482]: INFO:__main__:Writing out command to execute
Oct 11 04:46:02 compute-0 nova_compute[258482]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct 11 04:46:02 compute-0 nova_compute[258482]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct 11 04:46:02 compute-0 nova_compute[258482]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Oct 11 04:46:02 compute-0 nova_compute[258482]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 11 04:46:02 compute-0 nova_compute[258482]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 11 04:46:02 compute-0 nova_compute[258482]: ++ cat /run_command
Oct 11 04:46:02 compute-0 nova_compute[258482]: + CMD=nova-compute
Oct 11 04:46:02 compute-0 nova_compute[258482]: + ARGS=
Oct 11 04:46:02 compute-0 nova_compute[258482]: + sudo kolla_copy_cacerts
Oct 11 04:46:02 compute-0 nova_compute[258482]: + [[ ! -n '' ]]
Oct 11 04:46:02 compute-0 nova_compute[258482]: + . kolla_extend_start
Oct 11 04:46:02 compute-0 nova_compute[258482]: Running command: 'nova-compute'
Oct 11 04:46:02 compute-0 nova_compute[258482]: + echo 'Running command: '\''nova-compute'\'''
Oct 11 04:46:02 compute-0 nova_compute[258482]: + umask 0022
Oct 11 04:46:02 compute-0 nova_compute[258482]: + exec nova-compute
Oct 11 04:46:02 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v708: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:46:03 compute-0 python3.9[258643]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:46:03 compute-0 ceph-mon[74243]: pgmap v708: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:46:04 compute-0 python3.9[258794]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:46:04 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v709: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:46:04 compute-0 nova_compute[258482]: 2025-10-11 04:46:04.703 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct 11 04:46:04 compute-0 nova_compute[258482]: 2025-10-11 04:46:04.703 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct 11 04:46:04 compute-0 nova_compute[258482]: 2025-10-11 04:46:04.703 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct 11 04:46:04 compute-0 nova_compute[258482]: 2025-10-11 04:46:04.703 2 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Oct 11 04:46:04 compute-0 nova_compute[258482]: 2025-10-11 04:46:04.853 2 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:46:04 compute-0 nova_compute[258482]: 2025-10-11 04:46:04.881 2 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 0 in 0.028s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:46:05 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:46:05 compute-0 python3.9[258948]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.592 2 INFO nova.virt.driver [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Oct 11 04:46:05 compute-0 ceph-mon[74243]: pgmap v709: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.734 2 INFO nova.compute.provider_config [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.750 2 DEBUG oslo_concurrency.lockutils [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.751 2 DEBUG oslo_concurrency.lockutils [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.751 2 DEBUG oslo_concurrency.lockutils [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.751 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.752 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Oct 11 04:46:05 compute-0 sudo[259098]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xguufuxpesvgyspytbrgqyulukahyzzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157965.4272478-1841-158868259732080/AnsiballZ_podman_container.py'
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.752 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.752 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.752 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.752 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.752 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.753 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.753 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.753 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.753 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.753 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.753 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.753 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.754 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.754 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.754 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.754 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.754 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.754 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.754 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.755 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.755 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.755 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.755 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.755 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.755 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.755 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.756 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.756 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.756 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.756 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.756 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.756 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 sudo[259098]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.757 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.757 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.757 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.757 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.757 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.757 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.758 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.758 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.758 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.758 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.758 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.758 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.758 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.759 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.759 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.759 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.759 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.759 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.759 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.759 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.760 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.760 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.760 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.760 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.760 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.760 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.760 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.761 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.761 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.761 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.761 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.761 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.761 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.761 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.762 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.762 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.762 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.762 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.762 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.762 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.762 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.762 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.763 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.763 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.763 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.763 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.763 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.763 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.763 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.764 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.764 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.764 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.764 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.764 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.764 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.764 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.765 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.765 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.765 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.765 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.765 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.765 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.765 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.766 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.766 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.766 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.766 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.766 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.766 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.766 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.767 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.767 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.767 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.767 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.767 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.767 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.767 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.767 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.768 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.768 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.768 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.768 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.768 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.768 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.768 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.769 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.769 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.769 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.769 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.769 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.769 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.769 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.770 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.770 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.770 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.770 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.770 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.770 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.770 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.771 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.771 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.771 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.771 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.771 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.771 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.771 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.772 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.772 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.772 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.772 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.772 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.772 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.772 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.773 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.773 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.773 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.773 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.773 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.773 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.773 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.774 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.774 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.774 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.774 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.774 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.774 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.774 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.775 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.775 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.775 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.775 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.775 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.775 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.775 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.776 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.776 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.776 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.776 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.776 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.776 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.776 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.776 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.777 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.777 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.777 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.777 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.777 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.777 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.778 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.778 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.778 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.778 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.778 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.778 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.778 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.778 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.779 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.779 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.779 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.779 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.779 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.779 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.779 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.780 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.780 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.780 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.780 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.780 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.780 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.780 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.780 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.781 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.781 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.781 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.781 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.781 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.781 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.781 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.782 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.782 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.782 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.782 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.782 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.782 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.782 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.783 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.783 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.783 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.783 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.783 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.783 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.783 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.783 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.784 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.784 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.784 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.784 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.784 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.784 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.784 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.785 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.785 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.785 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.785 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.785 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.785 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.785 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.786 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.786 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.786 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.786 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.786 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.786 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.786 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.786 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.787 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.787 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.787 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.787 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.787 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.787 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.787 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.788 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.788 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.788 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.788 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.788 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.788 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.788 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.788 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.789 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.789 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.789 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.789 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.790 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.790 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.790 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.790 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.790 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.790 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.790 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.791 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.791 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.791 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.791 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.791 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.791 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.791 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.792 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.792 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.792 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.792 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.792 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.792 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.792 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.793 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.793 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.793 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.793 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.793 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.793 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.793 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.794 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.794 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.794 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.794 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.794 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.794 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.794 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.794 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.795 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.795 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.795 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.795 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.795 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.795 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.795 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.796 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.796 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.796 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.796 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.796 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.796 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.796 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.797 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.797 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.797 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.797 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.797 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.797 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.797 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.798 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.798 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.798 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.798 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.798 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.798 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.798 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.798 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.799 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.799 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.799 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.799 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.799 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.799 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.799 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.800 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.800 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.800 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.800 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.800 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.800 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.800 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.801 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.801 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.801 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.801 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.801 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.801 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.801 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.802 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.802 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.802 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.802 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.802 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.802 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.803 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.803 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.803 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.803 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.803 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.803 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.803 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.803 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.804 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.804 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.804 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.804 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.804 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.804 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.804 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.805 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.805 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.805 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.805 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.805 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.805 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.805 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.805 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.806 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.806 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.806 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.806 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.806 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.806 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.806 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.807 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.807 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.807 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.807 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.807 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.807 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.807 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.808 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.808 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.808 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.808 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.808 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.808 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.808 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.808 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.809 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.809 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.809 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.809 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.809 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.809 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.809 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.810 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.810 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.810 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.810 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.810 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.810 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.810 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.811 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.811 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.811 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.811 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.811 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.811 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.811 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.811 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.812 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.812 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.812 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.812 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.812 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.812 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.812 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.813 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.813 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.813 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.813 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.813 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.813 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.813 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.813 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.814 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.814 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.814 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.814 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.814 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.814 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.814 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.815 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.815 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.815 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.815 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.815 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.815 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.815 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.816 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.816 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.816 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.816 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.816 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.816 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.816 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.816 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.817 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.817 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.817 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.817 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.817 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.817 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.817 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.818 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.818 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.818 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.818 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.818 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.818 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.818 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.819 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.819 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.819 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.819 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.819 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.819 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.819 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.819 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.820 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.820 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.820 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.820 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.820 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.820 2 WARNING oslo_config.cfg [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Oct 11 04:46:05 compute-0 nova_compute[258482]: live_migration_uri is deprecated for removal in favor of two other options that
Oct 11 04:46:05 compute-0 nova_compute[258482]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Oct 11 04:46:05 compute-0 nova_compute[258482]: and ``live_migration_inbound_addr`` respectively.
Oct 11 04:46:05 compute-0 nova_compute[258482]: ).  Its value may be silently ignored in the future.
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.821 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
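[Note on the deprecation warning above: the log itself names ``live_migration_scheme`` and ``live_migration_inbound_addr`` as the replacements for ``live_migration_uri``. A minimal nova.conf sketch of the equivalent settings, assuming the same TLS scheme currently encoded in the URI ``qemu+tls://%s/system``; the inbound address below is a placeholder, not a value taken from this log:

    [libvirt]
    # "tls" reproduces the qemu+tls:// scheme of the deprecated URI
    live_migration_scheme = tls
    # placeholder: the address or hostname of this compute host on the migration network
    live_migration_inbound_addr = <compute-host-address>
]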
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.821 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.821 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.821 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.821 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.821 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.822 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.822 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.822 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.822 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.822 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.822 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.822 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.823 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.823 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.823 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.823 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.823 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.823 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] libvirt.rbd_secret_uuid        = 166d0489-2ae7-59eb-961c-c1b5cda4b45a log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.823 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.824 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.824 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.824 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.824 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.824 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.824 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.824 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.825 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.825 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.825 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.825 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.825 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.825 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.825 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.826 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.826 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.826 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.826 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.826 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.826 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.826 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.827 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.827 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.827 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.827 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.827 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.827 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.827 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.828 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.828 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.828 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.828 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.828 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.828 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.828 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.828 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.829 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.829 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.829 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.829 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.829 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.829 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.829 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.830 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.830 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.830 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.830 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.830 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.830 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.830 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.831 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.831 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.831 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.831 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.831 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.831 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.831 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.831 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.832 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.832 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.832 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.832 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.832 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.832 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.832 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.833 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.833 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.833 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.833 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.833 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.833 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.833 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.834 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.834 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.834 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.834 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.834 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.834 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.834 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.834 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.835 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.835 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.835 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.835 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.835 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.835 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.835 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.836 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.836 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.836 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.836 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.836 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.836 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.836 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.836 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.837 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.837 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.837 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.837 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.837 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.837 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.837 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.838 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.838 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.838 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.838 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.838 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.838 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.838 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.839 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.839 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.839 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.839 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.839 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.839 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.839 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.839 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.840 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.840 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.840 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.840 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.840 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.840 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.841 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.841 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.841 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.841 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.841 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.841 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.841 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.842 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.842 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.842 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.842 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.842 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.842 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.842 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.843 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.843 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.843 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.843 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.843 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.843 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.843 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.844 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.844 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.844 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.844 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.844 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.844 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.844 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.844 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.845 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.845 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.845 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.845 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.845 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.845 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.845 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.846 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.846 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.846 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.846 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.846 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.846 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.847 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.847 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.847 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.847 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.847 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.847 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.847 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.848 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.848 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.848 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.848 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.848 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.848 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.848 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.849 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.849 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.849 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.849 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.849 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.849 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.849 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.850 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.850 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.850 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.850 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.850 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.850 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.850 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.850 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.851 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.851 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.851 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.851 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.851 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.851 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.851 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.852 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.852 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.852 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.852 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.852 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.852 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.852 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.852 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.853 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.853 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.853 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.853 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.853 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.853 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.853 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.854 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.854 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.854 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.854 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.854 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.854 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.854 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.854 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.855 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.855 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.855 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.855 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.855 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.855 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.856 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.856 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.856 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.856 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.856 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.856 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.857 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.857 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.857 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.857 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.857 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.857 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.857 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.857 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.858 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.858 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.858 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.858 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.858 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.858 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.858 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.859 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.859 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.859 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.859 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.859 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.859 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.859 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.860 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.860 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.860 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.860 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.860 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.860 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.861 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.861 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.861 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.861 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.861 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.861 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.861 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.862 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.862 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.862 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.862 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.862 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.862 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.862 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.863 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.863 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.863 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.863 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.863 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.863 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.863 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.864 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.864 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.864 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.864 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.864 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.864 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.864 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.865 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.865 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.865 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.865 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.865 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.865 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.865 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.865 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.866 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.866 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.866 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.866 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.866 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.866 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.866 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.867 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.867 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.867 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.867 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.867 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.867 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.867 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.868 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.868 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.868 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.868 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.868 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.868 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.868 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.869 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.869 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.869 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.869 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.869 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.869 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.869 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.870 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.870 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.870 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.870 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.870 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.870 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.870 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.870 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.871 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.871 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.871 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.871 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.871 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.871 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.871 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.871 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.872 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.872 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.872 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.872 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.872 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.872 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.872 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.873 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.873 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.873 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.873 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.873 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.873 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.873 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.874 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.874 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.874 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.874 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.874 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.874 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.874 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.875 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.875 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.875 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.875 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.875 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.875 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.875 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.875 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.876 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.876 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.876 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.876 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.876 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.876 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.876 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.877 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.877 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.877 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.877 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.877 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.877 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.877 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.878 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.878 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.878 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.878 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.878 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.878 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.878 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.878 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.879 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.879 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.879 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.879 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.879 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.879 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.880 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.880 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.880 2 DEBUG oslo_service.service [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.881 2 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.897 2 DEBUG nova.virt.libvirt.host [None req-e068206c-73fa-4cc8-9648-10877f36cf47 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.897 2 DEBUG nova.virt.libvirt.host [None req-e068206c-73fa-4cc8-9648-10877f36cf47 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.898 2 DEBUG nova.virt.libvirt.host [None req-e068206c-73fa-4cc8-9648-10877f36cf47 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.898 2 DEBUG nova.virt.libvirt.host [None req-e068206c-73fa-4cc8-9648-10877f36cf47 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Oct 11 04:46:05 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Oct 11 04:46:05 compute-0 systemd[1]: Started libvirt QEMU daemon.
Oct 11 04:46:05 compute-0 python3.9[259100]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.996 2 DEBUG nova.virt.libvirt.host [None req-e068206c-73fa-4cc8-9648-10877f36cf47 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f8798d10e50> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Oct 11 04:46:05 compute-0 nova_compute[258482]: 2025-10-11 04:46:05.999 2 DEBUG nova.virt.libvirt.host [None req-e068206c-73fa-4cc8-9648-10877f36cf47 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f8798d10e50> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Oct 11 04:46:05 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 11 04:46:06 compute-0 nova_compute[258482]: 2025-10-11 04:46:06.000 2 INFO nova.virt.libvirt.driver [None req-e068206c-73fa-4cc8-9648-10877f36cf47 - - - - - -] Connection event '1' reason 'None'
Oct 11 04:46:06 compute-0 nova_compute[258482]: 2025-10-11 04:46:06.015 2 WARNING nova.virt.libvirt.driver [None req-e068206c-73fa-4cc8-9648-10877f36cf47 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Oct 11 04:46:06 compute-0 nova_compute[258482]: 2025-10-11 04:46:06.016 2 DEBUG nova.virt.libvirt.volume.mount [None req-e068206c-73fa-4cc8-9648-10877f36cf47 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Oct 11 04:46:06 compute-0 sudo[259098]: pam_unix(sudo:session): session closed for user root
Oct 11 04:46:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:46:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:46:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:46:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:46:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:46:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:46:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:46:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:46:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:46:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:46:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:46:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:46:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 11 04:46:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:46:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:46:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:46:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:46:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:46:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:46:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:46:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:46:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:46:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 04:46:06 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v710: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:46:06 compute-0 sudo[259332]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgvllfheewmfjavgknlwdxwptjdbfbfv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157966.284258-1849-130728209558291/AnsiballZ_systemd.py'
Oct 11 04:46:06 compute-0 sudo[259332]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:46:07 compute-0 python3.9[259334]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 11 04:46:07 compute-0 nova_compute[258482]: 2025-10-11 04:46:07.076 2 INFO nova.virt.libvirt.host [None req-e068206c-73fa-4cc8-9648-10877f36cf47 - - - - - -] Libvirt host capabilities <capabilities>
Oct 11 04:46:07 compute-0 nova_compute[258482]: 
Oct 11 04:46:07 compute-0 nova_compute[258482]:   <host>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <uuid>53cb9e9d-2668-4473-9499-ec86a0f02be2</uuid>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <cpu>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <arch>x86_64</arch>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model>EPYC-Rome-v4</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <vendor>AMD</vendor>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <microcode version='16777317'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <signature family='23' model='49' stepping='0'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <maxphysaddr mode='emulate' bits='40'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature name='x2apic'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature name='tsc-deadline'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature name='osxsave'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature name='hypervisor'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature name='tsc_adjust'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature name='spec-ctrl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature name='stibp'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature name='arch-capabilities'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature name='ssbd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature name='cmp_legacy'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature name='topoext'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature name='virt-ssbd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature name='lbrv'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature name='tsc-scale'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature name='vmcb-clean'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature name='pause-filter'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature name='pfthreshold'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature name='svme-addr-chk'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature name='rdctl-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature name='skip-l1dfl-vmentry'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature name='mds-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature name='pschange-mc-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <pages unit='KiB' size='4'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <pages unit='KiB' size='2048'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <pages unit='KiB' size='1048576'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     </cpu>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <power_management>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <suspend_mem/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     </power_management>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <iommu support='no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <migration_features>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <live/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <uri_transports>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <uri_transport>tcp</uri_transport>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <uri_transport>rdma</uri_transport>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </uri_transports>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     </migration_features>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <topology>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <cells num='1'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <cell id='0'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:           <memory unit='KiB'>7864344</memory>
Oct 11 04:46:07 compute-0 nova_compute[258482]:           <pages unit='KiB' size='4'>1966086</pages>
Oct 11 04:46:07 compute-0 nova_compute[258482]:           <pages unit='KiB' size='2048'>0</pages>
Oct 11 04:46:07 compute-0 nova_compute[258482]:           <pages unit='KiB' size='1048576'>0</pages>
Oct 11 04:46:07 compute-0 nova_compute[258482]:           <distances>
Oct 11 04:46:07 compute-0 nova_compute[258482]:             <sibling id='0' value='10'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:           </distances>
Oct 11 04:46:07 compute-0 nova_compute[258482]:           <cpus num='8'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:           </cpus>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         </cell>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </cells>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     </topology>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <cache>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     </cache>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <secmodel>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model>selinux</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <doi>0</doi>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     </secmodel>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <secmodel>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model>dac</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <doi>0</doi>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <baselabel type='kvm'>+107:+107</baselabel>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <baselabel type='qemu'>+107:+107</baselabel>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     </secmodel>
Oct 11 04:46:07 compute-0 nova_compute[258482]:   </host>
Oct 11 04:46:07 compute-0 nova_compute[258482]: 
Oct 11 04:46:07 compute-0 nova_compute[258482]:   <guest>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <os_type>hvm</os_type>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <arch name='i686'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <wordsize>32</wordsize>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <domain type='qemu'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <domain type='kvm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     </arch>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <features>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <pae/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <nonpae/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <acpi default='on' toggle='yes'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <apic default='on' toggle='no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <cpuselection/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <deviceboot/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <disksnapshot default='on' toggle='no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <externalSnapshot/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     </features>
Oct 11 04:46:07 compute-0 nova_compute[258482]:   </guest>
Oct 11 04:46:07 compute-0 nova_compute[258482]: 
Oct 11 04:46:07 compute-0 nova_compute[258482]:   <guest>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <os_type>hvm</os_type>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <arch name='x86_64'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <wordsize>64</wordsize>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <domain type='qemu'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <domain type='kvm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     </arch>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <features>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <acpi default='on' toggle='yes'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <apic default='on' toggle='no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <cpuselection/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <deviceboot/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <disksnapshot default='on' toggle='no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <externalSnapshot/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     </features>
Oct 11 04:46:07 compute-0 nova_compute[258482]:   </guest>
Oct 11 04:46:07 compute-0 nova_compute[258482]: 
Oct 11 04:46:07 compute-0 nova_compute[258482]: </capabilities>
Oct 11 04:46:07 compute-0 nova_compute[258482]: 
Oct 11 04:46:07 compute-0 systemd[1]: Stopping nova_compute container...
Oct 11 04:46:07 compute-0 nova_compute[258482]: 2025-10-11 04:46:07.087 2 DEBUG nova.virt.libvirt.host [None req-e068206c-73fa-4cc8-9648-10877f36cf47 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Oct 11 04:46:07 compute-0 nova_compute[258482]: 2025-10-11 04:46:07.113 2 DEBUG nova.virt.libvirt.host [None req-e068206c-73fa-4cc8-9648-10877f36cf47 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Oct 11 04:46:07 compute-0 nova_compute[258482]: <domainCapabilities>
Oct 11 04:46:07 compute-0 nova_compute[258482]:   <path>/usr/libexec/qemu-kvm</path>
Oct 11 04:46:07 compute-0 nova_compute[258482]:   <domain>kvm</domain>
Oct 11 04:46:07 compute-0 nova_compute[258482]:   <machine>pc-i440fx-rhel7.6.0</machine>
Oct 11 04:46:07 compute-0 nova_compute[258482]:   <arch>i686</arch>
Oct 11 04:46:07 compute-0 nova_compute[258482]:   <vcpu max='240'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:   <iothreads supported='yes'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:   <os supported='yes'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <enum name='firmware'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <loader supported='yes'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <enum name='type'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>rom</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>pflash</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </enum>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <enum name='readonly'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>yes</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>no</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </enum>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <enum name='secure'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>no</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </enum>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     </loader>
Oct 11 04:46:07 compute-0 nova_compute[258482]:   </os>
Oct 11 04:46:07 compute-0 nova_compute[258482]:   <cpu>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <mode name='host-passthrough' supported='yes'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <enum name='hostPassthroughMigratable'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>on</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>off</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </enum>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     </mode>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <mode name='maximum' supported='yes'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <enum name='maximumMigratable'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>on</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>off</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </enum>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     </mode>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <mode name='host-model' supported='yes'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <vendor>AMD</vendor>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature policy='require' name='x2apic'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature policy='require' name='tsc-deadline'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature policy='require' name='hypervisor'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature policy='require' name='tsc_adjust'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature policy='require' name='spec-ctrl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature policy='require' name='stibp'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature policy='require' name='arch-capabilities'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature policy='require' name='ssbd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature policy='require' name='cmp_legacy'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature policy='require' name='overflow-recov'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature policy='require' name='succor'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature policy='require' name='ibrs'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature policy='require' name='amd-ssbd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature policy='require' name='virt-ssbd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature policy='require' name='lbrv'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature policy='require' name='tsc-scale'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature policy='require' name='vmcb-clean'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature policy='require' name='flushbyasid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature policy='require' name='pause-filter'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature policy='require' name='pfthreshold'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature policy='require' name='svme-addr-chk'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature policy='require' name='rdctl-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature policy='require' name='mds-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature policy='require' name='pschange-mc-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature policy='require' name='gds-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature policy='require' name='rfds-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature policy='disable' name='xsaves'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     </mode>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <mode name='custom' supported='yes'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Broadwell'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='hle'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='rtm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Broadwell-IBRS'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='hle'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='rtm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Broadwell-noTSX'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Broadwell-v1'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='hle'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='rtm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Broadwell-v2'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Broadwell-v3'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='hle'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='rtm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Broadwell-v4'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Cascadelake-Server'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='hle'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='rtm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='ibrs-all'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Cascadelake-Server-v1'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='hle'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='rtm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Cascadelake-Server-v2'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='hle'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='ibrs-all'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='rtm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Cascadelake-Server-v3'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='ibrs-all'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Cascadelake-Server-v4'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='ibrs-all'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Cascadelake-Server-v5'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='ibrs-all'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xsaves'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Cooperlake'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-bf16'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='hle'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='ibrs-all'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='rtm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='taa-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Cooperlake-v1'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-bf16'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='hle'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='ibrs-all'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='rtm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='taa-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Cooperlake-v2'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-bf16'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='hle'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='ibrs-all'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='rtm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='taa-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xsaves'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Denverton'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='mpx'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Denverton-v1'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='mpx'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Denverton-v2'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Denverton-v3'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xsaves'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Dhyana-v2'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xsaves'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='EPYC-Genoa'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='amd-psfd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='auto-ibrs'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-bf16'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bitalg'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512ifma'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='gfni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='la57'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='no-nested-data-bp'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='null-sel-clr-base'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='stibp-always-on'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vaes'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xsaves'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='EPYC-Genoa-v1'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='amd-psfd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='auto-ibrs'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-bf16'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bitalg'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512ifma'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='gfni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='la57'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='no-nested-data-bp'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='null-sel-clr-base'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='stibp-always-on'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vaes'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xsaves'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='EPYC-Milan'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xsaves'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='EPYC-Milan-v1'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xsaves'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='EPYC-Milan-v2'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='amd-psfd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='no-nested-data-bp'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='null-sel-clr-base'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='stibp-always-on'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vaes'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xsaves'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='EPYC-Rome'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xsaves'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='EPYC-Rome-v1'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xsaves'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='EPYC-Rome-v2'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xsaves'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='EPYC-Rome-v3'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xsaves'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='EPYC-v3'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xsaves'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='EPYC-v4'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xsaves'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='GraniteRapids'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='amx-bf16'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='amx-fp16'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='amx-int8'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='amx-tile'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx-vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-bf16'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-fp16'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bitalg'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512ifma'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='bus-lock-detect'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fbsdp-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrc'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrs'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fzrm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='gfni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='hle'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='ibrs-all'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='la57'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='mcdt-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pbrsb-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='prefetchiti'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='psdp-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='rtm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='sbdr-ssdp-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='serialize'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='taa-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='tsx-ldtrk'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vaes'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xfd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xsaves'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='GraniteRapids-v1'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='amx-bf16'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='amx-fp16'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='amx-int8'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='amx-tile'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx-vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-bf16'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-fp16'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bitalg'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512ifma'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='bus-lock-detect'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fbsdp-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrc'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrs'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fzrm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='gfni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='hle'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='ibrs-all'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='la57'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='mcdt-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pbrsb-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='prefetchiti'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='psdp-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='rtm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='sbdr-ssdp-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='serialize'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='taa-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='tsx-ldtrk'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vaes'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xfd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xsaves'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='GraniteRapids-v2'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='amx-bf16'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='amx-fp16'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='amx-int8'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='amx-tile'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx-vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx10'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx10-128'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx10-256'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx10-512'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-bf16'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-fp16'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bitalg'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512ifma'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='bus-lock-detect'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='cldemote'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fbsdp-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrc'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrs'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fzrm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='gfni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='hle'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='ibrs-all'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='la57'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='mcdt-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='movdir64b'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='movdiri'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pbrsb-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='prefetchiti'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='psdp-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='rtm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='sbdr-ssdp-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='serialize'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='ss'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='taa-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='tsx-ldtrk'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vaes'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xfd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xsaves'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Haswell'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='hle'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='rtm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Haswell-IBRS'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='hle'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='rtm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Haswell-noTSX'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Haswell-v1'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='hle'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='rtm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Haswell-v2'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Haswell-v3'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='hle'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='rtm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Haswell-v4'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Icelake-Server'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bitalg'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='gfni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='hle'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='la57'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='rtm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vaes'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Icelake-Server-noTSX'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bitalg'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='gfni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='la57'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vaes'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Icelake-Server-v1'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bitalg'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='gfni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='hle'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='la57'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='rtm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vaes'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Icelake-Server-v2'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bitalg'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='gfni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='la57'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vaes'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Icelake-Server-v3'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bitalg'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='gfni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='ibrs-all'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='la57'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='taa-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vaes'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Icelake-Server-v4'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bitalg'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512ifma'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='gfni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='ibrs-all'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='la57'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='taa-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vaes'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Icelake-Server-v5'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bitalg'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512ifma'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='gfni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='ibrs-all'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='la57'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='taa-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vaes'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xsaves'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Icelake-Server-v6'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bitalg'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512ifma'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='gfni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='ibrs-all'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='la57'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='taa-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vaes'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xsaves'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Icelake-Server-v7'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bitalg'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512ifma'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='gfni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='hle'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='ibrs-all'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='la57'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='rtm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='taa-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vaes'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xsaves'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='IvyBridge'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='IvyBridge-IBRS'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='IvyBridge-v1'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='IvyBridge-v2'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='KnightsMill'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-4fmaps'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-4vnniw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512er'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512pf'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='ss'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='KnightsMill-v1'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-4fmaps'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-4vnniw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512er'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512pf'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='ss'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Opteron_G4'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fma4'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xop'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Opteron_G4-v1'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fma4'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xop'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Opteron_G5'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fma4'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='tbm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xop'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Opteron_G5-v1'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fma4'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='tbm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xop'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='SapphireRapids'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='amx-bf16'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='amx-int8'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='amx-tile'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx-vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-bf16'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-fp16'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bitalg'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512ifma'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='bus-lock-detect'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrc'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrs'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fzrm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='gfni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='hle'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='ibrs-all'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='la57'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='rtm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='serialize'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='taa-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='tsx-ldtrk'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vaes'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xfd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xsaves'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='SapphireRapids-v1'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='amx-bf16'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='amx-int8'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='amx-tile'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx-vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-bf16'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-fp16'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bitalg'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512ifma'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='bus-lock-detect'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrc'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrs'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fzrm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='gfni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='hle'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='ibrs-all'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='la57'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='rtm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='serialize'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='taa-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='tsx-ldtrk'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vaes'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xfd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xsaves'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='SapphireRapids-v2'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='amx-bf16'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='amx-int8'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='amx-tile'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx-vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-bf16'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-fp16'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bitalg'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512ifma'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='bus-lock-detect'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fbsdp-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrc'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrs'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fzrm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='gfni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='hle'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='ibrs-all'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='la57'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='psdp-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='rtm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='sbdr-ssdp-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='serialize'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='taa-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='tsx-ldtrk'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vaes'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xfd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xsaves'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='SapphireRapids-v3'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='amx-bf16'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='amx-int8'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='amx-tile'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx-vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-bf16'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-fp16'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bitalg'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512ifma'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='bus-lock-detect'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='cldemote'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fbsdp-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrc'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrs'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fzrm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='gfni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='hle'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='ibrs-all'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='la57'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='movdir64b'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='movdiri'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='psdp-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='rtm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='sbdr-ssdp-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='serialize'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='ss'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='taa-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='tsx-ldtrk'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vaes'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xfd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xsaves'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='SierraForest'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx-ifma'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx-ne-convert'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx-vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx-vnni-int8'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='bus-lock-detect'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='cmpccxadd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fbsdp-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrs'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='gfni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='ibrs-all'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='mcdt-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pbrsb-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='psdp-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='sbdr-ssdp-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='serialize'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vaes'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xsaves'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='SierraForest-v1'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx-ifma'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx-ne-convert'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx-vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx-vnni-int8'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='bus-lock-detect'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='cmpccxadd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fbsdp-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrs'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='gfni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='ibrs-all'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='mcdt-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pbrsb-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='psdp-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='sbdr-ssdp-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='serialize'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vaes'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xsaves'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Skylake-Client'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='hle'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='rtm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Skylake-Client-IBRS'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='hle'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='rtm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Skylake-Client-v1'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='hle'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='rtm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Skylake-Client-v2'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='hle'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='rtm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Skylake-Client-v3'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Skylake-Client-v4'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xsaves'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Skylake-Server'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='hle'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='rtm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Skylake-Server-IBRS'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='hle'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='rtm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Skylake-Server-v1'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='hle'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='rtm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Skylake-Server-v2'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='hle'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='rtm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Skylake-Server-v3'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Skylake-Server-v4'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Skylake-Server-v5'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xsaves'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Snowridge'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='cldemote'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='core-capability'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='gfni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='movdir64b'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='movdiri'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='mpx'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='split-lock-detect'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Snowridge-v1'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='cldemote'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='core-capability'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='gfni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='movdir64b'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='movdiri'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='mpx'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='split-lock-detect'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Snowridge-v2'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='cldemote'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='core-capability'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='gfni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='movdir64b'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='movdiri'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='split-lock-detect'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Snowridge-v3'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='cldemote'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='core-capability'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='gfni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='movdir64b'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='movdiri'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='split-lock-detect'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xsaves'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Snowridge-v4'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='cldemote'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='gfni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='movdir64b'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='movdiri'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xsaves'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='athlon'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='3dnow'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='3dnowext'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='athlon-v1'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='3dnow'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='3dnowext'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='core2duo'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='ss'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='core2duo-v1'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='ss'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='coreduo'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='ss'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='coreduo-v1'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='ss'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='n270'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='ss'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='n270-v1'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='ss'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='phenom'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='3dnow'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='3dnowext'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='phenom-v1'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='3dnow'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='3dnowext'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     </mode>
Oct 11 04:46:07 compute-0 nova_compute[258482]:   </cpu>
Oct 11 04:46:07 compute-0 nova_compute[258482]:   <memoryBacking supported='yes'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <enum name='sourceType'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <value>file</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <value>anonymous</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <value>memfd</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     </enum>
Oct 11 04:46:07 compute-0 nova_compute[258482]:   </memoryBacking>
Oct 11 04:46:07 compute-0 nova_compute[258482]:   <devices>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <disk supported='yes'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <enum name='diskDevice'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>disk</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>cdrom</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>floppy</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>lun</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </enum>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <enum name='bus'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>ide</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>fdc</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>scsi</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>virtio</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>usb</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>sata</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </enum>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <enum name='model'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>virtio</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>virtio-transitional</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>virtio-non-transitional</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </enum>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     </disk>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <graphics supported='yes'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <enum name='type'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>vnc</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>egl-headless</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>dbus</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </enum>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     </graphics>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <video supported='yes'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <enum name='modelType'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>vga</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>cirrus</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>virtio</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>none</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>bochs</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>ramfb</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </enum>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     </video>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <hostdev supported='yes'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <enum name='mode'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>subsystem</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </enum>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <enum name='startupPolicy'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>default</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>mandatory</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>requisite</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>optional</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </enum>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <enum name='subsysType'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>usb</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>pci</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>scsi</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </enum>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <enum name='capsType'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <enum name='pciBackend'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     </hostdev>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <rng supported='yes'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <enum name='model'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>virtio</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>virtio-transitional</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>virtio-non-transitional</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </enum>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <enum name='backendModel'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>random</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>egd</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>builtin</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </enum>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     </rng>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <filesystem supported='yes'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <enum name='driverType'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>path</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>handle</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>virtiofs</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </enum>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     </filesystem>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <tpm supported='yes'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <enum name='model'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>tpm-tis</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>tpm-crb</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </enum>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <enum name='backendModel'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>emulator</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>external</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </enum>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <enum name='backendVersion'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>2.0</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </enum>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     </tpm>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <redirdev supported='yes'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <enum name='bus'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>usb</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </enum>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     </redirdev>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <channel supported='yes'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <enum name='type'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>pty</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>unix</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </enum>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     </channel>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <crypto supported='yes'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <enum name='model'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <enum name='type'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>qemu</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </enum>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <enum name='backendModel'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>builtin</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </enum>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     </crypto>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <interface supported='yes'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <enum name='backendType'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>default</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>passt</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </enum>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     </interface>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <panic supported='yes'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <enum name='model'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>isa</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>hyperv</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </enum>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     </panic>
Oct 11 04:46:07 compute-0 nova_compute[258482]:   </devices>
Oct 11 04:46:07 compute-0 nova_compute[258482]:   <features>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <gic supported='no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <vmcoreinfo supported='yes'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <genid supported='yes'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <backingStoreInput supported='yes'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <backup supported='yes'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <async-teardown supported='yes'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <ps2 supported='yes'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <sev supported='no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <sgx supported='no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <hyperv supported='yes'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <enum name='features'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>relaxed</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>vapic</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>spinlocks</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>vpindex</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>runtime</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>synic</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>stimer</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>reset</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>vendor_id</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>frequencies</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>reenlightenment</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>tlbflush</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>ipi</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>avic</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>emsr_bitmap</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>xmm_input</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </enum>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     </hyperv>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <launchSecurity supported='no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:   </features>
Oct 11 04:46:07 compute-0 nova_compute[258482]: </domainCapabilities>
Oct 11 04:46:07 compute-0 nova_compute[258482]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
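[Note] The domainCapabilities XML logged above (for /usr/libexec/qemu-kvm, arch=x86_64, machine_type=q35) and the i686 dump that follows are what nova's _get_domain_capabilities helper in host.py retrieves from libvirt and prints at DEBUG level. As a minimal sketch of how the same XML can be fetched and the "usable" CPU models listed outside of Nova (the connection URI and emulator path below are assumptions copied from this host's log, not values confirmed elsewhere):

    import libvirt
    import xml.etree.ElementTree as ET

    # Assumed values taken from the log record above; adjust for another host.
    EMULATOR = '/usr/libexec/qemu-kvm'
    ARCH = 'x86_64'
    MACHINE = 'q35'
    VIRTTYPE = 'kvm'

    # Read-only connection to the local system hypervisor.
    conn = libvirt.openReadOnly('qemu:///system')
    try:
        # virConnectGetDomainCapabilities: returns XML like the dump logged above.
        caps_xml = conn.getDomainCapabilities(EMULATOR, ARCH, MACHINE, VIRTTYPE, 0)
    finally:
        conn.close()

    # Mirror the <model usable='yes'> entries under <mode name='custom'>.
    root = ET.fromstring(caps_xml)
    usable = [m.text for m in root.findall(".//cpu/mode[@name='custom']/model")
              if m.get('usable') == 'yes']
    print('\n'.join(sorted(usable)))

The equivalent one-liner from the shell would be something like: virsh -r domcapabilities --emulatorbin /usr/libexec/qemu-kvm --arch x86_64 --machine q35 --virttype kvm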
Oct 11 04:46:07 compute-0 nova_compute[258482]: 2025-10-11 04:46:07.118 2 DEBUG nova.virt.libvirt.host [None req-e068206c-73fa-4cc8-9648-10877f36cf47 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Oct 11 04:46:07 compute-0 nova_compute[258482]: <domainCapabilities>
Oct 11 04:46:07 compute-0 nova_compute[258482]:   <path>/usr/libexec/qemu-kvm</path>
Oct 11 04:46:07 compute-0 nova_compute[258482]:   <domain>kvm</domain>
Oct 11 04:46:07 compute-0 nova_compute[258482]:   <machine>pc-q35-rhel9.6.0</machine>
Oct 11 04:46:07 compute-0 nova_compute[258482]:   <arch>i686</arch>
Oct 11 04:46:07 compute-0 nova_compute[258482]:   <vcpu max='4096'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:   <iothreads supported='yes'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:   <os supported='yes'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <enum name='firmware'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <loader supported='yes'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <enum name='type'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>rom</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>pflash</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </enum>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <enum name='readonly'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>yes</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>no</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </enum>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <enum name='secure'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>no</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </enum>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     </loader>
Oct 11 04:46:07 compute-0 nova_compute[258482]:   </os>
Oct 11 04:46:07 compute-0 nova_compute[258482]:   <cpu>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <mode name='host-passthrough' supported='yes'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <enum name='hostPassthroughMigratable'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>on</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>off</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </enum>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     </mode>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <mode name='maximum' supported='yes'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <enum name='maximumMigratable'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>on</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>off</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </enum>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     </mode>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <mode name='host-model' supported='yes'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <vendor>AMD</vendor>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature policy='require' name='x2apic'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature policy='require' name='tsc-deadline'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature policy='require' name='hypervisor'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature policy='require' name='tsc_adjust'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature policy='require' name='spec-ctrl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature policy='require' name='stibp'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature policy='require' name='arch-capabilities'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature policy='require' name='ssbd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature policy='require' name='cmp_legacy'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature policy='require' name='overflow-recov'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature policy='require' name='succor'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature policy='require' name='ibrs'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature policy='require' name='amd-ssbd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature policy='require' name='virt-ssbd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature policy='require' name='lbrv'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature policy='require' name='tsc-scale'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature policy='require' name='vmcb-clean'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature policy='require' name='flushbyasid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature policy='require' name='pause-filter'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature policy='require' name='pfthreshold'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature policy='require' name='svme-addr-chk'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature policy='require' name='rdctl-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature policy='require' name='mds-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature policy='require' name='pschange-mc-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature policy='require' name='gds-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature policy='require' name='rfds-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature policy='disable' name='xsaves'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     </mode>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <mode name='custom' supported='yes'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Broadwell'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='hle'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='rtm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Broadwell-IBRS'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='hle'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='rtm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Broadwell-noTSX'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Broadwell-v1'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='hle'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='rtm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Broadwell-v2'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Broadwell-v3'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='hle'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='rtm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Broadwell-v4'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Cascadelake-Server'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='hle'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='rtm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='ibrs-all'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Cascadelake-Server-v1'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='hle'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='rtm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Cascadelake-Server-v2'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='hle'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='ibrs-all'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='rtm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Cascadelake-Server-v3'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='ibrs-all'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Cascadelake-Server-v4'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='ibrs-all'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Cascadelake-Server-v5'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='ibrs-all'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xsaves'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Cooperlake'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-bf16'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='hle'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='ibrs-all'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='rtm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='taa-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Cooperlake-v1'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-bf16'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='hle'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='ibrs-all'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='rtm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='taa-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Cooperlake-v2'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-bf16'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='hle'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='ibrs-all'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='rtm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='taa-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xsaves'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Denverton'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='mpx'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Denverton-v1'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='mpx'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Denverton-v2'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Denverton-v3'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xsaves'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Dhyana-v2'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xsaves'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='EPYC-Genoa'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='amd-psfd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='auto-ibrs'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-bf16'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bitalg'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512ifma'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='gfni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='la57'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='no-nested-data-bp'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='null-sel-clr-base'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='stibp-always-on'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vaes'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xsaves'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='EPYC-Genoa-v1'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='amd-psfd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='auto-ibrs'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-bf16'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bitalg'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512ifma'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='gfni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='la57'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='no-nested-data-bp'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='null-sel-clr-base'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='stibp-always-on'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vaes'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xsaves'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='EPYC-Milan'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xsaves'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='EPYC-Milan-v1'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xsaves'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='EPYC-Milan-v2'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='amd-psfd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='no-nested-data-bp'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='null-sel-clr-base'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='stibp-always-on'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vaes'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xsaves'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='EPYC-Rome'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xsaves'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='EPYC-Rome-v1'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xsaves'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='EPYC-Rome-v2'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xsaves'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='EPYC-Rome-v3'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xsaves'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='EPYC-v3'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xsaves'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='EPYC-v4'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xsaves'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='GraniteRapids'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='amx-bf16'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='amx-fp16'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='amx-int8'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='amx-tile'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx-vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-bf16'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-fp16'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bitalg'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512ifma'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='bus-lock-detect'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fbsdp-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrc'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrs'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fzrm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='gfni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='hle'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='ibrs-all'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='la57'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='mcdt-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pbrsb-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='prefetchiti'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='psdp-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='rtm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='sbdr-ssdp-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='serialize'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='taa-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='tsx-ldtrk'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vaes'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xfd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xsaves'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='GraniteRapids-v1'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='amx-bf16'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='amx-fp16'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='amx-int8'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='amx-tile'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx-vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-bf16'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-fp16'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bitalg'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512ifma'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='bus-lock-detect'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fbsdp-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrc'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrs'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fzrm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='gfni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='hle'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='ibrs-all'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='la57'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='mcdt-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pbrsb-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='prefetchiti'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='psdp-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='rtm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='sbdr-ssdp-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='serialize'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='taa-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='tsx-ldtrk'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vaes'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xfd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xsaves'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='GraniteRapids-v2'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='amx-bf16'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='amx-fp16'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='amx-int8'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='amx-tile'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx-vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx10'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx10-128'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx10-256'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx10-512'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-bf16'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-fp16'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bitalg'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512ifma'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='bus-lock-detect'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='cldemote'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fbsdp-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrc'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrs'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fzrm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='gfni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='hle'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='ibrs-all'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='la57'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='mcdt-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='movdir64b'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='movdiri'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pbrsb-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='prefetchiti'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='psdp-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='rtm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='sbdr-ssdp-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='serialize'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='ss'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='taa-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='tsx-ldtrk'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vaes'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xfd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xsaves'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Haswell'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='hle'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='rtm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Haswell-IBRS'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='hle'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='rtm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Haswell-noTSX'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Haswell-v1'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='hle'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='rtm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Haswell-v2'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Haswell-v3'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='hle'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='rtm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Haswell-v4'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Icelake-Server'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bitalg'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='gfni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='hle'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='la57'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='rtm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vaes'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Icelake-Server-noTSX'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bitalg'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='gfni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='la57'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vaes'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Icelake-Server-v1'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bitalg'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='gfni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='hle'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='la57'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='rtm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vaes'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Icelake-Server-v2'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bitalg'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='gfni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='la57'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vaes'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Icelake-Server-v3'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bitalg'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='gfni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='ibrs-all'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='la57'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='taa-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vaes'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Icelake-Server-v4'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bitalg'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512ifma'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='gfni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='ibrs-all'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='la57'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='taa-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vaes'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Icelake-Server-v5'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bitalg'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512ifma'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='gfni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='ibrs-all'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='la57'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='taa-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vaes'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xsaves'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Icelake-Server-v6'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bitalg'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512ifma'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='gfni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='ibrs-all'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='la57'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='taa-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vaes'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xsaves'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Icelake-Server-v7'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bitalg'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512ifma'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='gfni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='hle'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='ibrs-all'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='la57'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='rtm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='taa-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vaes'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xsaves'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='IvyBridge'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='IvyBridge-IBRS'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='IvyBridge-v1'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='IvyBridge-v2'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='KnightsMill'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-4fmaps'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-4vnniw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512er'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512pf'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='ss'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='KnightsMill-v1'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-4fmaps'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-4vnniw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512er'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512pf'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='ss'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Opteron_G4'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fma4'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xop'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Opteron_G4-v1'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fma4'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xop'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Opteron_G5'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fma4'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='tbm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xop'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Opteron_G5-v1'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fma4'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='tbm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xop'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='SapphireRapids'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='amx-bf16'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='amx-int8'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='amx-tile'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx-vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-bf16'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-fp16'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bitalg'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512ifma'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='bus-lock-detect'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrc'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrs'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fzrm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='gfni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='hle'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='ibrs-all'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='la57'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='rtm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='serialize'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='taa-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='tsx-ldtrk'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vaes'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xfd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xsaves'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='SapphireRapids-v1'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='amx-bf16'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='amx-int8'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='amx-tile'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx-vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-bf16'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-fp16'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bitalg'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512ifma'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='bus-lock-detect'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrc'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrs'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fzrm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='gfni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='hle'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='ibrs-all'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='la57'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='rtm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='serialize'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='taa-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='tsx-ldtrk'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vaes'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xfd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xsaves'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='SapphireRapids-v2'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='amx-bf16'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='amx-int8'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='amx-tile'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx-vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-bf16'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-fp16'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bitalg'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512ifma'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='bus-lock-detect'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fbsdp-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrc'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrs'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fzrm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='gfni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='hle'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='ibrs-all'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='la57'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='psdp-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='rtm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='sbdr-ssdp-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='serialize'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='taa-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='tsx-ldtrk'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vaes'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xfd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xsaves'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='SapphireRapids-v3'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='amx-bf16'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='amx-int8'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='amx-tile'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx-vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-bf16'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-fp16'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bitalg'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512ifma'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='bus-lock-detect'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='cldemote'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fbsdp-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrc'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrs'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fzrm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='gfni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='hle'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='ibrs-all'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='la57'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='movdir64b'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='movdiri'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='psdp-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='rtm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='sbdr-ssdp-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='serialize'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='ss'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='taa-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='tsx-ldtrk'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vaes'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xfd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xsaves'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='SierraForest'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx-ifma'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx-ne-convert'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx-vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx-vnni-int8'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='bus-lock-detect'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='cmpccxadd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fbsdp-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrs'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='gfni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='ibrs-all'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='mcdt-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pbrsb-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='psdp-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='sbdr-ssdp-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='serialize'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vaes'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xsaves'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='SierraForest-v1'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx-ifma'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx-ne-convert'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx-vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx-vnni-int8'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='bus-lock-detect'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='cmpccxadd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fbsdp-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrs'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='gfni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='ibrs-all'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='mcdt-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pbrsb-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='psdp-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='sbdr-ssdp-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='serialize'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vaes'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xsaves'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Skylake-Client'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='hle'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='rtm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Skylake-Client-IBRS'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='hle'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='rtm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Skylake-Client-v1'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='hle'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='rtm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Skylake-Client-v2'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='hle'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='rtm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Skylake-Client-v3'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Skylake-Client-v4'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xsaves'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Skylake-Server'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='hle'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='rtm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Skylake-Server-IBRS'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='hle'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='rtm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Skylake-Server-v1'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='hle'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='rtm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Skylake-Server-v2'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='hle'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='rtm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Skylake-Server-v3'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Skylake-Server-v4'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Skylake-Server-v5'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xsaves'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Snowridge'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='cldemote'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='core-capability'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='gfni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='movdir64b'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='movdiri'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='mpx'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='split-lock-detect'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Snowridge-v1'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='cldemote'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='core-capability'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='gfni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='movdir64b'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='movdiri'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='mpx'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='split-lock-detect'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Snowridge-v2'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='cldemote'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='core-capability'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='gfni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='movdir64b'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='movdiri'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='split-lock-detect'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Snowridge-v3'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='cldemote'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='core-capability'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='gfni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='movdir64b'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='movdiri'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='split-lock-detect'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xsaves'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Snowridge-v4'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='cldemote'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='gfni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='movdir64b'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='movdiri'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xsaves'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='athlon'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='3dnow'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='3dnowext'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='athlon-v1'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='3dnow'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='3dnowext'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='core2duo'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='ss'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='core2duo-v1'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='ss'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='coreduo'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='ss'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='coreduo-v1'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='ss'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='n270'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='ss'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='n270-v1'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='ss'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='phenom'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='3dnow'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='3dnowext'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='phenom-v1'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='3dnow'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='3dnowext'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     </mode>
Oct 11 04:46:07 compute-0 nova_compute[258482]:   </cpu>
Oct 11 04:46:07 compute-0 nova_compute[258482]:   <memoryBacking supported='yes'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <enum name='sourceType'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <value>file</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <value>anonymous</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <value>memfd</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     </enum>
Oct 11 04:46:07 compute-0 nova_compute[258482]:   </memoryBacking>
Oct 11 04:46:07 compute-0 nova_compute[258482]:   <devices>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <disk supported='yes'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <enum name='diskDevice'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>disk</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>cdrom</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>floppy</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>lun</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </enum>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <enum name='bus'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>fdc</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>scsi</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>virtio</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>usb</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>sata</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </enum>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <enum name='model'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>virtio</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>virtio-transitional</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>virtio-non-transitional</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </enum>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     </disk>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <graphics supported='yes'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <enum name='type'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>vnc</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>egl-headless</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>dbus</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </enum>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     </graphics>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <video supported='yes'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <enum name='modelType'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>vga</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>cirrus</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>virtio</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>none</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>bochs</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>ramfb</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </enum>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     </video>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <hostdev supported='yes'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <enum name='mode'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>subsystem</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </enum>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <enum name='startupPolicy'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>default</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>mandatory</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>requisite</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>optional</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </enum>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <enum name='subsysType'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>usb</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>pci</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>scsi</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </enum>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <enum name='capsType'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <enum name='pciBackend'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     </hostdev>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <rng supported='yes'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <enum name='model'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>virtio</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>virtio-transitional</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>virtio-non-transitional</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </enum>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <enum name='backendModel'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>random</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>egd</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>builtin</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </enum>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     </rng>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <filesystem supported='yes'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <enum name='driverType'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>path</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>handle</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>virtiofs</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </enum>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     </filesystem>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <tpm supported='yes'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <enum name='model'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>tpm-tis</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>tpm-crb</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </enum>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <enum name='backendModel'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>emulator</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>external</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </enum>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <enum name='backendVersion'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>2.0</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </enum>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     </tpm>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <redirdev supported='yes'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <enum name='bus'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>usb</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </enum>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     </redirdev>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <channel supported='yes'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <enum name='type'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>pty</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>unix</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </enum>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     </channel>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <crypto supported='yes'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <enum name='model'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <enum name='type'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>qemu</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </enum>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <enum name='backendModel'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>builtin</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </enum>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     </crypto>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <interface supported='yes'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <enum name='backendType'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>default</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>passt</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </enum>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     </interface>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <panic supported='yes'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <enum name='model'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>isa</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>hyperv</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </enum>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     </panic>
Oct 11 04:46:07 compute-0 nova_compute[258482]:   </devices>
Oct 11 04:46:07 compute-0 nova_compute[258482]:   <features>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <gic supported='no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <vmcoreinfo supported='yes'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <genid supported='yes'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <backingStoreInput supported='yes'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <backup supported='yes'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <async-teardown supported='yes'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <ps2 supported='yes'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <sev supported='no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <sgx supported='no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <hyperv supported='yes'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <enum name='features'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>relaxed</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>vapic</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>spinlocks</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>vpindex</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>runtime</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>synic</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>stimer</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>reset</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>vendor_id</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>frequencies</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>reenlightenment</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>tlbflush</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>ipi</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>avic</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>emsr_bitmap</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>xmm_input</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </enum>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     </hyperv>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <launchSecurity supported='no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:   </features>
Oct 11 04:46:07 compute-0 nova_compute[258482]: </domainCapabilities>
Oct 11 04:46:07 compute-0 nova_compute[258482]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 11 04:46:07 compute-0 nova_compute[258482]: 2025-10-11 04:46:07.143 2 DEBUG nova.virt.libvirt.host [None req-e068206c-73fa-4cc8-9648-10877f36cf47 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Oct 11 04:46:07 compute-0 nova_compute[258482]: 2025-10-11 04:46:07.147 2 DEBUG nova.virt.libvirt.host [None req-e068206c-73fa-4cc8-9648-10877f36cf47 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Oct 11 04:46:07 compute-0 nova_compute[258482]: <domainCapabilities>
Oct 11 04:46:07 compute-0 nova_compute[258482]:   <path>/usr/libexec/qemu-kvm</path>
Oct 11 04:46:07 compute-0 nova_compute[258482]:   <domain>kvm</domain>
Oct 11 04:46:07 compute-0 nova_compute[258482]:   <machine>pc-i440fx-rhel7.6.0</machine>
Oct 11 04:46:07 compute-0 nova_compute[258482]:   <arch>x86_64</arch>
Oct 11 04:46:07 compute-0 nova_compute[258482]:   <vcpu max='240'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:   <iothreads supported='yes'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:   <os supported='yes'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <enum name='firmware'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <loader supported='yes'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <enum name='type'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>rom</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>pflash</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </enum>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <enum name='readonly'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>yes</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>no</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </enum>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <enum name='secure'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>no</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </enum>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     </loader>
Oct 11 04:46:07 compute-0 nova_compute[258482]:   </os>
Oct 11 04:46:07 compute-0 nova_compute[258482]:   <cpu>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <mode name='host-passthrough' supported='yes'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <enum name='hostPassthroughMigratable'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>on</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>off</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </enum>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     </mode>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <mode name='maximum' supported='yes'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <enum name='maximumMigratable'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>on</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>off</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </enum>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     </mode>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <mode name='host-model' supported='yes'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <vendor>AMD</vendor>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature policy='require' name='x2apic'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature policy='require' name='tsc-deadline'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature policy='require' name='hypervisor'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature policy='require' name='tsc_adjust'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature policy='require' name='spec-ctrl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature policy='require' name='stibp'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature policy='require' name='arch-capabilities'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature policy='require' name='ssbd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature policy='require' name='cmp_legacy'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature policy='require' name='overflow-recov'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature policy='require' name='succor'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature policy='require' name='ibrs'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature policy='require' name='amd-ssbd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature policy='require' name='virt-ssbd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature policy='require' name='lbrv'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature policy='require' name='tsc-scale'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature policy='require' name='vmcb-clean'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature policy='require' name='flushbyasid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature policy='require' name='pause-filter'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature policy='require' name='pfthreshold'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature policy='require' name='svme-addr-chk'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature policy='require' name='rdctl-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature policy='require' name='mds-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature policy='require' name='pschange-mc-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature policy='require' name='gds-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature policy='require' name='rfds-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <feature policy='disable' name='xsaves'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     </mode>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <mode name='custom' supported='yes'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Broadwell'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='hle'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='rtm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Broadwell-IBRS'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='hle'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='rtm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Broadwell-noTSX'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Broadwell-v1'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='hle'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='rtm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Broadwell-v2'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Broadwell-v3'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='hle'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='rtm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Broadwell-v4'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Cascadelake-Server'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='hle'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='rtm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='ibrs-all'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Cascadelake-Server-v1'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='hle'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='rtm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Cascadelake-Server-v2'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='hle'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='ibrs-all'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='rtm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Cascadelake-Server-v3'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='ibrs-all'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Cascadelake-Server-v4'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='ibrs-all'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Cascadelake-Server-v5'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='ibrs-all'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xsaves'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Cooperlake'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-bf16'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='hle'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='ibrs-all'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='rtm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='taa-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Cooperlake-v1'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-bf16'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='hle'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='ibrs-all'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='rtm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='taa-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Cooperlake-v2'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-bf16'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='hle'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='ibrs-all'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='rtm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='taa-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xsaves'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Denverton'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='mpx'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Denverton-v1'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='mpx'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Denverton-v2'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Denverton-v3'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xsaves'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Dhyana-v2'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xsaves'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='EPYC-Genoa'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='amd-psfd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='auto-ibrs'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-bf16'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bitalg'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512ifma'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='gfni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='la57'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='no-nested-data-bp'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='null-sel-clr-base'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='stibp-always-on'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vaes'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xsaves'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='EPYC-Genoa-v1'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='amd-psfd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='auto-ibrs'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-bf16'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bitalg'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512ifma'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='gfni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='la57'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='no-nested-data-bp'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='null-sel-clr-base'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='stibp-always-on'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vaes'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xsaves'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='EPYC-Milan'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xsaves'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='EPYC-Milan-v1'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xsaves'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='EPYC-Milan-v2'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='amd-psfd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='no-nested-data-bp'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='null-sel-clr-base'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='stibp-always-on'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vaes'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xsaves'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='EPYC-Rome'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xsaves'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='EPYC-Rome-v1'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xsaves'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='EPYC-Rome-v2'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xsaves'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='EPYC-Rome-v3'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xsaves'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='EPYC-v3'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xsaves'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='EPYC-v4'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xsaves'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='GraniteRapids'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='amx-bf16'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='amx-fp16'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='amx-int8'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='amx-tile'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx-vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-bf16'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-fp16'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bitalg'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512ifma'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='bus-lock-detect'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fbsdp-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrc'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrs'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fzrm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='gfni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='hle'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='ibrs-all'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='la57'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='mcdt-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pbrsb-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='prefetchiti'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='psdp-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='rtm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='sbdr-ssdp-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='serialize'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='taa-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='tsx-ldtrk'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vaes'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xfd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xsaves'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='GraniteRapids-v1'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='amx-bf16'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='amx-fp16'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='amx-int8'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='amx-tile'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx-vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-bf16'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-fp16'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bitalg'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512ifma'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='bus-lock-detect'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fbsdp-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrc'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrs'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fzrm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='gfni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='hle'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='ibrs-all'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='la57'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='mcdt-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pbrsb-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='prefetchiti'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='psdp-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='rtm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='sbdr-ssdp-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='serialize'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='taa-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='tsx-ldtrk'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vaes'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xfd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xsaves'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='GraniteRapids-v2'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='amx-bf16'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='amx-fp16'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='amx-int8'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='amx-tile'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx-vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx10'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx10-128'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx10-256'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx10-512'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-bf16'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-fp16'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bitalg'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512ifma'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='bus-lock-detect'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='cldemote'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fbsdp-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrc'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrs'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fzrm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='gfni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='hle'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='ibrs-all'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='la57'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='mcdt-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='movdir64b'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='movdiri'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pbrsb-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='prefetchiti'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='psdp-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='rtm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='sbdr-ssdp-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='serialize'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='ss'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='taa-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='tsx-ldtrk'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vaes'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xfd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xsaves'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Haswell'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='hle'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='rtm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Haswell-IBRS'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='hle'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='rtm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Haswell-noTSX'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Haswell-v1'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='hle'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='rtm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Haswell-v2'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Haswell-v3'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='hle'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='rtm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Haswell-v4'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Icelake-Server'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bitalg'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='gfni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='hle'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='la57'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='rtm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vaes'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Icelake-Server-noTSX'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bitalg'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='gfni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='la57'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vaes'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Icelake-Server-v1'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bitalg'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='gfni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='hle'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='la57'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='rtm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vaes'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Icelake-Server-v2'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bitalg'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='gfni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='la57'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vaes'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Icelake-Server-v3'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bitalg'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='gfni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='ibrs-all'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='la57'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='taa-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vaes'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Icelake-Server-v4'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bitalg'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512ifma'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='gfni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='ibrs-all'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='la57'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='taa-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vaes'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Icelake-Server-v5'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bitalg'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512ifma'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='gfni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='ibrs-all'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='la57'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='taa-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vaes'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xsaves'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Icelake-Server-v6'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bitalg'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512ifma'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='gfni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='ibrs-all'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='la57'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='taa-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vaes'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xsaves'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Icelake-Server-v7'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bitalg'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512ifma'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='gfni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='hle'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='ibrs-all'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='la57'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='rtm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='taa-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vaes'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xsaves'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='IvyBridge'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='IvyBridge-IBRS'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='IvyBridge-v1'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='IvyBridge-v2'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='KnightsMill'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-4fmaps'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-4vnniw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512er'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512pf'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='ss'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='KnightsMill-v1'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-4fmaps'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-4vnniw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512er'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512pf'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='ss'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Opteron_G4'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fma4'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xop'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Opteron_G4-v1'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fma4'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xop'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Opteron_G5'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fma4'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='tbm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xop'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Opteron_G5-v1'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fma4'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='tbm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xop'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='SapphireRapids'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='amx-bf16'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='amx-int8'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='amx-tile'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx-vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-bf16'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-fp16'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bitalg'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512ifma'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='bus-lock-detect'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrc'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrs'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fzrm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='gfni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='hle'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='ibrs-all'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='la57'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='rtm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='serialize'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='taa-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='tsx-ldtrk'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vaes'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xfd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xsaves'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='SapphireRapids-v1'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='amx-bf16'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='amx-int8'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='amx-tile'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx-vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-bf16'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-fp16'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bitalg'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512ifma'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='bus-lock-detect'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrc'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrs'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fzrm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='gfni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='hle'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='ibrs-all'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='la57'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='rtm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='serialize'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='taa-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='tsx-ldtrk'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vaes'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xfd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xsaves'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='SapphireRapids-v2'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='amx-bf16'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='amx-int8'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='amx-tile'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx-vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-bf16'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-fp16'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bitalg'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512ifma'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='bus-lock-detect'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fbsdp-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrc'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrs'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fzrm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='gfni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='hle'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='ibrs-all'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='la57'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='psdp-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='rtm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='sbdr-ssdp-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='serialize'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='taa-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='tsx-ldtrk'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vaes'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xfd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xsaves'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='SapphireRapids-v3'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='amx-bf16'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='amx-int8'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='amx-tile'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx-vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-bf16'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-fp16'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bitalg'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512ifma'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='bus-lock-detect'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='cldemote'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fbsdp-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrc'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrs'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fzrm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='gfni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='hle'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='ibrs-all'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='la57'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='movdir64b'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='movdiri'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='psdp-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='rtm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='sbdr-ssdp-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='serialize'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='ss'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='taa-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='tsx-ldtrk'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vaes'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xfd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xsaves'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='SierraForest'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx-ifma'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx-ne-convert'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx-vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx-vnni-int8'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='bus-lock-detect'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='cmpccxadd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fbsdp-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrs'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='gfni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='ibrs-all'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='mcdt-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pbrsb-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='psdp-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='sbdr-ssdp-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='serialize'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vaes'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xsaves'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='SierraForest-v1'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx-ifma'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx-ne-convert'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx-vnni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx-vnni-int8'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='bus-lock-detect'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='cmpccxadd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fbsdp-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='fsrs'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='gfni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='ibrs-all'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='mcdt-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pbrsb-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='psdp-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='sbdr-ssdp-no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='serialize'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vaes'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xsaves'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Skylake-Client'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='hle'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='rtm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Skylake-Client-IBRS'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='hle'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='rtm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Skylake-Client-v1'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='hle'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='rtm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Skylake-Client-v2'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='hle'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='rtm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Skylake-Client-v3'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Skylake-Client-v4'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xsaves'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Skylake-Server'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='hle'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='rtm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Skylake-Server-IBRS'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='hle'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='rtm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Skylake-Server-v1'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='hle'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='rtm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Skylake-Server-v2'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='hle'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='rtm'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Skylake-Server-v3'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Skylake-Server-v4'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Skylake-Server-v5'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512bw'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512cd'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512dq'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512f'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='avx512vl'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='invpcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pcid'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='pku'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xsaves'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Snowridge'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='cldemote'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='core-capability'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='gfni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='movdir64b'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='movdiri'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='mpx'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='split-lock-detect'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Snowridge-v1'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='cldemote'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='core-capability'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='gfni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='movdir64b'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='movdiri'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='mpx'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='split-lock-detect'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Snowridge-v2'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='cldemote'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='core-capability'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='gfni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='movdir64b'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='movdiri'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='split-lock-detect'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Snowridge-v3'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='cldemote'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='core-capability'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='gfni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='movdir64b'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='movdiri'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='split-lock-detect'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xsaves'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='Snowridge-v4'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='cldemote'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='erms'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='gfni'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='movdir64b'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='movdiri'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='xsaves'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='athlon'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='3dnow'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='3dnowext'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='athlon-v1'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='3dnow'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='3dnowext'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='core2duo'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='ss'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='core2duo-v1'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='ss'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='coreduo'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='ss'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='coreduo-v1'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='ss'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='n270'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='ss'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='n270-v1'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='ss'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='phenom'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='3dnow'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='3dnowext'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <blockers model='phenom-v1'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='3dnow'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <feature name='3dnowext'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </blockers>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     </mode>
Oct 11 04:46:07 compute-0 nova_compute[258482]:   </cpu>
Oct 11 04:46:07 compute-0 nova_compute[258482]:   <memoryBacking supported='yes'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <enum name='sourceType'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <value>file</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <value>anonymous</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <value>memfd</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     </enum>
Oct 11 04:46:07 compute-0 nova_compute[258482]:   </memoryBacking>
Oct 11 04:46:07 compute-0 nova_compute[258482]:   <devices>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <disk supported='yes'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <enum name='diskDevice'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>disk</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>cdrom</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>floppy</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>lun</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </enum>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <enum name='bus'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>ide</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>fdc</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>scsi</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>virtio</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>usb</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>sata</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </enum>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <enum name='model'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>virtio</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>virtio-transitional</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>virtio-non-transitional</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </enum>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     </disk>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <graphics supported='yes'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <enum name='type'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>vnc</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>egl-headless</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>dbus</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </enum>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     </graphics>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <video supported='yes'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <enum name='modelType'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>vga</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>cirrus</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>virtio</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>none</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>bochs</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>ramfb</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </enum>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     </video>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <hostdev supported='yes'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <enum name='mode'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>subsystem</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </enum>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <enum name='startupPolicy'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>default</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>mandatory</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>requisite</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>optional</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </enum>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <enum name='subsysType'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>usb</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>pci</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>scsi</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </enum>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <enum name='capsType'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <enum name='pciBackend'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     </hostdev>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <rng supported='yes'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <enum name='model'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>virtio</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>virtio-transitional</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>virtio-non-transitional</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </enum>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <enum name='backendModel'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>random</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>egd</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>builtin</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </enum>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     </rng>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <filesystem supported='yes'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <enum name='driverType'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>path</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>handle</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>virtiofs</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </enum>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     </filesystem>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <tpm supported='yes'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <enum name='model'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>tpm-tis</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>tpm-crb</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </enum>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <enum name='backendModel'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>emulator</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>external</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </enum>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <enum name='backendVersion'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>2.0</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </enum>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     </tpm>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <redirdev supported='yes'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <enum name='bus'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>usb</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </enum>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     </redirdev>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <channel supported='yes'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <enum name='type'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>pty</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>unix</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </enum>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     </channel>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <crypto supported='yes'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <enum name='model'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <enum name='type'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>qemu</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </enum>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <enum name='backendModel'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>builtin</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </enum>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     </crypto>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <interface supported='yes'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <enum name='backendType'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>default</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>passt</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </enum>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     </interface>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <panic supported='yes'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <enum name='model'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>isa</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>hyperv</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </enum>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     </panic>
Oct 11 04:46:07 compute-0 nova_compute[258482]:   </devices>
Oct 11 04:46:07 compute-0 nova_compute[258482]:   <features>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <gic supported='no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <vmcoreinfo supported='yes'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <genid supported='yes'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <backingStoreInput supported='yes'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <backup supported='yes'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <async-teardown supported='yes'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <ps2 supported='yes'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <sev supported='no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <sgx supported='no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <hyperv supported='yes'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       <enum name='features'>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>relaxed</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>vapic</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>spinlocks</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>vpindex</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>runtime</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>synic</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>stimer</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>reset</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>vendor_id</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>frequencies</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>reenlightenment</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>tlbflush</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>ipi</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>avic</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>emsr_bitmap</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:         <value>xmm_input</value>
Oct 11 04:46:07 compute-0 nova_compute[258482]:       </enum>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     </hyperv>
Oct 11 04:46:07 compute-0 nova_compute[258482]:     <launchSecurity supported='no'/>
Oct 11 04:46:07 compute-0 nova_compute[258482]:   </features>
Oct 11 04:46:07 compute-0 nova_compute[258482]: </domainCapabilities>
Oct 11 04:46:07 compute-0 nova_compute[258482]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
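The XML dump logged above is the libvirt domainCapabilities document that nova-compute's _get_domain_capabilities() emits at DEBUG level. As a hedged illustration only (not Nova's actual code), the same document can be fetched and the per-model usability inspected with the libvirt-python bindings; the connection URI and the choice of xml.etree for parsing are assumptions:

    import libvirt                        # libvirt-python bindings
    import xml.etree.ElementTree as ET

    # Read-only connection to the local QEMU/KVM driver (URI is an assumption).
    conn = libvirt.openReadOnly('qemu:///system')

    # Fetch the domain capabilities XML, equivalent to `virsh domcapabilities`.
    caps_xml = conn.getDomainCapabilities(None, 'x86_64', None, 'kvm', 0)

    root = ET.fromstring(caps_xml)
    for model in root.findall("./cpu/mode[@name='custom']/model"):
        # Entries with usable='no' are followed by a <blockers> element listing
        # the missing host features, as seen above (e.g. Skylake-Server blocked
        # on the avx512* flags).
        print(model.get('usable'), model.text)

    conn.close()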
Oct 11 04:46:07 compute-0 nova_compute[258482]: 2025-10-11 04:46:07.197 2 DEBUG oslo_concurrency.lockutils [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:46:07 compute-0 nova_compute[258482]: 2025-10-11 04:46:07.202 2 DEBUG oslo_concurrency.lockutils [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:46:07 compute-0 nova_compute[258482]: 2025-10-11 04:46:07.202 2 DEBUG oslo_concurrency.lockutils [None req-c0f3b9d7-cff8-4d4d-b6cb-89a62412e59d - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:46:07 compute-0 virtqemud[259122]: libvirt version: 10.10.0, package: 15.el9 (builder@centos.org, 2025-08-18-13:22:20, )
Oct 11 04:46:07 compute-0 virtqemud[259122]: hostname: compute-0
Oct 11 04:46:07 compute-0 virtqemud[259122]: End of file while reading data: Input/output error
Oct 11 04:46:07 compute-0 systemd[1]: libpod-6aee11c0fb7a0814bfc733ee23f52cb5b809b26bfefaaad4d38eb786adf49195.scope: Deactivated successfully.
Oct 11 04:46:07 compute-0 systemd[1]: libpod-6aee11c0fb7a0814bfc733ee23f52cb5b809b26bfefaaad4d38eb786adf49195.scope: Consumed 2.920s CPU time.
Oct 11 04:46:07 compute-0 podman[259339]: 2025-10-11 04:46:07.570899092 +0000 UTC m=+0.466514169 container died 6aee11c0fb7a0814bfc733ee23f52cb5b809b26bfefaaad4d38eb786adf49195 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 11 04:46:07 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6aee11c0fb7a0814bfc733ee23f52cb5b809b26bfefaaad4d38eb786adf49195-userdata-shm.mount: Deactivated successfully.
Oct 11 04:46:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-48ddaeaf7fe4ec8e47b61b47a0a38d9fe24032f087eba5abd94e2e059a6569d4-merged.mount: Deactivated successfully.
Oct 11 04:46:07 compute-0 ceph-mon[74243]: pgmap v710: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:46:08 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v711: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:46:08 compute-0 podman[259339]: 2025-10-11 04:46:08.716952337 +0000 UTC m=+1.612567444 container cleanup 6aee11c0fb7a0814bfc733ee23f52cb5b809b26bfefaaad4d38eb786adf49195 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, io.buildah.version=1.41.3, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 04:46:08 compute-0 podman[259339]: nova_compute
Oct 11 04:46:08 compute-0 ceph-mon[74243]: pgmap v711: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:46:08 compute-0 podman[259371]: nova_compute
Oct 11 04:46:08 compute-0 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Oct 11 04:46:08 compute-0 systemd[1]: Stopped nova_compute container.
Oct 11 04:46:08 compute-0 systemd[1]: Starting nova_compute container...
Oct 11 04:46:08 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:46:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48ddaeaf7fe4ec8e47b61b47a0a38d9fe24032f087eba5abd94e2e059a6569d4/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Oct 11 04:46:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48ddaeaf7fe4ec8e47b61b47a0a38d9fe24032f087eba5abd94e2e059a6569d4/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 11 04:46:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48ddaeaf7fe4ec8e47b61b47a0a38d9fe24032f087eba5abd94e2e059a6569d4/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct 11 04:46:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48ddaeaf7fe4ec8e47b61b47a0a38d9fe24032f087eba5abd94e2e059a6569d4/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Oct 11 04:46:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48ddaeaf7fe4ec8e47b61b47a0a38d9fe24032f087eba5abd94e2e059a6569d4/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 11 04:46:08 compute-0 podman[259384]: 2025-10-11 04:46:08.957630874 +0000 UTC m=+0.112094731 container init 6aee11c0fb7a0814bfc733ee23f52cb5b809b26bfefaaad4d38eb786adf49195 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, container_name=nova_compute)
Oct 11 04:46:08 compute-0 podman[259384]: 2025-10-11 04:46:08.977070267 +0000 UTC m=+0.131534084 container start 6aee11c0fb7a0814bfc733ee23f52cb5b809b26bfefaaad4d38eb786adf49195 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, container_name=nova_compute, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 11 04:46:08 compute-0 podman[259384]: nova_compute
Oct 11 04:46:08 compute-0 nova_compute[259400]: + sudo -E kolla_set_configs
Oct 11 04:46:08 compute-0 systemd[1]: Started nova_compute container.
Oct 11 04:46:09 compute-0 sudo[259332]: pam_unix(sudo:session): session closed for user root
Oct 11 04:46:09 compute-0 nova_compute[259400]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 11 04:46:09 compute-0 nova_compute[259400]: INFO:__main__:Validating config file
Oct 11 04:46:09 compute-0 nova_compute[259400]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 11 04:46:09 compute-0 nova_compute[259400]: INFO:__main__:Copying service configuration files
Oct 11 04:46:09 compute-0 nova_compute[259400]: INFO:__main__:Deleting /etc/nova/nova.conf
Oct 11 04:46:09 compute-0 nova_compute[259400]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Oct 11 04:46:09 compute-0 nova_compute[259400]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Oct 11 04:46:09 compute-0 nova_compute[259400]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Oct 11 04:46:09 compute-0 nova_compute[259400]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Oct 11 04:46:09 compute-0 nova_compute[259400]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Oct 11 04:46:09 compute-0 nova_compute[259400]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 11 04:46:09 compute-0 nova_compute[259400]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 11 04:46:09 compute-0 nova_compute[259400]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 11 04:46:09 compute-0 nova_compute[259400]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 11 04:46:09 compute-0 nova_compute[259400]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 11 04:46:09 compute-0 nova_compute[259400]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 11 04:46:09 compute-0 nova_compute[259400]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Oct 11 04:46:09 compute-0 nova_compute[259400]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Oct 11 04:46:09 compute-0 nova_compute[259400]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Oct 11 04:46:09 compute-0 nova_compute[259400]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 11 04:46:09 compute-0 nova_compute[259400]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 11 04:46:09 compute-0 nova_compute[259400]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 11 04:46:09 compute-0 nova_compute[259400]: INFO:__main__:Deleting /etc/ceph
Oct 11 04:46:09 compute-0 nova_compute[259400]: INFO:__main__:Creating directory /etc/ceph
Oct 11 04:46:09 compute-0 nova_compute[259400]: INFO:__main__:Setting permission for /etc/ceph
Oct 11 04:46:09 compute-0 nova_compute[259400]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Oct 11 04:46:09 compute-0 nova_compute[259400]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct 11 04:46:09 compute-0 nova_compute[259400]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Oct 11 04:46:09 compute-0 nova_compute[259400]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct 11 04:46:09 compute-0 nova_compute[259400]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Oct 11 04:46:09 compute-0 nova_compute[259400]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Oct 11 04:46:09 compute-0 nova_compute[259400]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 11 04:46:09 compute-0 nova_compute[259400]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Oct 11 04:46:09 compute-0 nova_compute[259400]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Oct 11 04:46:09 compute-0 nova_compute[259400]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 11 04:46:09 compute-0 nova_compute[259400]: INFO:__main__:Writing out command to execute
Oct 11 04:46:09 compute-0 nova_compute[259400]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct 11 04:46:09 compute-0 nova_compute[259400]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct 11 04:46:09 compute-0 nova_compute[259400]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Oct 11 04:46:09 compute-0 nova_compute[259400]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 11 04:46:09 compute-0 nova_compute[259400]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 11 04:46:09 compute-0 nova_compute[259400]: ++ cat /run_command
Oct 11 04:46:09 compute-0 nova_compute[259400]: + CMD=nova-compute
Oct 11 04:46:09 compute-0 nova_compute[259400]: + ARGS=
Oct 11 04:46:09 compute-0 nova_compute[259400]: + sudo kolla_copy_cacerts
Oct 11 04:46:09 compute-0 nova_compute[259400]: + [[ ! -n '' ]]
Oct 11 04:46:09 compute-0 nova_compute[259400]: + . kolla_extend_start
Oct 11 04:46:09 compute-0 nova_compute[259400]: + echo 'Running command: '\''nova-compute'\'''
Oct 11 04:46:09 compute-0 nova_compute[259400]: Running command: 'nova-compute'
Oct 11 04:46:09 compute-0 nova_compute[259400]: + umask 0022
Oct 11 04:46:09 compute-0 nova_compute[259400]: + exec nova-compute
Oct 11 04:46:09 compute-0 sudo[259561]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmskfndwzrhrpkajyqqpnlsisakzfkhj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760157969.285794-1858-199632947113411/AnsiballZ_podman_container.py'
Oct 11 04:46:09 compute-0 sudo[259561]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:46:09 compute-0 python3.9[259563]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct 11 04:46:10 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:46:10 compute-0 systemd[1]: Started libpod-conmon-bcdbe5eae2ca7782143f66ad3decc6d019b5a90e321aa46ed7cdf097adac1474.scope.
Oct 11 04:46:10 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:46:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/542555a015e6ac93ca7ea33d6bfcbd0e70e9270e760f0b1973d71ce481ef7658/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Oct 11 04:46:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/542555a015e6ac93ca7ea33d6bfcbd0e70e9270e760f0b1973d71ce481ef7658/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Oct 11 04:46:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/542555a015e6ac93ca7ea33d6bfcbd0e70e9270e760f0b1973d71ce481ef7658/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct 11 04:46:10 compute-0 podman[259589]: 2025-10-11 04:46:10.208656728 +0000 UTC m=+0.157760857 container init bcdbe5eae2ca7782143f66ad3decc6d019b5a90e321aa46ed7cdf097adac1474 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.vendor=CentOS, container_name=nova_compute_init, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:46:10 compute-0 podman[259589]: 2025-10-11 04:46:10.217316248 +0000 UTC m=+0.166420367 container start bcdbe5eae2ca7782143f66ad3decc6d019b5a90e321aa46ed7cdf097adac1474 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=edpm, container_name=nova_compute_init, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 04:46:10 compute-0 python3.9[259563]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Oct 11 04:46:10 compute-0 nova_compute_init[259609]: INFO:nova_statedir:Applying nova statedir ownership
Oct 11 04:46:10 compute-0 nova_compute_init[259609]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Oct 11 04:46:10 compute-0 nova_compute_init[259609]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Oct 11 04:46:10 compute-0 nova_compute_init[259609]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Oct 11 04:46:10 compute-0 nova_compute_init[259609]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Oct 11 04:46:10 compute-0 nova_compute_init[259609]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Oct 11 04:46:10 compute-0 nova_compute_init[259609]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Oct 11 04:46:10 compute-0 nova_compute_init[259609]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Oct 11 04:46:10 compute-0 nova_compute_init[259609]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Oct 11 04:46:10 compute-0 nova_compute_init[259609]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Oct 11 04:46:10 compute-0 nova_compute_init[259609]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Oct 11 04:46:10 compute-0 nova_compute_init[259609]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Oct 11 04:46:10 compute-0 nova_compute_init[259609]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Oct 11 04:46:10 compute-0 nova_compute_init[259609]: INFO:nova_statedir:Nova statedir ownership complete
Oct 11 04:46:10 compute-0 systemd[1]: libpod-bcdbe5eae2ca7782143f66ad3decc6d019b5a90e321aa46ed7cdf097adac1474.scope: Deactivated successfully.
Oct 11 04:46:10 compute-0 podman[259610]: 2025-10-11 04:46:10.291251801 +0000 UTC m=+0.039489362 container died bcdbe5eae2ca7782143f66ad3decc6d019b5a90e321aa46ed7cdf097adac1474 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=nova_compute_init, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 11 04:46:10 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bcdbe5eae2ca7782143f66ad3decc6d019b5a90e321aa46ed7cdf097adac1474-userdata-shm.mount: Deactivated successfully.
Oct 11 04:46:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-542555a015e6ac93ca7ea33d6bfcbd0e70e9270e760f0b1973d71ce481ef7658-merged.mount: Deactivated successfully.
Oct 11 04:46:10 compute-0 podman[259620]: 2025-10-11 04:46:10.363169593 +0000 UTC m=+0.065015188 container cleanup bcdbe5eae2ca7782143f66ad3decc6d019b5a90e321aa46ed7cdf097adac1474 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=edpm, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 11 04:46:10 compute-0 systemd[1]: libpod-conmon-bcdbe5eae2ca7782143f66ad3decc6d019b5a90e321aa46ed7cdf097adac1474.scope: Deactivated successfully.
Oct 11 04:46:10 compute-0 sudo[259561]: pam_unix(sudo:session): session closed for user root
Oct 11 04:46:10 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v712: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:46:10 compute-0 sshd-session[222194]: Connection closed by 192.168.122.30 port 41074
Oct 11 04:46:10 compute-0 sshd-session[222191]: pam_unix(sshd:session): session closed for user zuul
Oct 11 04:46:10 compute-0 systemd[1]: session-51.scope: Deactivated successfully.
Oct 11 04:46:10 compute-0 systemd[1]: session-51.scope: Consumed 3min 2.500s CPU time.
Oct 11 04:46:10 compute-0 systemd-logind[801]: Session 51 logged out. Waiting for processes to exit.
Oct 11 04:46:10 compute-0 systemd-logind[801]: Removed session 51.
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.008 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct 11 04:46:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:46:11.009 161813 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.009 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct 11 04:46:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:46:11.009 161813 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.010 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct 11 04:46:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:46:11.010 161813 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.010 2 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Oct 11 04:46:11 compute-0 podman[259676]: 2025-10-11 04:46:11.094029768 +0000 UTC m=+0.106015777 container health_status 4f5a84ad32cb899a70b226fbd9c8f419e9440a5ac99b188805a8bf8e9253a2d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, io.buildah.version=1.41.3, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct 11 04:46:11 compute-0 podman[259675]: 2025-10-11 04:46:11.135400516 +0000 UTC m=+0.142745637 container health_status 086abfbe8dbe9e4742c5d0e8540a7653c1c0a5c00ecda3b6e948040a718938cb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.171 2 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.186 2 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 0 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:46:11 compute-0 sudo[259723]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:46:11 compute-0 sudo[259723]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:46:11 compute-0 sudo[259723]: pam_unix(sudo:session): session closed for user root
Oct 11 04:46:11 compute-0 sudo[259748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:46:11 compute-0 sudo[259748]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:46:11 compute-0 sudo[259748]: pam_unix(sudo:session): session closed for user root
Oct 11 04:46:11 compute-0 ceph-mon[74243]: pgmap v712: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:46:11 compute-0 podman[259772]: 2025-10-11 04:46:11.619215383 +0000 UTC m=+0.059918819 container health_status 981a12d55b51d26e636fa4bc8b08d383168dc6afe664f12473554b2b0c589967 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, io.buildah.version=1.41.3)
Oct 11 04:46:11 compute-0 sudo[259779]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:46:11 compute-0 sudo[259779]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:46:11 compute-0 sudo[259779]: pam_unix(sudo:session): session closed for user root
Oct 11 04:46:11 compute-0 sudo[259819]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 04:46:11 compute-0 sudo[259819]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.765 2 INFO nova.virt.driver [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.888 2 INFO nova.compute.provider_config [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.909 2 DEBUG oslo_concurrency.lockutils [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.909 2 DEBUG oslo_concurrency.lockutils [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.910 2 DEBUG oslo_concurrency.lockutils [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.910 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.910 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.911 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.911 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.911 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.911 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.912 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.912 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.912 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.912 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.912 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.912 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.913 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.913 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.913 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.913 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.913 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.914 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.914 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.914 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.914 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.914 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.915 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.915 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.915 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.915 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.915 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.915 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.916 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.916 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.916 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.916 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.916 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.916 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.917 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.917 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.917 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.917 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.917 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.917 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.918 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.918 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.918 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.918 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.918 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.918 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.919 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.919 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.919 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.919 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.919 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.919 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.919 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.920 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.920 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.920 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.920 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.920 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.920 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.920 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.920 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.921 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.921 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.921 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.921 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.921 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.921 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.921 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.922 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.922 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.922 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.922 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.922 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.922 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.922 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.923 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.923 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.923 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.923 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.923 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.923 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.923 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.924 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.924 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.924 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.924 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.924 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.924 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.925 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.925 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.925 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.925 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.925 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.925 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.925 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.926 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.926 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.926 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.926 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.926 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.926 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.926 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.927 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.927 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.927 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.927 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.927 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.927 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.928 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.928 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.928 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.928 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.928 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.928 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.929 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.929 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.929 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.929 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.929 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.930 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.930 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.930 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.930 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.930 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.930 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.930 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.931 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.931 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.931 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.931 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.931 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.931 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.931 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.932 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.932 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.932 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.932 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.932 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.932 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.932 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.933 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.933 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.933 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.933 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.933 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.933 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.934 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.934 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.934 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.934 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.934 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.934 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.935 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.935 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.935 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.935 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.935 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.935 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.936 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.936 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.936 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.936 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.936 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.936 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.937 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.937 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.937 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.937 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.937 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.938 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.938 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.938 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.938 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.938 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.938 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.939 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.939 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.939 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.939 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.939 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.939 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.940 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.940 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.940 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.940 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.940 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.940 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.940 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.941 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.941 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.941 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.941 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.941 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.941 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.941 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.942 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.942 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.942 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.942 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.942 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.942 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.942 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.943 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.943 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.943 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.943 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.943 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.943 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.943 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.943 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.944 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.944 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.944 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.944 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.944 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.944 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.944 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.945 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.945 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.945 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.945 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.945 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.945 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.945 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.946 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.946 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.946 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.946 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.946 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.946 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.947 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.947 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.947 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.947 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.947 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.947 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.947 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.948 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.948 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.948 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.948 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.948 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.949 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.949 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.949 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.949 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.949 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.949 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.950 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.950 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.950 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.950 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.950 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.950 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.951 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.951 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.951 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.951 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.951 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.951 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.951 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.952 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.952 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.952 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.952 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.952 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.953 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.953 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.953 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.953 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.953 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.953 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.953 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.953 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.954 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.954 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.954 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.954 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.954 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.954 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.955 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.955 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.955 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.955 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.955 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.955 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.955 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.955 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.956 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.956 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.956 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.956 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.956 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.956 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.957 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.957 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.957 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.957 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.957 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.957 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.958 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.958 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.958 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.958 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.958 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.958 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.959 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.959 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.959 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.959 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.959 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.959 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.959 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.960 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.960 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.960 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.960 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.960 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.960 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.960 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.961 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.961 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.961 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.961 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.961 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.961 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.961 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.962 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.962 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.962 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.962 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.962 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.962 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.962 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.963 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.963 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.963 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.963 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.963 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.963 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.963 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.964 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.964 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.964 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.964 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.964 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.965 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.965 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.965 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.965 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.965 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.965 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.965 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.966 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.966 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.966 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.966 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.966 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.967 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.967 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.967 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.967 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.967 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.967 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.967 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.968 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.968 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.968 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.968 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.968 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.968 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.968 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.969 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.969 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.969 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.969 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.969 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.969 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.969 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.969 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.970 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.970 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.970 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.970 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.970 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.970 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.970 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.971 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.971 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.971 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.971 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.971 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.971 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.971 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.972 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.972 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.972 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.972 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.972 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.972 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.972 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.973 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.973 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.973 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.973 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.973 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.973 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.973 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.974 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.974 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.974 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.974 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.974 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.974 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.974 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.975 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.975 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.975 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.975 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.975 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.975 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.975 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.976 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.976 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.976 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.976 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.976 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.976 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.976 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.977 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.977 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.977 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.977 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.977 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.977 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.977 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.978 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.978 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.978 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.978 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.978 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.978 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.978 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.978 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.979 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.979 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.979 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.979 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.979 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.979 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.979 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.980 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.980 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.980 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.980 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.980 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.980 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.980 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.981 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.981 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.981 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.981 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.981 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.981 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.982 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.982 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.982 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.982 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.982 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.982 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.982 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.983 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.983 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.983 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.983 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.983 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.983 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.983 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.984 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.984 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.984 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.984 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.984 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.984 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.984 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.985 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.985 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.985 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.985 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.985 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.985 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.985 2 WARNING oslo_config.cfg [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Oct 11 04:46:11 compute-0 nova_compute[259400]: live_migration_uri is deprecated for removal in favor of two other options that
Oct 11 04:46:11 compute-0 nova_compute[259400]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Oct 11 04:46:11 compute-0 nova_compute[259400]: and ``live_migration_inbound_addr`` respectively.
Oct 11 04:46:11 compute-0 nova_compute[259400]: ).  Its value may be silently ignored in the future.
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.986 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.986 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.986 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.986 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.986 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.987 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.987 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.987 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.987 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.987 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.987 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.987 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.988 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.988 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.988 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.988 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.988 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.988 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.988 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] libvirt.rbd_secret_uuid        = 166d0489-2ae7-59eb-961c-c1b5cda4b45a log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.989 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.989 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.989 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.989 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.989 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.989 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.989 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.989 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.990 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.990 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.990 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.990 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.990 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.990 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.991 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.991 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.991 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.991 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.991 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.991 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.991 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.992 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.992 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.992 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.992 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.992 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.992 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.992 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.993 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.993 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.993 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.993 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.993 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.993 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.994 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.994 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.994 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.994 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.994 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.994 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.995 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.995 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.995 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.995 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.995 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.996 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.996 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.996 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.996 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.996 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.996 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.997 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.997 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.997 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.997 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.997 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.997 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.997 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.997 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.998 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.998 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.998 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.998 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.998 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.998 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.999 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.999 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.999 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.999 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:11 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.999 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:11.999 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.000 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.000 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.000 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.000 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.000 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.000 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.001 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.001 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.001 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.001 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.001 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.001 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.002 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.002 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.002 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.002 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.002 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.002 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.002 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.003 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.003 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.003 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.003 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.003 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.003 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.003 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.004 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.004 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.004 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.004 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.004 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.004 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.004 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.005 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.005 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.005 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.005 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.005 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.005 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.005 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.006 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.006 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.006 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.006 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.006 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.006 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.007 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.007 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.007 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.007 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.007 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.007 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.008 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.008 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.008 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.008 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.008 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.008 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.008 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.009 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.009 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.009 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.009 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.009 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.009 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.009 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.010 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.010 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.010 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.010 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.010 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.010 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.010 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.011 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.011 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.011 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.011 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.011 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.011 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.011 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.012 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.012 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.012 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.012 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.012 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.012 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.012 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.013 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.013 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.013 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.013 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.013 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.013 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.013 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.014 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.014 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.014 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.014 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.014 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.014 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.015 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.015 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.015 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.015 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.015 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.015 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.015 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.016 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.016 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.016 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.016 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.016 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.016 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.016 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.017 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.017 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.017 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.017 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.017 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.017 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.017 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.018 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.018 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.018 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.018 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.018 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.019 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.019 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.019 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.019 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.019 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.019 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.020 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.020 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.020 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.020 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.020 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.020 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.020 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.021 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.021 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.021 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.021 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.021 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.021 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.021 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.022 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.022 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.022 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.022 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.022 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.022 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.022 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.022 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.023 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.023 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.023 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.023 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.023 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.023 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.024 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.024 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.024 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.024 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.024 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.024 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.024 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.025 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.025 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.025 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.025 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.025 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.025 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.025 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.025 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.026 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.026 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.026 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.026 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.026 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.026 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.026 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.027 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.027 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.027 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.027 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.027 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.027 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.027 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.028 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.028 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.028 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.028 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.028 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.028 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.028 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.028 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.029 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.029 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.029 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.029 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.029 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.029 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.029 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.030 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.030 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.030 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.030 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.030 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.030 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.030 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.031 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.031 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.031 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.031 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.031 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.031 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.031 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.032 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.032 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.032 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.032 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.032 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.032 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.032 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.032 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.033 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.033 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.033 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.033 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.033 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.033 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.033 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.034 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.034 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.034 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.034 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.034 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.034 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.034 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.034 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.035 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.035 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.035 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.035 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.035 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.035 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.035 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.036 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.036 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.036 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.036 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.036 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.036 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.036 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.037 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.037 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.037 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.037 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.037 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.037 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.037 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.037 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.038 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.038 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.038 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.038 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.038 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.038 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.038 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.039 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.039 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.039 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.039 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.039 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.039 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.039 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.039 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.040 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.040 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.040 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.040 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.040 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.040 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.040 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.040 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.041 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.041 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.041 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.041 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.041 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.041 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.041 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.042 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.042 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.042 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.042 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.042 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.042 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.042 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.043 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.043 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.043 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.043 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.043 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.043 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.043 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.044 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.044 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.044 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.044 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.044 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.044 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.044 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.044 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.045 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.045 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.045 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.045 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.045 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.045 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.045 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.046 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.046 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.046 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.046 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.046 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.046 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.046 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.046 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.047 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.047 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.047 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.047 2 DEBUG oslo_service.service [None req-dccca043-4005-4170-b983-a197eb8d1dc3 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
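The block above is the oslo.config option dump that nova-compute emits at DEBUG level during startup: each entry is produced by ConfigOpts.log_opt_values() (the cfg.py:2609 reference on every line), and options registered with secret=True, such as vmware.host_password or the notifications transport_url, are rendered as ****. A minimal sketch of how such a dump is generated follows; the standalone ConfigOpts instance and the option names are illustrative assumptions, not Nova's actual startup code.

    import logging

    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    CONF = cfg.ConfigOpts()
    # Hypothetical options for illustration; secret=True is why values such as
    # vmware.host_password appear as **** in the dump above.
    CONF.register_opts([
        cfg.StrOpt('host_username'),
        cfg.StrOpt('host_password', secret=True),
    ], group='vmware')

    CONF([])                                  # parse an empty command line
    CONF.log_opt_values(LOG, logging.DEBUG)   # emits one "option = value" DEBUG line per opt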
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.048 2 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.061 2 DEBUG nova.virt.libvirt.host [None req-a1f2b475-2e76-45d6-8bbb-14ac83f8002f - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.062 2 DEBUG nova.virt.libvirt.host [None req-a1f2b475-2e76-45d6-8bbb-14ac83f8002f - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.062 2 DEBUG nova.virt.libvirt.host [None req-a1f2b475-2e76-45d6-8bbb-14ac83f8002f - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.062 2 DEBUG nova.virt.libvirt.host [None req-a1f2b475-2e76-45d6-8bbb-14ac83f8002f - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.074 2 DEBUG nova.virt.libvirt.host [None req-a1f2b475-2e76-45d6-8bbb-14ac83f8002f - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f5d6345e0a0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.077 2 DEBUG nova.virt.libvirt.host [None req-a1f2b475-2e76-45d6-8bbb-14ac83f8002f - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f5d6345e0a0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.078 2 INFO nova.virt.libvirt.driver [None req-a1f2b475-2e76-45d6-8bbb-14ac83f8002f - - - - - -] Connection event '1' reason 'None'
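The lines above show nova.virt.libvirt.host opening the qemu:///system connection and registering its lifecycle and connection-event callbacks; the <capabilities> XML logged next is what libvirt reports for this host. A minimal sketch that retrieves the same document with libvirt-python is shown below; it assumes the python3-libvirt bindings are installed and a local libvirt daemon is reachable.

    import libvirt

    # Same URI nova-compute connects to in the log lines above.
    conn = libvirt.open('qemu:///system')
    # Returns the host capabilities XML, i.e. the <capabilities> document
    # that nova logs right after connecting.
    print(conn.getCapabilities())
    conn.close()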
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.085 2 INFO nova.virt.libvirt.host [None req-a1f2b475-2e76-45d6-8bbb-14ac83f8002f - - - - - -] Libvirt host capabilities <capabilities>
Oct 11 04:46:12 compute-0 nova_compute[259400]: 
Oct 11 04:46:12 compute-0 nova_compute[259400]:   <host>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <uuid>53cb9e9d-2668-4473-9499-ec86a0f02be2</uuid>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <cpu>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <arch>x86_64</arch>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model>EPYC-Rome-v4</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <vendor>AMD</vendor>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <microcode version='16777317'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <signature family='23' model='49' stepping='0'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <maxphysaddr mode='emulate' bits='40'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature name='x2apic'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature name='tsc-deadline'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature name='osxsave'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature name='hypervisor'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature name='tsc_adjust'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature name='spec-ctrl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature name='stibp'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature name='arch-capabilities'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature name='ssbd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature name='cmp_legacy'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature name='topoext'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature name='virt-ssbd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature name='lbrv'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature name='tsc-scale'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature name='vmcb-clean'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature name='pause-filter'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature name='pfthreshold'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature name='svme-addr-chk'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature name='rdctl-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature name='skip-l1dfl-vmentry'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature name='mds-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature name='pschange-mc-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <pages unit='KiB' size='4'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <pages unit='KiB' size='2048'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <pages unit='KiB' size='1048576'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     </cpu>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <power_management>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <suspend_mem/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     </power_management>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <iommu support='no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <migration_features>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <live/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <uri_transports>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <uri_transport>tcp</uri_transport>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <uri_transport>rdma</uri_transport>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </uri_transports>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     </migration_features>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <topology>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <cells num='1'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <cell id='0'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:           <memory unit='KiB'>7864344</memory>
Oct 11 04:46:12 compute-0 nova_compute[259400]:           <pages unit='KiB' size='4'>1966086</pages>
Oct 11 04:46:12 compute-0 nova_compute[259400]:           <pages unit='KiB' size='2048'>0</pages>
Oct 11 04:46:12 compute-0 nova_compute[259400]:           <pages unit='KiB' size='1048576'>0</pages>
Oct 11 04:46:12 compute-0 nova_compute[259400]:           <distances>
Oct 11 04:46:12 compute-0 nova_compute[259400]:             <sibling id='0' value='10'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:           </distances>
Oct 11 04:46:12 compute-0 nova_compute[259400]:           <cpus num='8'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:           </cpus>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         </cell>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </cells>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     </topology>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <cache>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     </cache>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <secmodel>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model>selinux</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <doi>0</doi>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     </secmodel>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <secmodel>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model>dac</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <doi>0</doi>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <baselabel type='kvm'>+107:+107</baselabel>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <baselabel type='qemu'>+107:+107</baselabel>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     </secmodel>
Oct 11 04:46:12 compute-0 nova_compute[259400]:   </host>
Oct 11 04:46:12 compute-0 nova_compute[259400]: 
Oct 11 04:46:12 compute-0 nova_compute[259400]:   <guest>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <os_type>hvm</os_type>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <arch name='i686'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <wordsize>32</wordsize>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <domain type='qemu'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <domain type='kvm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     </arch>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <features>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <pae/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <nonpae/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <acpi default='on' toggle='yes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <apic default='on' toggle='no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <cpuselection/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <deviceboot/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <disksnapshot default='on' toggle='no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <externalSnapshot/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     </features>
Oct 11 04:46:12 compute-0 nova_compute[259400]:   </guest>
Oct 11 04:46:12 compute-0 nova_compute[259400]: 
Oct 11 04:46:12 compute-0 nova_compute[259400]:   <guest>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <os_type>hvm</os_type>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <arch name='x86_64'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <wordsize>64</wordsize>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <domain type='qemu'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <domain type='kvm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     </arch>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <features>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <acpi default='on' toggle='yes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <apic default='on' toggle='no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <cpuselection/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <deviceboot/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <disksnapshot default='on' toggle='no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <externalSnapshot/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     </features>
Oct 11 04:46:12 compute-0 nova_compute[259400]:   </guest>
Oct 11 04:46:12 compute-0 nova_compute[259400]: 
Oct 11 04:46:12 compute-0 nova_compute[259400]: </capabilities>
Oct 11 04:46:12 compute-0 nova_compute[259400]: 
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.088 2 WARNING nova.virt.libvirt.driver [None req-a1f2b475-2e76-45d6-8bbb-14ac83f8002f - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.088 2 DEBUG nova.virt.libvirt.volume.mount [None req-a1f2b475-2e76-45d6-8bbb-14ac83f8002f - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.090 2 DEBUG nova.virt.libvirt.host [None req-a1f2b475-2e76-45d6-8bbb-14ac83f8002f - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.093 2 DEBUG nova.virt.libvirt.host [None req-a1f2b475-2e76-45d6-8bbb-14ac83f8002f - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Oct 11 04:46:12 compute-0 nova_compute[259400]: <domainCapabilities>
Oct 11 04:46:12 compute-0 nova_compute[259400]:   <path>/usr/libexec/qemu-kvm</path>
Oct 11 04:46:12 compute-0 nova_compute[259400]:   <domain>kvm</domain>
Oct 11 04:46:12 compute-0 nova_compute[259400]:   <machine>pc-i440fx-rhel7.6.0</machine>
Oct 11 04:46:12 compute-0 nova_compute[259400]:   <arch>i686</arch>
Oct 11 04:46:12 compute-0 nova_compute[259400]:   <vcpu max='240'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:   <iothreads supported='yes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:   <os supported='yes'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <enum name='firmware'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <loader supported='yes'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='type'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>rom</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>pflash</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='readonly'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>yes</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>no</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='secure'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>no</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     </loader>
Oct 11 04:46:12 compute-0 nova_compute[259400]:   </os>
Oct 11 04:46:12 compute-0 nova_compute[259400]:   <cpu>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <mode name='host-passthrough' supported='yes'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='hostPassthroughMigratable'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>on</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>off</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     </mode>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <mode name='maximum' supported='yes'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='maximumMigratable'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>on</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>off</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     </mode>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <mode name='host-model' supported='yes'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <vendor>AMD</vendor>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='x2apic'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='tsc-deadline'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='hypervisor'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='tsc_adjust'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='spec-ctrl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='stibp'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='arch-capabilities'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='ssbd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='cmp_legacy'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='overflow-recov'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='succor'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='ibrs'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='amd-ssbd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='virt-ssbd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='lbrv'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='tsc-scale'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='vmcb-clean'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='flushbyasid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='pause-filter'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='pfthreshold'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='svme-addr-chk'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='rdctl-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='mds-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='pschange-mc-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='gds-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='rfds-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='disable' name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     </mode>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <mode name='custom' supported='yes'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Broadwell'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Broadwell-IBRS'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Broadwell-noTSX'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Broadwell-v1'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Broadwell-v2'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Broadwell-v3'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Broadwell-v4'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Cascadelake-Server'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ibrs-all'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Cascadelake-Server-v1'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Cascadelake-Server-v2'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ibrs-all'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Cascadelake-Server-v3'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ibrs-all'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Cascadelake-Server-v4'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ibrs-all'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Cascadelake-Server-v5'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ibrs-all'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Cooperlake'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-bf16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ibrs-all'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='taa-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Cooperlake-v1'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-bf16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ibrs-all'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='taa-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Cooperlake-v2'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-bf16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ibrs-all'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='taa-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Denverton'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='mpx'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Denverton-v1'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='mpx'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Denverton-v2'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Denverton-v3'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Dhyana-v2'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='EPYC-Genoa'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amd-psfd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='auto-ibrs'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-bf16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bitalg'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512ifma'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='la57'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='no-nested-data-bp'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='null-sel-clr-base'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='stibp-always-on'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vaes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='EPYC-Genoa-v1'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amd-psfd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='auto-ibrs'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-bf16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bitalg'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512ifma'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='la57'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='no-nested-data-bp'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='null-sel-clr-base'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='stibp-always-on'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vaes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='EPYC-Milan'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='EPYC-Milan-v1'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='EPYC-Milan-v2'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amd-psfd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='no-nested-data-bp'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='null-sel-clr-base'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='stibp-always-on'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vaes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='EPYC-Rome'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='EPYC-Rome-v1'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='EPYC-Rome-v2'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='EPYC-Rome-v3'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='EPYC-v3'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='EPYC-v4'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='GraniteRapids'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amx-bf16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amx-fp16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amx-int8'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amx-tile'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx-vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-bf16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-fp16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bitalg'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512ifma'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='bus-lock-detect'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fbsdp-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrc'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrs'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fzrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ibrs-all'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='la57'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='mcdt-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pbrsb-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='prefetchiti'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='psdp-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='sbdr-ssdp-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='serialize'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='taa-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='tsx-ldtrk'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vaes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xfd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='GraniteRapids-v1'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amx-bf16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amx-fp16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amx-int8'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amx-tile'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx-vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-bf16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-fp16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bitalg'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512ifma'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='bus-lock-detect'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fbsdp-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrc'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrs'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fzrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ibrs-all'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='la57'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='mcdt-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pbrsb-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='prefetchiti'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='psdp-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='sbdr-ssdp-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='serialize'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='taa-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='tsx-ldtrk'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vaes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xfd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='GraniteRapids-v2'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amx-bf16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amx-fp16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amx-int8'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amx-tile'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx-vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx10'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx10-128'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx10-256'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx10-512'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-bf16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-fp16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bitalg'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512ifma'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='bus-lock-detect'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='cldemote'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fbsdp-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrc'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrs'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fzrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ibrs-all'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='la57'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='mcdt-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='movdir64b'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='movdiri'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pbrsb-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='prefetchiti'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='psdp-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='sbdr-ssdp-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='serialize'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ss'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='taa-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='tsx-ldtrk'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vaes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xfd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Haswell'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Haswell-IBRS'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Haswell-noTSX'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Haswell-v1'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Haswell-v2'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Haswell-v3'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Haswell-v4'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Icelake-Server'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bitalg'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='la57'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vaes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Icelake-Server-noTSX'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bitalg'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='la57'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vaes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Icelake-Server-v1'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bitalg'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='la57'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vaes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Icelake-Server-v2'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bitalg'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='la57'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vaes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Icelake-Server-v3'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bitalg'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ibrs-all'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='la57'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='taa-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vaes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Icelake-Server-v4'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bitalg'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512ifma'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ibrs-all'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='la57'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='taa-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vaes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Icelake-Server-v5'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bitalg'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512ifma'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ibrs-all'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='la57'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='taa-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vaes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Icelake-Server-v6'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bitalg'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512ifma'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ibrs-all'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='la57'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='taa-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vaes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Icelake-Server-v7'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bitalg'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512ifma'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ibrs-all'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='la57'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='taa-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vaes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='IvyBridge'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='IvyBridge-IBRS'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='IvyBridge-v1'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='IvyBridge-v2'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='KnightsMill'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-4fmaps'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-4vnniw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512er'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512pf'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ss'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='KnightsMill-v1'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-4fmaps'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-4vnniw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512er'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512pf'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ss'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Opteron_G4'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fma4'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xop'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Opteron_G4-v1'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fma4'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xop'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Opteron_G5'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fma4'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='tbm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xop'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Opteron_G5-v1'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fma4'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='tbm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xop'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='SapphireRapids'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amx-bf16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amx-int8'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amx-tile'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx-vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-bf16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-fp16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bitalg'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512ifma'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='bus-lock-detect'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrc'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrs'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fzrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ibrs-all'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='la57'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='serialize'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='taa-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='tsx-ldtrk'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vaes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xfd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='SapphireRapids-v1'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amx-bf16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amx-int8'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amx-tile'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx-vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-bf16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-fp16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bitalg'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512ifma'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='bus-lock-detect'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrc'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrs'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fzrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ibrs-all'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='la57'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='serialize'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='taa-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='tsx-ldtrk'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vaes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xfd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='SapphireRapids-v2'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amx-bf16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amx-int8'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amx-tile'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx-vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-bf16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-fp16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bitalg'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512ifma'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='bus-lock-detect'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fbsdp-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrc'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrs'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fzrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ibrs-all'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='la57'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='psdp-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='sbdr-ssdp-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='serialize'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='taa-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='tsx-ldtrk'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vaes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xfd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='SapphireRapids-v3'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amx-bf16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amx-int8'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amx-tile'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx-vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-bf16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-fp16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bitalg'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512ifma'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='bus-lock-detect'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='cldemote'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fbsdp-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrc'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrs'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fzrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ibrs-all'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='la57'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='movdir64b'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='movdiri'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='psdp-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='sbdr-ssdp-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='serialize'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ss'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='taa-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='tsx-ldtrk'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vaes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xfd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='SierraForest'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx-ifma'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx-ne-convert'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx-vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx-vnni-int8'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='bus-lock-detect'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='cmpccxadd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fbsdp-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrs'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ibrs-all'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='mcdt-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pbrsb-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='psdp-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='sbdr-ssdp-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='serialize'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vaes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='SierraForest-v1'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx-ifma'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx-ne-convert'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx-vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx-vnni-int8'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='bus-lock-detect'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='cmpccxadd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fbsdp-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrs'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ibrs-all'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='mcdt-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pbrsb-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='psdp-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='sbdr-ssdp-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='serialize'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vaes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Skylake-Client'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Skylake-Client-IBRS'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Skylake-Client-v1'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Skylake-Client-v2'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Skylake-Client-v3'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Skylake-Client-v4'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Skylake-Server'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Skylake-Server-IBRS'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Skylake-Server-v1'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Skylake-Server-v2'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Skylake-Server-v3'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Skylake-Server-v4'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Skylake-Server-v5'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Snowridge'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='cldemote'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='core-capability'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='movdir64b'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='movdiri'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='mpx'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='split-lock-detect'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Snowridge-v1'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='cldemote'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='core-capability'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='movdir64b'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='movdiri'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='mpx'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='split-lock-detect'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Snowridge-v2'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='cldemote'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='core-capability'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='movdir64b'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='movdiri'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='split-lock-detect'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Snowridge-v3'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='cldemote'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='core-capability'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='movdir64b'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='movdiri'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='split-lock-detect'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Snowridge-v4'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='cldemote'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='movdir64b'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='movdiri'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='athlon'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='3dnow'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='3dnowext'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='athlon-v1'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='3dnow'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='3dnowext'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='core2duo'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ss'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='core2duo-v1'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ss'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='coreduo'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ss'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='coreduo-v1'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ss'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='n270'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ss'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='n270-v1'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ss'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='phenom'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='3dnow'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='3dnowext'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='phenom-v1'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='3dnow'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='3dnowext'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     </mode>
Oct 11 04:46:12 compute-0 nova_compute[259400]:   </cpu>
Oct 11 04:46:12 compute-0 nova_compute[259400]:   <memoryBacking supported='yes'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <enum name='sourceType'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <value>file</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <value>anonymous</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <value>memfd</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:   </memoryBacking>
Oct 11 04:46:12 compute-0 nova_compute[259400]:   <devices>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <disk supported='yes'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='diskDevice'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>disk</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>cdrom</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>floppy</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>lun</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='bus'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>ide</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>fdc</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>scsi</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>virtio</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>usb</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>sata</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='model'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>virtio</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>virtio-transitional</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>virtio-non-transitional</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     </disk>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <graphics supported='yes'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='type'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>vnc</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>egl-headless</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>dbus</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     </graphics>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <video supported='yes'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='modelType'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>vga</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>cirrus</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>virtio</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>none</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>bochs</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>ramfb</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     </video>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <hostdev supported='yes'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='mode'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>subsystem</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='startupPolicy'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>default</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>mandatory</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>requisite</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>optional</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='subsysType'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>usb</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>pci</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>scsi</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='capsType'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='pciBackend'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     </hostdev>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <rng supported='yes'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='model'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>virtio</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>virtio-transitional</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>virtio-non-transitional</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='backendModel'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>random</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>egd</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>builtin</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     </rng>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <filesystem supported='yes'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='driverType'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>path</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>handle</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>virtiofs</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     </filesystem>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <tpm supported='yes'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='model'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>tpm-tis</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>tpm-crb</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='backendModel'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>emulator</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>external</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='backendVersion'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>2.0</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     </tpm>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <redirdev supported='yes'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='bus'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>usb</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     </redirdev>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <channel supported='yes'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='type'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>pty</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>unix</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     </channel>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <crypto supported='yes'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='model'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='type'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>qemu</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='backendModel'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>builtin</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     </crypto>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <interface supported='yes'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='backendType'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>default</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>passt</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     </interface>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <panic supported='yes'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='model'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>isa</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>hyperv</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     </panic>
Oct 11 04:46:12 compute-0 nova_compute[259400]:   </devices>
Oct 11 04:46:12 compute-0 nova_compute[259400]:   <features>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <gic supported='no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <vmcoreinfo supported='yes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <genid supported='yes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <backingStoreInput supported='yes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <backup supported='yes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <async-teardown supported='yes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <ps2 supported='yes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <sev supported='no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <sgx supported='no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <hyperv supported='yes'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='features'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>relaxed</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>vapic</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>spinlocks</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>vpindex</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>runtime</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>synic</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>stimer</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>reset</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>vendor_id</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>frequencies</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>reenlightenment</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>tlbflush</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>ipi</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>avic</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>emsr_bitmap</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>xmm_input</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     </hyperv>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <launchSecurity supported='no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:   </features>
Oct 11 04:46:12 compute-0 nova_compute[259400]: </domainCapabilities>
Oct 11 04:46:12 compute-0 nova_compute[259400]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.098 2 DEBUG nova.virt.libvirt.host [None req-a1f2b475-2e76-45d6-8bbb-14ac83f8002f - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Oct 11 04:46:12 compute-0 nova_compute[259400]: <domainCapabilities>
Oct 11 04:46:12 compute-0 nova_compute[259400]:   <path>/usr/libexec/qemu-kvm</path>
Oct 11 04:46:12 compute-0 nova_compute[259400]:   <domain>kvm</domain>
Oct 11 04:46:12 compute-0 nova_compute[259400]:   <machine>pc-q35-rhel9.6.0</machine>
Oct 11 04:46:12 compute-0 nova_compute[259400]:   <arch>i686</arch>
Oct 11 04:46:12 compute-0 nova_compute[259400]:   <vcpu max='4096'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:   <iothreads supported='yes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:   <os supported='yes'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <enum name='firmware'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <loader supported='yes'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='type'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>rom</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>pflash</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='readonly'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>yes</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>no</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='secure'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>no</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     </loader>
Oct 11 04:46:12 compute-0 nova_compute[259400]:   </os>
Oct 11 04:46:12 compute-0 nova_compute[259400]:   <cpu>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <mode name='host-passthrough' supported='yes'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='hostPassthroughMigratable'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>on</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>off</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     </mode>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <mode name='maximum' supported='yes'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='maximumMigratable'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>on</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>off</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     </mode>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <mode name='host-model' supported='yes'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <vendor>AMD</vendor>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='x2apic'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='tsc-deadline'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='hypervisor'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='tsc_adjust'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='spec-ctrl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='stibp'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='arch-capabilities'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='ssbd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='cmp_legacy'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='overflow-recov'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='succor'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='ibrs'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='amd-ssbd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='virt-ssbd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='lbrv'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='tsc-scale'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='vmcb-clean'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='flushbyasid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='pause-filter'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='pfthreshold'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='svme-addr-chk'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='rdctl-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='mds-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='pschange-mc-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='gds-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='rfds-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='disable' name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     </mode>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <mode name='custom' supported='yes'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Broadwell'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Broadwell-IBRS'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Broadwell-noTSX'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Broadwell-v1'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Broadwell-v2'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Broadwell-v3'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Broadwell-v4'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Cascadelake-Server'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ibrs-all'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Cascadelake-Server-v1'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Cascadelake-Server-v2'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ibrs-all'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Cascadelake-Server-v3'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ibrs-all'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Cascadelake-Server-v4'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ibrs-all'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Cascadelake-Server-v5'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ibrs-all'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Cooperlake'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-bf16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ibrs-all'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='taa-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Cooperlake-v1'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-bf16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ibrs-all'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='taa-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Cooperlake-v2'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-bf16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ibrs-all'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='taa-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Denverton'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='mpx'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Denverton-v1'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='mpx'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Denverton-v2'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Denverton-v3'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Dhyana-v2'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='EPYC-Genoa'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amd-psfd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='auto-ibrs'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-bf16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bitalg'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512ifma'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='la57'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='no-nested-data-bp'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='null-sel-clr-base'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='stibp-always-on'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vaes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='EPYC-Genoa-v1'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amd-psfd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='auto-ibrs'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-bf16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bitalg'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512ifma'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='la57'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='no-nested-data-bp'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='null-sel-clr-base'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='stibp-always-on'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vaes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='EPYC-Milan'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='EPYC-Milan-v1'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='EPYC-Milan-v2'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amd-psfd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='no-nested-data-bp'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='null-sel-clr-base'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='stibp-always-on'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vaes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='EPYC-Rome'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='EPYC-Rome-v1'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='EPYC-Rome-v2'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='EPYC-Rome-v3'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='EPYC-v3'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='EPYC-v4'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='GraniteRapids'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amx-bf16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amx-fp16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amx-int8'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amx-tile'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx-vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-bf16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-fp16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bitalg'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512ifma'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='bus-lock-detect'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fbsdp-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrc'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrs'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fzrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ibrs-all'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='la57'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='mcdt-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pbrsb-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='prefetchiti'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='psdp-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='sbdr-ssdp-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='serialize'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='taa-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='tsx-ldtrk'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vaes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xfd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='GraniteRapids-v1'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amx-bf16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amx-fp16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amx-int8'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amx-tile'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx-vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-bf16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-fp16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bitalg'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512ifma'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='bus-lock-detect'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fbsdp-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrc'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrs'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fzrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ibrs-all'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='la57'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='mcdt-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pbrsb-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='prefetchiti'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='psdp-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='sbdr-ssdp-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='serialize'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='taa-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='tsx-ldtrk'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vaes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xfd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='GraniteRapids-v2'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amx-bf16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amx-fp16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amx-int8'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amx-tile'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx-vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx10'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx10-128'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx10-256'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx10-512'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-bf16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-fp16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bitalg'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512ifma'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='bus-lock-detect'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='cldemote'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fbsdp-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrc'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrs'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fzrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ibrs-all'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='la57'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='mcdt-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='movdir64b'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='movdiri'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pbrsb-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='prefetchiti'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='psdp-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='sbdr-ssdp-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='serialize'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ss'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='taa-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='tsx-ldtrk'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vaes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xfd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Haswell'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Haswell-IBRS'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Haswell-noTSX'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Haswell-v1'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Haswell-v2'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Haswell-v3'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Haswell-v4'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Icelake-Server'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bitalg'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='la57'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vaes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Icelake-Server-noTSX'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bitalg'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='la57'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vaes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Icelake-Server-v1'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bitalg'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='la57'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vaes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Icelake-Server-v2'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bitalg'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='la57'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vaes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Icelake-Server-v3'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bitalg'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ibrs-all'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='la57'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='taa-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vaes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Icelake-Server-v4'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bitalg'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512ifma'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ibrs-all'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='la57'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='taa-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vaes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Icelake-Server-v5'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bitalg'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512ifma'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ibrs-all'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='la57'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='taa-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vaes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Icelake-Server-v6'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bitalg'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512ifma'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ibrs-all'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='la57'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='taa-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vaes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Icelake-Server-v7'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bitalg'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512ifma'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ibrs-all'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='la57'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='taa-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vaes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='IvyBridge'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='IvyBridge-IBRS'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='IvyBridge-v1'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='IvyBridge-v2'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='KnightsMill'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-4fmaps'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-4vnniw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512er'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512pf'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ss'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='KnightsMill-v1'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-4fmaps'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-4vnniw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512er'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512pf'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ss'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Opteron_G4'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fma4'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xop'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Opteron_G4-v1'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fma4'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xop'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Opteron_G5'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fma4'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='tbm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xop'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Opteron_G5-v1'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fma4'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='tbm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xop'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='SapphireRapids'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amx-bf16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amx-int8'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amx-tile'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx-vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-bf16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-fp16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bitalg'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512ifma'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='bus-lock-detect'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrc'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrs'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fzrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ibrs-all'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='la57'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='serialize'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='taa-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='tsx-ldtrk'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vaes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xfd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='SapphireRapids-v1'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amx-bf16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amx-int8'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amx-tile'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx-vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-bf16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-fp16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bitalg'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512ifma'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='bus-lock-detect'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrc'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrs'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fzrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ibrs-all'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='la57'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='serialize'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='taa-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='tsx-ldtrk'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vaes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xfd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='SapphireRapids-v2'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amx-bf16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amx-int8'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amx-tile'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx-vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-bf16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-fp16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bitalg'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512ifma'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='bus-lock-detect'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fbsdp-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrc'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrs'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fzrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ibrs-all'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='la57'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='psdp-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='sbdr-ssdp-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='serialize'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='taa-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='tsx-ldtrk'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vaes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xfd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='SapphireRapids-v3'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amx-bf16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amx-int8'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amx-tile'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx-vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-bf16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-fp16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bitalg'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512ifma'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='bus-lock-detect'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='cldemote'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fbsdp-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrc'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrs'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fzrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ibrs-all'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='la57'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='movdir64b'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='movdiri'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='psdp-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='sbdr-ssdp-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='serialize'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ss'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='taa-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='tsx-ldtrk'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vaes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xfd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='SierraForest'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx-ifma'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx-ne-convert'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx-vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx-vnni-int8'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='bus-lock-detect'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='cmpccxadd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fbsdp-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrs'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ibrs-all'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='mcdt-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pbrsb-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='psdp-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='sbdr-ssdp-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='serialize'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vaes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='SierraForest-v1'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx-ifma'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx-ne-convert'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx-vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx-vnni-int8'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='bus-lock-detect'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='cmpccxadd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fbsdp-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrs'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ibrs-all'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='mcdt-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pbrsb-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='psdp-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='sbdr-ssdp-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='serialize'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vaes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Skylake-Client'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Skylake-Client-IBRS'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Skylake-Client-v1'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Skylake-Client-v2'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Skylake-Client-v3'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Skylake-Client-v4'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Skylake-Server'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Skylake-Server-IBRS'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Skylake-Server-v1'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Skylake-Server-v2'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Skylake-Server-v3'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Skylake-Server-v4'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Skylake-Server-v5'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Snowridge'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='cldemote'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='core-capability'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='movdir64b'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='movdiri'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='mpx'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='split-lock-detect'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Snowridge-v1'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='cldemote'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='core-capability'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='movdir64b'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='movdiri'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='mpx'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='split-lock-detect'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Snowridge-v2'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='cldemote'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='core-capability'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='movdir64b'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='movdiri'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='split-lock-detect'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Snowridge-v3'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='cldemote'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='core-capability'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='movdir64b'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='movdiri'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='split-lock-detect'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Snowridge-v4'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='cldemote'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='movdir64b'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='movdiri'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='athlon'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='3dnow'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='3dnowext'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='athlon-v1'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='3dnow'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='3dnowext'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='core2duo'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ss'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='core2duo-v1'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ss'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='coreduo'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ss'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='coreduo-v1'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ss'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='n270'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ss'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='n270-v1'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ss'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='phenom'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='3dnow'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='3dnowext'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='phenom-v1'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='3dnow'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='3dnowext'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     </mode>
Oct 11 04:46:12 compute-0 nova_compute[259400]:   </cpu>
Oct 11 04:46:12 compute-0 nova_compute[259400]:   <memoryBacking supported='yes'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <enum name='sourceType'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <value>file</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <value>anonymous</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <value>memfd</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:   </memoryBacking>
Oct 11 04:46:12 compute-0 nova_compute[259400]:   <devices>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <disk supported='yes'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='diskDevice'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>disk</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>cdrom</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>floppy</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>lun</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='bus'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>fdc</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>scsi</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>virtio</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>usb</value>
Oct 11 04:46:12 compute-0 sudo[259819]: pam_unix(sudo:session): session closed for user root
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>sata</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='model'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>virtio</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>virtio-transitional</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>virtio-non-transitional</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     </disk>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <graphics supported='yes'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='type'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>vnc</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>egl-headless</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>dbus</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     </graphics>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <video supported='yes'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='modelType'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>vga</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>cirrus</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>virtio</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>none</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>bochs</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>ramfb</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     </video>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <hostdev supported='yes'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='mode'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>subsystem</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='startupPolicy'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>default</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>mandatory</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>requisite</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>optional</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='subsysType'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>usb</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>pci</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>scsi</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='capsType'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='pciBackend'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     </hostdev>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <rng supported='yes'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='model'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>virtio</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>virtio-transitional</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>virtio-non-transitional</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='backendModel'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>random</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>egd</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>builtin</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     </rng>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <filesystem supported='yes'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='driverType'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>path</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>handle</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>virtiofs</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     </filesystem>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <tpm supported='yes'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='model'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>tpm-tis</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>tpm-crb</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='backendModel'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>emulator</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>external</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='backendVersion'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>2.0</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     </tpm>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <redirdev supported='yes'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='bus'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>usb</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     </redirdev>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <channel supported='yes'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='type'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>pty</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>unix</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     </channel>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <crypto supported='yes'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='model'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='type'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>qemu</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='backendModel'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>builtin</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     </crypto>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <interface supported='yes'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='backendType'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>default</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>passt</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     </interface>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <panic supported='yes'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='model'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>isa</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>hyperv</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     </panic>
Oct 11 04:46:12 compute-0 nova_compute[259400]:   </devices>
Oct 11 04:46:12 compute-0 nova_compute[259400]:   <features>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <gic supported='no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <vmcoreinfo supported='yes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <genid supported='yes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <backingStoreInput supported='yes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <backup supported='yes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <async-teardown supported='yes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <ps2 supported='yes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <sev supported='no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <sgx supported='no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <hyperv supported='yes'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='features'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>relaxed</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>vapic</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>spinlocks</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>vpindex</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>runtime</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>synic</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>stimer</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>reset</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>vendor_id</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>frequencies</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>reenlightenment</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>tlbflush</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>ipi</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>avic</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>emsr_bitmap</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>xmm_input</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     </hyperv>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <launchSecurity supported='no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:   </features>
Oct 11 04:46:12 compute-0 nova_compute[259400]: </domainCapabilities>
Oct 11 04:46:12 compute-0 nova_compute[259400]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.136 2 DEBUG nova.virt.libvirt.host [None req-a1f2b475-2e76-45d6-8bbb-14ac83f8002f - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.139 2 DEBUG nova.virt.libvirt.host [None req-a1f2b475-2e76-45d6-8bbb-14ac83f8002f - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Oct 11 04:46:12 compute-0 nova_compute[259400]: <domainCapabilities>
Oct 11 04:46:12 compute-0 nova_compute[259400]:   <path>/usr/libexec/qemu-kvm</path>
Oct 11 04:46:12 compute-0 nova_compute[259400]:   <domain>kvm</domain>
Oct 11 04:46:12 compute-0 nova_compute[259400]:   <machine>pc-i440fx-rhel7.6.0</machine>
Oct 11 04:46:12 compute-0 nova_compute[259400]:   <arch>x86_64</arch>
Oct 11 04:46:12 compute-0 nova_compute[259400]:   <vcpu max='240'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:   <iothreads supported='yes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:   <os supported='yes'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <enum name='firmware'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <loader supported='yes'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='type'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>rom</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>pflash</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='readonly'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>yes</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>no</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='secure'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>no</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     </loader>
Oct 11 04:46:12 compute-0 nova_compute[259400]:   </os>
Oct 11 04:46:12 compute-0 nova_compute[259400]:   <cpu>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <mode name='host-passthrough' supported='yes'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='hostPassthroughMigratable'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>on</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>off</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     </mode>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <mode name='maximum' supported='yes'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='maximumMigratable'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>on</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>off</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     </mode>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <mode name='host-model' supported='yes'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <vendor>AMD</vendor>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='x2apic'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='tsc-deadline'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='hypervisor'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='tsc_adjust'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='spec-ctrl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='stibp'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='arch-capabilities'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='ssbd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='cmp_legacy'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='overflow-recov'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='succor'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='ibrs'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='amd-ssbd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='virt-ssbd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='lbrv'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='tsc-scale'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='vmcb-clean'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='flushbyasid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='pause-filter'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='pfthreshold'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='svme-addr-chk'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='rdctl-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='mds-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='pschange-mc-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='gds-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='rfds-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='disable' name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     </mode>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <mode name='custom' supported='yes'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Broadwell'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Broadwell-IBRS'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Broadwell-noTSX'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Broadwell-v1'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Broadwell-v2'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Broadwell-v3'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Broadwell-v4'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Cascadelake-Server'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ibrs-all'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Cascadelake-Server-v1'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Cascadelake-Server-v2'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ibrs-all'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Cascadelake-Server-v3'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ibrs-all'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Cascadelake-Server-v4'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ibrs-all'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Cascadelake-Server-v5'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ibrs-all'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Cooperlake'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-bf16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ibrs-all'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='taa-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Cooperlake-v1'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-bf16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ibrs-all'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='taa-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Cooperlake-v2'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-bf16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ibrs-all'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='taa-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Denverton'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='mpx'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Denverton-v1'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='mpx'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Denverton-v2'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Denverton-v3'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Dhyana-v2'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='EPYC-Genoa'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amd-psfd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='auto-ibrs'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-bf16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bitalg'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512ifma'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='la57'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='no-nested-data-bp'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='null-sel-clr-base'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='stibp-always-on'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vaes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='EPYC-Genoa-v1'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amd-psfd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='auto-ibrs'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-bf16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bitalg'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512ifma'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='la57'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='no-nested-data-bp'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='null-sel-clr-base'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='stibp-always-on'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vaes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='EPYC-Milan'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='EPYC-Milan-v1'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='EPYC-Milan-v2'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amd-psfd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='no-nested-data-bp'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='null-sel-clr-base'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='stibp-always-on'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vaes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='EPYC-Rome'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='EPYC-Rome-v1'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='EPYC-Rome-v2'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='EPYC-Rome-v3'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='EPYC-v3'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='EPYC-v4'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='GraniteRapids'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amx-bf16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amx-fp16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amx-int8'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amx-tile'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx-vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-bf16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-fp16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bitalg'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512ifma'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='bus-lock-detect'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fbsdp-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrc'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrs'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fzrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ibrs-all'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='la57'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='mcdt-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pbrsb-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='prefetchiti'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='psdp-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='sbdr-ssdp-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='serialize'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='taa-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='tsx-ldtrk'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vaes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xfd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='GraniteRapids-v1'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amx-bf16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amx-fp16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amx-int8'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amx-tile'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx-vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-bf16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-fp16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bitalg'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512ifma'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='bus-lock-detect'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fbsdp-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrc'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrs'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fzrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ibrs-all'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='la57'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='mcdt-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pbrsb-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='prefetchiti'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='psdp-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='sbdr-ssdp-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='serialize'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='taa-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='tsx-ldtrk'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vaes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xfd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='GraniteRapids-v2'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amx-bf16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amx-fp16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amx-int8'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amx-tile'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx-vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx10'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx10-128'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx10-256'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx10-512'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-bf16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-fp16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bitalg'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512ifma'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='bus-lock-detect'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='cldemote'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fbsdp-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrc'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrs'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fzrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ibrs-all'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='la57'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='mcdt-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='movdir64b'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='movdiri'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pbrsb-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='prefetchiti'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='psdp-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='sbdr-ssdp-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='serialize'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ss'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='taa-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='tsx-ldtrk'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vaes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xfd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Haswell'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Haswell-IBRS'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Haswell-noTSX'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Haswell-v1'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Haswell-v2'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Haswell-v3'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Haswell-v4'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Icelake-Server'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bitalg'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='la57'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vaes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Icelake-Server-noTSX'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bitalg'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='la57'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vaes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Icelake-Server-v1'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bitalg'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='la57'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vaes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Icelake-Server-v2'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bitalg'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='la57'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vaes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Icelake-Server-v3'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bitalg'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ibrs-all'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='la57'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='taa-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vaes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Icelake-Server-v4'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bitalg'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512ifma'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ibrs-all'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='la57'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='taa-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vaes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Icelake-Server-v5'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bitalg'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512ifma'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ibrs-all'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='la57'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='taa-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vaes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Icelake-Server-v6'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bitalg'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512ifma'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ibrs-all'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='la57'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='taa-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vaes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Icelake-Server-v7'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bitalg'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512ifma'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ibrs-all'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='la57'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='taa-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vaes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='IvyBridge'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='IvyBridge-IBRS'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='IvyBridge-v1'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='IvyBridge-v2'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='KnightsMill'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-4fmaps'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-4vnniw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512er'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512pf'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ss'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='KnightsMill-v1'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-4fmaps'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-4vnniw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512er'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512pf'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ss'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Opteron_G4'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fma4'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xop'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Opteron_G4-v1'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fma4'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xop'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Opteron_G5'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fma4'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='tbm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xop'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Opteron_G5-v1'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fma4'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='tbm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xop'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='SapphireRapids'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amx-bf16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amx-int8'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amx-tile'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx-vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-bf16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-fp16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bitalg'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512ifma'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='bus-lock-detect'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrc'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrs'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fzrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ibrs-all'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='la57'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='serialize'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='taa-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='tsx-ldtrk'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vaes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xfd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='SapphireRapids-v1'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amx-bf16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amx-int8'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amx-tile'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx-vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-bf16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-fp16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bitalg'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512ifma'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='bus-lock-detect'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrc'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrs'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fzrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ibrs-all'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='la57'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='serialize'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='taa-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='tsx-ldtrk'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vaes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xfd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='SapphireRapids-v2'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amx-bf16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amx-int8'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amx-tile'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx-vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-bf16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-fp16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bitalg'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512ifma'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='bus-lock-detect'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fbsdp-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrc'/>
Oct 11 04:46:12 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrm'/>
Oct 11 04:46:12 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrs'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fzrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ibrs-all'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='la57'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='psdp-no'/>
Oct 11 04:46:12 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:46:12 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='sbdr-ssdp-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='serialize'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='taa-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='tsx-ldtrk'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vaes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xfd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='SapphireRapids-v3'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amx-bf16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amx-int8'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amx-tile'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx-vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-bf16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-fp16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bitalg'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512ifma'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='bus-lock-detect'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='cldemote'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fbsdp-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrc'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrs'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fzrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ibrs-all'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='la57'/>
Oct 11 04:46:12 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='movdir64b'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='movdiri'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='psdp-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='sbdr-ssdp-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='serialize'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ss'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='taa-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='tsx-ldtrk'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vaes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xfd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='SierraForest'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx-ifma'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx-ne-convert'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx-vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx-vnni-int8'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='bus-lock-detect'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='cmpccxadd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fbsdp-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrs'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ibrs-all'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='mcdt-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pbrsb-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='psdp-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='sbdr-ssdp-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='serialize'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vaes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='SierraForest-v1'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx-ifma'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx-ne-convert'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx-vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx-vnni-int8'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='bus-lock-detect'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='cmpccxadd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fbsdp-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrs'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ibrs-all'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='mcdt-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pbrsb-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='psdp-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='sbdr-ssdp-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='serialize'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vaes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Skylake-Client'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Skylake-Client-IBRS'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Skylake-Client-v1'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Skylake-Client-v2'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Skylake-Client-v3'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Skylake-Client-v4'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Skylake-Server'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Skylake-Server-IBRS'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Skylake-Server-v1'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Skylake-Server-v2'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Skylake-Server-v3'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Skylake-Server-v4'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Skylake-Server-v5'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Snowridge'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='cldemote'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='core-capability'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='movdir64b'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='movdiri'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='mpx'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='split-lock-detect'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Snowridge-v1'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='cldemote'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='core-capability'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='movdir64b'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='movdiri'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='mpx'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='split-lock-detect'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Snowridge-v2'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='cldemote'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='core-capability'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='movdir64b'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='movdiri'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='split-lock-detect'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Snowridge-v3'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='cldemote'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='core-capability'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='movdir64b'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='movdiri'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='split-lock-detect'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Snowridge-v4'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='cldemote'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='movdir64b'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='movdiri'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='athlon'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='3dnow'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='3dnowext'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='athlon-v1'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='3dnow'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='3dnowext'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='core2duo'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ss'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='core2duo-v1'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ss'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='coreduo'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ss'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='coreduo-v1'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ss'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='n270'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ss'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='n270-v1'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ss'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='phenom'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='3dnow'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='3dnowext'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='phenom-v1'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='3dnow'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='3dnowext'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     </mode>
Oct 11 04:46:12 compute-0 nova_compute[259400]:   </cpu>
Oct 11 04:46:12 compute-0 nova_compute[259400]:   <memoryBacking supported='yes'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <enum name='sourceType'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <value>file</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <value>anonymous</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <value>memfd</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:   </memoryBacking>
Oct 11 04:46:12 compute-0 nova_compute[259400]:   <devices>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <disk supported='yes'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='diskDevice'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>disk</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>cdrom</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>floppy</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>lun</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='bus'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>ide</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>fdc</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>scsi</value>
Oct 11 04:46:12 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>virtio</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>usb</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>sata</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='model'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>virtio</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>virtio-transitional</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>virtio-non-transitional</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     </disk>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <graphics supported='yes'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='type'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>vnc</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>egl-headless</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>dbus</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     </graphics>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <video supported='yes'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='modelType'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>vga</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>cirrus</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>virtio</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>none</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>bochs</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>ramfb</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     </video>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <hostdev supported='yes'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='mode'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>subsystem</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='startupPolicy'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>default</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>mandatory</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>requisite</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>optional</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='subsysType'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>usb</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>pci</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>scsi</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='capsType'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='pciBackend'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     </hostdev>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <rng supported='yes'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='model'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>virtio</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>virtio-transitional</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>virtio-non-transitional</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='backendModel'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>random</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>egd</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>builtin</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     </rng>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <filesystem supported='yes'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='driverType'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>path</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>handle</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>virtiofs</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     </filesystem>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <tpm supported='yes'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='model'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>tpm-tis</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>tpm-crb</value>
Oct 11 04:46:12 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev b9e36bf3-252b-4e35-96f8-3b5ee81bb928 does not exist
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev 420185d4-7d3b-49e5-a381-b1893dc70baf does not exist
Oct 11 04:46:12 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:46:12 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:46:12 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev 9b8a60fc-2077-4b41-950b-8be3602ee8c6 does not exist
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='backendModel'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>emulator</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>external</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='backendVersion'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>2.0</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     </tpm>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <redirdev supported='yes'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='bus'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>usb</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     </redirdev>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <channel supported='yes'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='type'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>pty</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>unix</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     </channel>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <crypto supported='yes'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='model'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='type'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>qemu</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='backendModel'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>builtin</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     </crypto>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <interface supported='yes'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='backendType'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>default</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>passt</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     </interface>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <panic supported='yes'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='model'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>isa</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>hyperv</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     </panic>
Oct 11 04:46:12 compute-0 nova_compute[259400]:   </devices>
Oct 11 04:46:12 compute-0 nova_compute[259400]:   <features>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <gic supported='no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <vmcoreinfo supported='yes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <genid supported='yes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <backingStoreInput supported='yes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <backup supported='yes'/>
Oct 11 04:46:12 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <async-teardown supported='yes'/>
Oct 11 04:46:12 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <ps2 supported='yes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <sev supported='no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <sgx supported='no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <hyperv supported='yes'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='features'>
Oct 11 04:46:12 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>relaxed</value>
Oct 11 04:46:12 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>vapic</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>spinlocks</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>vpindex</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>runtime</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>synic</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>stimer</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>reset</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>vendor_id</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>frequencies</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>reenlightenment</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>tlbflush</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>ipi</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>avic</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>emsr_bitmap</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>xmm_input</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     </hyperv>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <launchSecurity supported='no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:   </features>
Oct 11 04:46:12 compute-0 nova_compute[259400]: </domainCapabilities>
Oct 11 04:46:12 compute-0 nova_compute[259400]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.211 2 DEBUG nova.virt.libvirt.host [None req-a1f2b475-2e76-45d6-8bbb-14ac83f8002f - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Oct 11 04:46:12 compute-0 nova_compute[259400]: <domainCapabilities>
Oct 11 04:46:12 compute-0 nova_compute[259400]:   <path>/usr/libexec/qemu-kvm</path>
Oct 11 04:46:12 compute-0 nova_compute[259400]:   <domain>kvm</domain>
Oct 11 04:46:12 compute-0 nova_compute[259400]:   <machine>pc-q35-rhel9.6.0</machine>
Oct 11 04:46:12 compute-0 nova_compute[259400]:   <arch>x86_64</arch>
Oct 11 04:46:12 compute-0 nova_compute[259400]:   <vcpu max='4096'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:   <iothreads supported='yes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:   <os supported='yes'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <enum name='firmware'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <value>efi</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <loader supported='yes'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='type'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>rom</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>pflash</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='readonly'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>yes</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>no</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='secure'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>yes</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>no</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     </loader>
Oct 11 04:46:12 compute-0 nova_compute[259400]:   </os>
Oct 11 04:46:12 compute-0 nova_compute[259400]:   <cpu>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <mode name='host-passthrough' supported='yes'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='hostPassthroughMigratable'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>on</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>off</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     </mode>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <mode name='maximum' supported='yes'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='maximumMigratable'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>on</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>off</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     </mode>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <mode name='host-model' supported='yes'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <vendor>AMD</vendor>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='x2apic'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='tsc-deadline'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='hypervisor'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='tsc_adjust'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='spec-ctrl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='stibp'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='arch-capabilities'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='ssbd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='cmp_legacy'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='overflow-recov'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='succor'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='ibrs'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='amd-ssbd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='virt-ssbd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='lbrv'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='tsc-scale'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='vmcb-clean'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='flushbyasid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='pause-filter'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='pfthreshold'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='svme-addr-chk'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='rdctl-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='mds-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='pschange-mc-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='gds-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='require' name='rfds-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <feature policy='disable' name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     </mode>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <mode name='custom' supported='yes'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Broadwell'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Broadwell-IBRS'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Broadwell-noTSX'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Broadwell-v1'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Broadwell-v2'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Broadwell-v3'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Broadwell-v4'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Cascadelake-Server'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ibrs-all'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Cascadelake-Server-v1'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Cascadelake-Server-v2'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ibrs-all'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Cascadelake-Server-v3'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ibrs-all'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Cascadelake-Server-v4'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ibrs-all'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Cascadelake-Server-v5'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ibrs-all'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Cooperlake'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-bf16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ibrs-all'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='taa-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Cooperlake-v1'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-bf16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ibrs-all'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='taa-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Cooperlake-v2'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-bf16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ibrs-all'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='taa-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Denverton'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='mpx'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Denverton-v1'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='mpx'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Denverton-v2'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Denverton-v3'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Dhyana-v2'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='EPYC-Genoa'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amd-psfd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='auto-ibrs'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-bf16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bitalg'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512ifma'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='la57'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='no-nested-data-bp'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='null-sel-clr-base'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='stibp-always-on'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vaes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='EPYC-Genoa-v1'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amd-psfd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='auto-ibrs'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-bf16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bitalg'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512ifma'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='la57'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='no-nested-data-bp'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='null-sel-clr-base'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='stibp-always-on'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vaes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='EPYC-Milan'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='EPYC-Milan-v1'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='EPYC-Milan-v2'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amd-psfd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='no-nested-data-bp'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='null-sel-clr-base'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='stibp-always-on'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vaes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='EPYC-Rome'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='EPYC-Rome-v1'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='EPYC-Rome-v2'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='EPYC-Rome-v3'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='EPYC-v3'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='EPYC-v4'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='GraniteRapids'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amx-bf16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amx-fp16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amx-int8'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amx-tile'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx-vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-bf16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-fp16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bitalg'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512ifma'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='bus-lock-detect'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fbsdp-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrc'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrs'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fzrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ibrs-all'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='la57'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='mcdt-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pbrsb-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='prefetchiti'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='psdp-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='sbdr-ssdp-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='serialize'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='taa-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='tsx-ldtrk'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vaes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xfd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='GraniteRapids-v1'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amx-bf16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amx-fp16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amx-int8'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amx-tile'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx-vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-bf16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-fp16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bitalg'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512ifma'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='bus-lock-detect'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fbsdp-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrc'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrs'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fzrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ibrs-all'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='la57'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='mcdt-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pbrsb-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='prefetchiti'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='psdp-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='sbdr-ssdp-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='serialize'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='taa-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='tsx-ldtrk'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vaes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xfd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='GraniteRapids-v2'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amx-bf16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amx-fp16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amx-int8'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amx-tile'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx-vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx10'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx10-128'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx10-256'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx10-512'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-bf16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-fp16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bitalg'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512ifma'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='bus-lock-detect'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='cldemote'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fbsdp-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrc'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrs'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fzrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ibrs-all'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='la57'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='mcdt-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='movdir64b'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='movdiri'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pbrsb-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='prefetchiti'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='psdp-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='sbdr-ssdp-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='serialize'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ss'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='taa-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='tsx-ldtrk'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vaes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xfd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Haswell'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Haswell-IBRS'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Haswell-noTSX'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Haswell-v1'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Haswell-v2'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Haswell-v3'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Haswell-v4'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Icelake-Server'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bitalg'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='la57'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vaes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Icelake-Server-noTSX'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bitalg'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='la57'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vaes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Icelake-Server-v1'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bitalg'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='la57'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vaes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Icelake-Server-v2'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bitalg'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='la57'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vaes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Icelake-Server-v3'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bitalg'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ibrs-all'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='la57'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='taa-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vaes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Icelake-Server-v4'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bitalg'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512ifma'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ibrs-all'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='la57'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='taa-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vaes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Icelake-Server-v5'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bitalg'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512ifma'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ibrs-all'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='la57'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='taa-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vaes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Icelake-Server-v6'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bitalg'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512ifma'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ibrs-all'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='la57'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='taa-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vaes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Icelake-Server-v7'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bitalg'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512ifma'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ibrs-all'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='la57'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='taa-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vaes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='IvyBridge'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='IvyBridge-IBRS'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='IvyBridge-v1'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='IvyBridge-v2'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='KnightsMill'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-4fmaps'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-4vnniw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512er'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512pf'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ss'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='KnightsMill-v1'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-4fmaps'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-4vnniw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512er'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512pf'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ss'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Opteron_G4'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fma4'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xop'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Opteron_G4-v1'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fma4'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xop'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Opteron_G5'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fma4'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='tbm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xop'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Opteron_G5-v1'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fma4'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='tbm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xop'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='SapphireRapids'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amx-bf16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amx-int8'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amx-tile'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx-vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-bf16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-fp16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bitalg'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512ifma'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='bus-lock-detect'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrc'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrs'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fzrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ibrs-all'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='la57'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='serialize'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='taa-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='tsx-ldtrk'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vaes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xfd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='SapphireRapids-v1'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amx-bf16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amx-int8'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amx-tile'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx-vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-bf16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-fp16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bitalg'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512ifma'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='bus-lock-detect'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrc'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrs'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fzrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 sudo[259895]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ibrs-all'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='la57'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='serialize'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='taa-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='tsx-ldtrk'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vaes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xfd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='SapphireRapids-v2'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amx-bf16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amx-int8'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amx-tile'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx-vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-bf16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-fp16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bitalg'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512ifma'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='bus-lock-detect'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fbsdp-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrc'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrs'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fzrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ibrs-all'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='la57'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='psdp-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='sbdr-ssdp-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='serialize'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='taa-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='tsx-ldtrk'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vaes'/>
Oct 11 04:46:12 compute-0 sudo[259895]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xfd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='SapphireRapids-v3'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amx-bf16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amx-int8'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='amx-tile'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx-vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-bf16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-fp16'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512-vpopcntdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bitalg'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512ifma'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vbmi2'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='bus-lock-detect'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='cldemote'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fbsdp-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrc'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrs'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fzrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ibrs-all'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='la57'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='movdir64b'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='movdiri'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='psdp-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='sbdr-ssdp-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='serialize'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ss'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='taa-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='tsx-ldtrk'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vaes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xfd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='SierraForest'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx-ifma'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx-ne-convert'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx-vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx-vnni-int8'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='bus-lock-detect'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='cmpccxadd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fbsdp-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrs'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ibrs-all'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='mcdt-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pbrsb-no'/>
Oct 11 04:46:12 compute-0 sudo[259895]: pam_unix(sudo:session): session closed for user root
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='psdp-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='sbdr-ssdp-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='serialize'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vaes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='SierraForest-v1'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx-ifma'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx-ne-convert'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx-vnni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx-vnni-int8'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='bus-lock-detect'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='cmpccxadd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fbsdp-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='fsrs'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ibrs-all'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='mcdt-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pbrsb-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='psdp-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='sbdr-ssdp-no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='serialize'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vaes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='vpclmulqdq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Skylake-Client'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Skylake-Client-IBRS'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Skylake-Client-v1'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Skylake-Client-v2'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Skylake-Client-v3'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Skylake-Client-v4'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Skylake-Server'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Skylake-Server-IBRS'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Skylake-Server-v1'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Skylake-Server-v2'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='hle'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='rtm'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Skylake-Server-v3'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Skylake-Server-v4'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Skylake-Server-v5'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512bw'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512cd'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512dq'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512f'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='avx512vl'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='invpcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pcid'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='pku'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Snowridge'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='cldemote'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='core-capability'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='movdir64b'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='movdiri'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='mpx'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='split-lock-detect'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Snowridge-v1'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='cldemote'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='core-capability'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='movdir64b'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='movdiri'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='mpx'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='split-lock-detect'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Snowridge-v2'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='cldemote'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='core-capability'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='movdir64b'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='movdiri'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='split-lock-detect'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Snowridge-v3'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='cldemote'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='core-capability'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='movdir64b'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='movdiri'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='split-lock-detect'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='Snowridge-v4'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='cldemote'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='erms'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='gfni'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='movdir64b'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='movdiri'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='xsaves'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='athlon'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='3dnow'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='3dnowext'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='athlon-v1'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='3dnow'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='3dnowext'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='core2duo'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ss'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='core2duo-v1'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ss'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='coreduo'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ss'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='coreduo-v1'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ss'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='n270'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ss'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='n270-v1'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='ss'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='phenom'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='3dnow'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='3dnowext'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <blockers model='phenom-v1'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='3dnow'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <feature name='3dnowext'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </blockers>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     </mode>
Oct 11 04:46:12 compute-0 nova_compute[259400]:   </cpu>
Oct 11 04:46:12 compute-0 nova_compute[259400]:   <memoryBacking supported='yes'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <enum name='sourceType'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <value>file</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <value>anonymous</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <value>memfd</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:   </memoryBacking>
Oct 11 04:46:12 compute-0 nova_compute[259400]:   <devices>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <disk supported='yes'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='diskDevice'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>disk</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>cdrom</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>floppy</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>lun</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='bus'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>fdc</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>scsi</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>virtio</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>usb</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>sata</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='model'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>virtio</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>virtio-transitional</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>virtio-non-transitional</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     </disk>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <graphics supported='yes'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='type'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>vnc</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>egl-headless</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>dbus</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     </graphics>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <video supported='yes'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='modelType'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>vga</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>cirrus</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>virtio</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>none</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>bochs</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>ramfb</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     </video>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <hostdev supported='yes'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='mode'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>subsystem</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='startupPolicy'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>default</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>mandatory</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>requisite</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>optional</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='subsysType'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>usb</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>pci</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>scsi</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='capsType'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='pciBackend'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     </hostdev>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <rng supported='yes'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='model'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>virtio</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>virtio-transitional</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>virtio-non-transitional</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='backendModel'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>random</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>egd</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>builtin</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     </rng>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <filesystem supported='yes'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='driverType'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>path</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>handle</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>virtiofs</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     </filesystem>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <tpm supported='yes'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='model'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>tpm-tis</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>tpm-crb</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='backendModel'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>emulator</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>external</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='backendVersion'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>2.0</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     </tpm>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <redirdev supported='yes'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='bus'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>usb</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     </redirdev>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <channel supported='yes'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='type'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>pty</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>unix</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     </channel>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <crypto supported='yes'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='model'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='type'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>qemu</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='backendModel'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>builtin</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     </crypto>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <interface supported='yes'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='backendType'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>default</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>passt</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     </interface>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <panic supported='yes'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='model'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>isa</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>hyperv</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     </panic>
Oct 11 04:46:12 compute-0 nova_compute[259400]:   </devices>
Oct 11 04:46:12 compute-0 nova_compute[259400]:   <features>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <gic supported='no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <vmcoreinfo supported='yes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <genid supported='yes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <backingStoreInput supported='yes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <backup supported='yes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <async-teardown supported='yes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <ps2 supported='yes'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <sev supported='no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <sgx supported='no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <hyperv supported='yes'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       <enum name='features'>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>relaxed</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>vapic</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>spinlocks</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>vpindex</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>runtime</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>synic</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>stimer</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>reset</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>vendor_id</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>frequencies</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>reenlightenment</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>tlbflush</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>ipi</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>avic</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>emsr_bitmap</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:         <value>xmm_input</value>
Oct 11 04:46:12 compute-0 nova_compute[259400]:       </enum>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     </hyperv>
Oct 11 04:46:12 compute-0 nova_compute[259400]:     <launchSecurity supported='no'/>
Oct 11 04:46:12 compute-0 nova_compute[259400]:   </features>
Oct 11 04:46:12 compute-0 nova_compute[259400]: </domainCapabilities>
Oct 11 04:46:12 compute-0 nova_compute[259400]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.256 2 DEBUG nova.virt.libvirt.host [None req-a1f2b475-2e76-45d6-8bbb-14ac83f8002f - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.257 2 DEBUG nova.virt.libvirt.host [None req-a1f2b475-2e76-45d6-8bbb-14ac83f8002f - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.257 2 DEBUG nova.virt.libvirt.host [None req-a1f2b475-2e76-45d6-8bbb-14ac83f8002f - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.257 2 INFO nova.virt.libvirt.host [None req-a1f2b475-2e76-45d6-8bbb-14ac83f8002f - - - - - -] Secure Boot support detected
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.262 2 INFO nova.virt.libvirt.driver [None req-a1f2b475-2e76-45d6-8bbb-14ac83f8002f - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.263 2 INFO nova.virt.libvirt.driver [None req-a1f2b475-2e76-45d6-8bbb-14ac83f8002f - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.280 2 DEBUG nova.virt.libvirt.driver [None req-a1f2b475-2e76-45d6-8bbb-14ac83f8002f - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.311 2 INFO nova.virt.node [None req-a1f2b475-2e76-45d6-8bbb-14ac83f8002f - - - - - -] Determined node identity 1f05a244-23b6-4149-9b5a-a525e5860d18 from /var/lib/nova/compute_id
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.344 2 WARNING nova.compute.manager [None req-a1f2b475-2e76-45d6-8bbb-14ac83f8002f - - - - - -] Compute nodes ['1f05a244-23b6-4149-9b5a-a525e5860d18'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.392 2 INFO nova.compute.manager [None req-a1f2b475-2e76-45d6-8bbb-14ac83f8002f - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Oct 11 04:46:12 compute-0 sudo[259920]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:46:12 compute-0 sudo[259920]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:46:12 compute-0 sudo[259920]: pam_unix(sudo:session): session closed for user root
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.427 2 WARNING nova.compute.manager [None req-a1f2b475-2e76-45d6-8bbb-14ac83f8002f - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.427 2 DEBUG oslo_concurrency.lockutils [None req-a1f2b475-2e76-45d6-8bbb-14ac83f8002f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.427 2 DEBUG oslo_concurrency.lockutils [None req-a1f2b475-2e76-45d6-8bbb-14ac83f8002f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.427 2 DEBUG oslo_concurrency.lockutils [None req-a1f2b475-2e76-45d6-8bbb-14ac83f8002f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.427 2 DEBUG nova.compute.resource_tracker [None req-a1f2b475-2e76-45d6-8bbb-14ac83f8002f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.428 2 DEBUG oslo_concurrency.processutils [None req-a1f2b475-2e76-45d6-8bbb-14ac83f8002f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:46:12 compute-0 sudo[259945]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:46:12 compute-0 sudo[259945]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:46:12 compute-0 sudo[259945]: pam_unix(sudo:session): session closed for user root
Oct 11 04:46:12 compute-0 sudo[259971]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 04:46:12 compute-0 sudo[259971]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:46:12 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v713: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:46:12 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:46:12 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:46:12 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:46:12 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:46:12 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:46:12 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:46:12 compute-0 podman[260054]: 2025-10-11 04:46:12.812629818 +0000 UTC m=+0.042824836 container create aa2abf4de3f81c22480ae05aa25727725461bb98f2ccee7c2287165db3935ab4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_lumiere, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:46:12 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:46:12 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3168230117' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:46:12 compute-0 systemd[1]: Started libpod-conmon-aa2abf4de3f81c22480ae05aa25727725461bb98f2ccee7c2287165db3935ab4.scope.
Oct 11 04:46:12 compute-0 nova_compute[259400]: 2025-10-11 04:46:12.865 2 DEBUG oslo_concurrency.processutils [None req-a1f2b475-2e76-45d6-8bbb-14ac83f8002f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:46:12 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:46:12 compute-0 podman[260054]: 2025-10-11 04:46:12.793804631 +0000 UTC m=+0.023999669 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:46:12 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Oct 11 04:46:12 compute-0 podman[260054]: 2025-10-11 04:46:12.901420197 +0000 UTC m=+0.131615225 container init aa2abf4de3f81c22480ae05aa25727725461bb98f2ccee7c2287165db3935ab4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_lumiere, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 11 04:46:12 compute-0 podman[260054]: 2025-10-11 04:46:12.907982713 +0000 UTC m=+0.138177731 container start aa2abf4de3f81c22480ae05aa25727725461bb98f2ccee7c2287165db3935ab4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_lumiere, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 11 04:46:12 compute-0 podman[260054]: 2025-10-11 04:46:12.911322898 +0000 UTC m=+0.141517936 container attach aa2abf4de3f81c22480ae05aa25727725461bb98f2ccee7c2287165db3935ab4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_lumiere, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:46:12 compute-0 systemd[1]: libpod-aa2abf4de3f81c22480ae05aa25727725461bb98f2ccee7c2287165db3935ab4.scope: Deactivated successfully.
Oct 11 04:46:12 compute-0 stoic_lumiere[260072]: 167 167
Oct 11 04:46:12 compute-0 conmon[260072]: conmon aa2abf4de3f81c22480a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-aa2abf4de3f81c22480ae05aa25727725461bb98f2ccee7c2287165db3935ab4.scope/container/memory.events
Oct 11 04:46:12 compute-0 podman[260054]: 2025-10-11 04:46:12.914124009 +0000 UTC m=+0.144319047 container died aa2abf4de3f81c22480ae05aa25727725461bb98f2ccee7c2287165db3935ab4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_lumiere, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:46:12 compute-0 systemd[1]: Started libvirt nodedev daemon.
Oct 11 04:46:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-c3c94490a91a1aa75dcfd1d2c1b3dabe8de3c84e8bf93d9c7c13d841510bbc8f-merged.mount: Deactivated successfully.
Oct 11 04:46:12 compute-0 podman[260054]: 2025-10-11 04:46:12.9500919 +0000 UTC m=+0.180286928 container remove aa2abf4de3f81c22480ae05aa25727725461bb98f2ccee7c2287165db3935ab4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_lumiere, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 11 04:46:12 compute-0 systemd[1]: libpod-conmon-aa2abf4de3f81c22480ae05aa25727725461bb98f2ccee7c2287165db3935ab4.scope: Deactivated successfully.
Oct 11 04:46:13 compute-0 podman[260119]: 2025-10-11 04:46:13.123878873 +0000 UTC m=+0.047253738 container create f73f6c3aad8966a68dec24a8ab0ba28d11d5b037721ae610626eec379080367c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_sanderson, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 11 04:46:13 compute-0 nova_compute[259400]: 2025-10-11 04:46:13.162 2 WARNING nova.virt.libvirt.driver [None req-a1f2b475-2e76-45d6-8bbb-14ac83f8002f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 04:46:13 compute-0 nova_compute[259400]: 2025-10-11 04:46:13.163 2 DEBUG nova.compute.resource_tracker [None req-a1f2b475-2e76-45d6-8bbb-14ac83f8002f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5174MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 04:46:13 compute-0 nova_compute[259400]: 2025-10-11 04:46:13.163 2 DEBUG oslo_concurrency.lockutils [None req-a1f2b475-2e76-45d6-8bbb-14ac83f8002f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:46:13 compute-0 nova_compute[259400]: 2025-10-11 04:46:13.164 2 DEBUG oslo_concurrency.lockutils [None req-a1f2b475-2e76-45d6-8bbb-14ac83f8002f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:46:13 compute-0 systemd[1]: Started libpod-conmon-f73f6c3aad8966a68dec24a8ab0ba28d11d5b037721ae610626eec379080367c.scope.
Oct 11 04:46:13 compute-0 podman[260119]: 2025-10-11 04:46:13.098464949 +0000 UTC m=+0.021839864 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:46:13 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:46:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38dfb4cf0e54cd4fa7ae81161b337c523521673760a74d68e60bff0e44edcb39/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:46:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38dfb4cf0e54cd4fa7ae81161b337c523521673760a74d68e60bff0e44edcb39/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:46:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38dfb4cf0e54cd4fa7ae81161b337c523521673760a74d68e60bff0e44edcb39/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:46:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38dfb4cf0e54cd4fa7ae81161b337c523521673760a74d68e60bff0e44edcb39/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:46:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38dfb4cf0e54cd4fa7ae81161b337c523521673760a74d68e60bff0e44edcb39/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:46:13 compute-0 podman[260119]: 2025-10-11 04:46:13.216427278 +0000 UTC m=+0.139802163 container init f73f6c3aad8966a68dec24a8ab0ba28d11d5b037721ae610626eec379080367c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_sanderson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:46:13 compute-0 podman[260119]: 2025-10-11 04:46:13.222744718 +0000 UTC m=+0.146119583 container start f73f6c3aad8966a68dec24a8ab0ba28d11d5b037721ae610626eec379080367c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_sanderson, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 11 04:46:13 compute-0 podman[260119]: 2025-10-11 04:46:13.225429376 +0000 UTC m=+0.148804331 container attach f73f6c3aad8966a68dec24a8ab0ba28d11d5b037721ae610626eec379080367c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_sanderson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:46:13 compute-0 nova_compute[259400]: 2025-10-11 04:46:13.235 2 WARNING nova.compute.resource_tracker [None req-a1f2b475-2e76-45d6-8bbb-14ac83f8002f - - - - - -] No compute node record for compute-0.ctlplane.example.com:1f05a244-23b6-4149-9b5a-a525e5860d18: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 1f05a244-23b6-4149-9b5a-a525e5860d18 could not be found.
Oct 11 04:46:13 compute-0 nova_compute[259400]: 2025-10-11 04:46:13.258 2 INFO nova.compute.resource_tracker [None req-a1f2b475-2e76-45d6-8bbb-14ac83f8002f - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: 1f05a244-23b6-4149-9b5a-a525e5860d18
Oct 11 04:46:13 compute-0 nova_compute[259400]: 2025-10-11 04:46:13.357 2 DEBUG nova.compute.resource_tracker [None req-a1f2b475-2e76-45d6-8bbb-14ac83f8002f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 04:46:13 compute-0 nova_compute[259400]: 2025-10-11 04:46:13.358 2 DEBUG nova.compute.resource_tracker [None req-a1f2b475-2e76-45d6-8bbb-14ac83f8002f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 04:46:13 compute-0 ceph-mon[74243]: pgmap v713: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:46:13 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/3168230117' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:46:14 compute-0 vigilant_sanderson[260135]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:46:14 compute-0 vigilant_sanderson[260135]: --> relative data size: 1.0
Oct 11 04:46:14 compute-0 vigilant_sanderson[260135]: --> All data devices are unavailable
Oct 11 04:46:14 compute-0 systemd[1]: libpod-f73f6c3aad8966a68dec24a8ab0ba28d11d5b037721ae610626eec379080367c.scope: Deactivated successfully.
Oct 11 04:46:14 compute-0 systemd[1]: libpod-f73f6c3aad8966a68dec24a8ab0ba28d11d5b037721ae610626eec379080367c.scope: Consumed 1.057s CPU time.
Oct 11 04:46:14 compute-0 conmon[260135]: conmon f73f6c3aad8966a68dec <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f73f6c3aad8966a68dec24a8ab0ba28d11d5b037721ae610626eec379080367c.scope/container/memory.events
Oct 11 04:46:14 compute-0 podman[260119]: 2025-10-11 04:46:14.336917665 +0000 UTC m=+1.260292570 container died f73f6c3aad8966a68dec24a8ab0ba28d11d5b037721ae610626eec379080367c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_sanderson, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 11 04:46:14 compute-0 nova_compute[259400]: 2025-10-11 04:46:14.344 2 INFO nova.scheduler.client.report [None req-a1f2b475-2e76-45d6-8bbb-14ac83f8002f - - - - - -] [req-3dd1922b-e1e2-48e0-8e01-50ee9620c074] Created resource provider record via placement API for resource provider with UUID 1f05a244-23b6-4149-9b5a-a525e5860d18 and name compute-0.ctlplane.example.com.
Oct 11 04:46:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-38dfb4cf0e54cd4fa7ae81161b337c523521673760a74d68e60bff0e44edcb39-merged.mount: Deactivated successfully.
Oct 11 04:46:14 compute-0 podman[260119]: 2025-10-11 04:46:14.417538127 +0000 UTC m=+1.340913022 container remove f73f6c3aad8966a68dec24a8ab0ba28d11d5b037721ae610626eec379080367c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_sanderson, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:46:14 compute-0 systemd[1]: libpod-conmon-f73f6c3aad8966a68dec24a8ab0ba28d11d5b037721ae610626eec379080367c.scope: Deactivated successfully.
Oct 11 04:46:14 compute-0 sudo[259971]: pam_unix(sudo:session): session closed for user root
Oct 11 04:46:14 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v714: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:46:14 compute-0 sudo[260176]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:46:14 compute-0 sudo[260176]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:46:14 compute-0 sudo[260176]: pam_unix(sudo:session): session closed for user root
Oct 11 04:46:14 compute-0 sudo[260201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:46:14 compute-0 sudo[260201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:46:14 compute-0 sudo[260201]: pam_unix(sudo:session): session closed for user root
Oct 11 04:46:14 compute-0 sudo[260226]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:46:14 compute-0 sudo[260226]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:46:14 compute-0 sudo[260226]: pam_unix(sudo:session): session closed for user root
Oct 11 04:46:14 compute-0 nova_compute[259400]: 2025-10-11 04:46:14.744 2 DEBUG oslo_concurrency.processutils [None req-a1f2b475-2e76-45d6-8bbb-14ac83f8002f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:46:14 compute-0 sudo[260252]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -- lvm list --format json
Oct 11 04:46:14 compute-0 sudo[260252]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:46:15 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:46:15 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:46:15 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/908699787' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:46:15 compute-0 nova_compute[259400]: 2025-10-11 04:46:15.219 2 DEBUG oslo_concurrency.processutils [None req-a1f2b475-2e76-45d6-8bbb-14ac83f8002f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:46:15 compute-0 nova_compute[259400]: 2025-10-11 04:46:15.226 2 DEBUG nova.virt.libvirt.host [None req-a1f2b475-2e76-45d6-8bbb-14ac83f8002f - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Oct 11 04:46:15 compute-0 nova_compute[259400]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Oct 11 04:46:15 compute-0 nova_compute[259400]: 2025-10-11 04:46:15.226 2 INFO nova.virt.libvirt.host [None req-a1f2b475-2e76-45d6-8bbb-14ac83f8002f - - - - - -] kernel doesn't support AMD SEV
Oct 11 04:46:15 compute-0 nova_compute[259400]: 2025-10-11 04:46:15.228 2 DEBUG nova.compute.provider_tree [None req-a1f2b475-2e76-45d6-8bbb-14ac83f8002f - - - - - -] Updating inventory in ProviderTree for provider 1f05a244-23b6-4149-9b5a-a525e5860d18 with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 11 04:46:15 compute-0 nova_compute[259400]: 2025-10-11 04:46:15.229 2 DEBUG nova.virt.libvirt.driver [None req-a1f2b475-2e76-45d6-8bbb-14ac83f8002f - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 04:46:15 compute-0 podman[260337]: 2025-10-11 04:46:15.276715863 +0000 UTC m=+0.044306244 container create 4bae0e34fea94de7c4a7df06288055397428f7772150e593b3a9f2d0fc9654e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_maxwell, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:46:15 compute-0 nova_compute[259400]: 2025-10-11 04:46:15.310 2 DEBUG nova.scheduler.client.report [None req-a1f2b475-2e76-45d6-8bbb-14ac83f8002f - - - - - -] Updated inventory for provider 1f05a244-23b6-4149-9b5a-a525e5860d18 with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Oct 11 04:46:15 compute-0 nova_compute[259400]: 2025-10-11 04:46:15.310 2 DEBUG nova.compute.provider_tree [None req-a1f2b475-2e76-45d6-8bbb-14ac83f8002f - - - - - -] Updating resource provider 1f05a244-23b6-4149-9b5a-a525e5860d18 generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Oct 11 04:46:15 compute-0 nova_compute[259400]: 2025-10-11 04:46:15.311 2 DEBUG nova.compute.provider_tree [None req-a1f2b475-2e76-45d6-8bbb-14ac83f8002f - - - - - -] Updating inventory in ProviderTree for provider 1f05a244-23b6-4149-9b5a-a525e5860d18 with inventory: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
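The inventory pushed to placement above fixes the capacity the scheduler can hand out: usable capacity per resource class is (total - reserved) * allocation_ratio. A quick check with the figures from this inventory (an illustration only, not output captured on this host):

# Capacity implied by the inventory logged above:
#   capacity = (total - reserved) * allocation_ratio
inventory = {
    "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "DISK_GB":   {"total": 59,   "reserved": 0,   "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(rc, cap)
# MEMORY_MB 7168.0, VCPU 32.0, DISK_GB 53.1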
Oct 11 04:46:15 compute-0 systemd[1]: Started libpod-conmon-4bae0e34fea94de7c4a7df06288055397428f7772150e593b3a9f2d0fc9654e6.scope.
Oct 11 04:46:15 compute-0 podman[260337]: 2025-10-11 04:46:15.258001619 +0000 UTC m=+0.025591990 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:46:15 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:46:15 compute-0 podman[260337]: 2025-10-11 04:46:15.36464672 +0000 UTC m=+0.132237151 container init 4bae0e34fea94de7c4a7df06288055397428f7772150e593b3a9f2d0fc9654e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:46:15 compute-0 podman[260337]: 2025-10-11 04:46:15.376033659 +0000 UTC m=+0.143624040 container start 4bae0e34fea94de7c4a7df06288055397428f7772150e593b3a9f2d0fc9654e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_maxwell, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 04:46:15 compute-0 podman[260337]: 2025-10-11 04:46:15.381008235 +0000 UTC m=+0.148598586 container attach 4bae0e34fea94de7c4a7df06288055397428f7772150e593b3a9f2d0fc9654e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 11 04:46:15 compute-0 awesome_maxwell[260353]: 167 167
Oct 11 04:46:15 compute-0 systemd[1]: libpod-4bae0e34fea94de7c4a7df06288055397428f7772150e593b3a9f2d0fc9654e6.scope: Deactivated successfully.
Oct 11 04:46:15 compute-0 podman[260337]: 2025-10-11 04:46:15.382850292 +0000 UTC m=+0.150440653 container died 4bae0e34fea94de7c4a7df06288055397428f7772150e593b3a9f2d0fc9654e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_maxwell, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:46:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-6e7ed86d66997d6800f7391240053969c90fd5a99327751f6d01b660dc31e4f0-merged.mount: Deactivated successfully.
Oct 11 04:46:15 compute-0 podman[260337]: 2025-10-11 04:46:15.427248206 +0000 UTC m=+0.194838607 container remove 4bae0e34fea94de7c4a7df06288055397428f7772150e593b3a9f2d0fc9654e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_maxwell, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 11 04:46:15 compute-0 systemd[1]: libpod-conmon-4bae0e34fea94de7c4a7df06288055397428f7772150e593b3a9f2d0fc9654e6.scope: Deactivated successfully.
Oct 11 04:46:15 compute-0 nova_compute[259400]: 2025-10-11 04:46:15.463 2 DEBUG nova.compute.provider_tree [None req-a1f2b475-2e76-45d6-8bbb-14ac83f8002f - - - - - -] Updating resource provider 1f05a244-23b6-4149-9b5a-a525e5860d18 generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Oct 11 04:46:15 compute-0 nova_compute[259400]: 2025-10-11 04:46:15.487 2 DEBUG nova.compute.resource_tracker [None req-a1f2b475-2e76-45d6-8bbb-14ac83f8002f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 04:46:15 compute-0 nova_compute[259400]: 2025-10-11 04:46:15.487 2 DEBUG oslo_concurrency.lockutils [None req-a1f2b475-2e76-45d6-8bbb-14ac83f8002f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.324s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:46:15 compute-0 nova_compute[259400]: 2025-10-11 04:46:15.487 2 DEBUG nova.service [None req-a1f2b475-2e76-45d6-8bbb-14ac83f8002f - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Oct 11 04:46:15 compute-0 ceph-mon[74243]: pgmap v714: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:46:15 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/908699787' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:46:15 compute-0 nova_compute[259400]: 2025-10-11 04:46:15.623 2 DEBUG nova.service [None req-a1f2b475-2e76-45d6-8bbb-14ac83f8002f - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Oct 11 04:46:15 compute-0 nova_compute[259400]: 2025-10-11 04:46:15.624 2 DEBUG nova.servicegroup.drivers.db [None req-a1f2b475-2e76-45d6-8bbb-14ac83f8002f - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Oct 11 04:46:15 compute-0 podman[260376]: 2025-10-11 04:46:15.672317155 +0000 UTC m=+0.061561671 container create 31493b2e7d4c10eac74bbc12ebe1bef33e702d6f1fe7abc735790341b5d68419 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_shtern, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:46:15 compute-0 systemd[1]: Started libpod-conmon-31493b2e7d4c10eac74bbc12ebe1bef33e702d6f1fe7abc735790341b5d68419.scope.
Oct 11 04:46:15 compute-0 podman[260376]: 2025-10-11 04:46:15.652106193 +0000 UTC m=+0.041350749 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:46:15 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:46:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/124eaf07533051845a50f5cd55df59e1659ba79c2e7591b6a072caad14de790a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:46:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/124eaf07533051845a50f5cd55df59e1659ba79c2e7591b6a072caad14de790a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:46:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/124eaf07533051845a50f5cd55df59e1659ba79c2e7591b6a072caad14de790a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:46:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/124eaf07533051845a50f5cd55df59e1659ba79c2e7591b6a072caad14de790a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:46:15 compute-0 podman[260376]: 2025-10-11 04:46:15.77870256 +0000 UTC m=+0.167947166 container init 31493b2e7d4c10eac74bbc12ebe1bef33e702d6f1fe7abc735790341b5d68419 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_shtern, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True)
Oct 11 04:46:15 compute-0 podman[260376]: 2025-10-11 04:46:15.793713691 +0000 UTC m=+0.182958207 container start 31493b2e7d4c10eac74bbc12ebe1bef33e702d6f1fe7abc735790341b5d68419 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_shtern, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 11 04:46:15 compute-0 podman[260376]: 2025-10-11 04:46:15.798881522 +0000 UTC m=+0.188126118 container attach 31493b2e7d4c10eac74bbc12ebe1bef33e702d6f1fe7abc735790341b5d68419 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_shtern, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:46:16 compute-0 brave_shtern[260393]: {
Oct 11 04:46:16 compute-0 brave_shtern[260393]:     "0": [
Oct 11 04:46:16 compute-0 brave_shtern[260393]:         {
Oct 11 04:46:16 compute-0 brave_shtern[260393]:             "devices": [
Oct 11 04:46:16 compute-0 brave_shtern[260393]:                 "/dev/loop3"
Oct 11 04:46:16 compute-0 brave_shtern[260393]:             ],
Oct 11 04:46:16 compute-0 brave_shtern[260393]:             "lv_name": "ceph_lv0",
Oct 11 04:46:16 compute-0 brave_shtern[260393]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:46:16 compute-0 brave_shtern[260393]:             "lv_size": "21470642176",
Oct 11 04:46:16 compute-0 brave_shtern[260393]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29ed28f5-c2da-4c6f-bb64-dc7391248f4a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:46:16 compute-0 brave_shtern[260393]:             "lv_uuid": "SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf",
Oct 11 04:46:16 compute-0 brave_shtern[260393]:             "name": "ceph_lv0",
Oct 11 04:46:16 compute-0 brave_shtern[260393]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:46:16 compute-0 brave_shtern[260393]:             "tags": {
Oct 11 04:46:16 compute-0 brave_shtern[260393]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:46:16 compute-0 brave_shtern[260393]:                 "ceph.block_uuid": "SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf",
Oct 11 04:46:16 compute-0 brave_shtern[260393]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:46:16 compute-0 brave_shtern[260393]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:46:16 compute-0 brave_shtern[260393]:                 "ceph.cluster_name": "ceph",
Oct 11 04:46:16 compute-0 brave_shtern[260393]:                 "ceph.crush_device_class": "",
Oct 11 04:46:16 compute-0 brave_shtern[260393]:                 "ceph.encrypted": "0",
Oct 11 04:46:16 compute-0 brave_shtern[260393]:                 "ceph.osd_fsid": "29ed28f5-c2da-4c6f-bb64-dc7391248f4a",
Oct 11 04:46:16 compute-0 brave_shtern[260393]:                 "ceph.osd_id": "0",
Oct 11 04:46:16 compute-0 brave_shtern[260393]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:46:16 compute-0 brave_shtern[260393]:                 "ceph.type": "block",
Oct 11 04:46:16 compute-0 brave_shtern[260393]:                 "ceph.vdo": "0"
Oct 11 04:46:16 compute-0 brave_shtern[260393]:             },
Oct 11 04:46:16 compute-0 brave_shtern[260393]:             "type": "block",
Oct 11 04:46:16 compute-0 brave_shtern[260393]:             "vg_name": "ceph_vg0"
Oct 11 04:46:16 compute-0 brave_shtern[260393]:         }
Oct 11 04:46:16 compute-0 brave_shtern[260393]:     ],
Oct 11 04:46:16 compute-0 brave_shtern[260393]:     "1": [
Oct 11 04:46:16 compute-0 brave_shtern[260393]:         {
Oct 11 04:46:16 compute-0 brave_shtern[260393]:             "devices": [
Oct 11 04:46:16 compute-0 brave_shtern[260393]:                 "/dev/loop4"
Oct 11 04:46:16 compute-0 brave_shtern[260393]:             ],
Oct 11 04:46:16 compute-0 brave_shtern[260393]:             "lv_name": "ceph_lv1",
Oct 11 04:46:16 compute-0 brave_shtern[260393]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:46:16 compute-0 brave_shtern[260393]:             "lv_size": "21470642176",
Oct 11 04:46:16 compute-0 brave_shtern[260393]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=235849fc-4683-43e5-9b6a-a0d6f8d1cee8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:46:16 compute-0 brave_shtern[260393]:             "lv_uuid": "N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol",
Oct 11 04:46:16 compute-0 brave_shtern[260393]:             "name": "ceph_lv1",
Oct 11 04:46:16 compute-0 brave_shtern[260393]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:46:16 compute-0 brave_shtern[260393]:             "tags": {
Oct 11 04:46:16 compute-0 brave_shtern[260393]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:46:16 compute-0 brave_shtern[260393]:                 "ceph.block_uuid": "N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol",
Oct 11 04:46:16 compute-0 brave_shtern[260393]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:46:16 compute-0 brave_shtern[260393]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:46:16 compute-0 brave_shtern[260393]:                 "ceph.cluster_name": "ceph",
Oct 11 04:46:16 compute-0 brave_shtern[260393]:                 "ceph.crush_device_class": "",
Oct 11 04:46:16 compute-0 brave_shtern[260393]:                 "ceph.encrypted": "0",
Oct 11 04:46:16 compute-0 brave_shtern[260393]:                 "ceph.osd_fsid": "235849fc-4683-43e5-9b6a-a0d6f8d1cee8",
Oct 11 04:46:16 compute-0 brave_shtern[260393]:                 "ceph.osd_id": "1",
Oct 11 04:46:16 compute-0 brave_shtern[260393]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:46:16 compute-0 brave_shtern[260393]:                 "ceph.type": "block",
Oct 11 04:46:16 compute-0 brave_shtern[260393]:                 "ceph.vdo": "0"
Oct 11 04:46:16 compute-0 brave_shtern[260393]:             },
Oct 11 04:46:16 compute-0 brave_shtern[260393]:             "type": "block",
Oct 11 04:46:16 compute-0 brave_shtern[260393]:             "vg_name": "ceph_vg1"
Oct 11 04:46:16 compute-0 brave_shtern[260393]:         }
Oct 11 04:46:16 compute-0 brave_shtern[260393]:     ],
Oct 11 04:46:16 compute-0 brave_shtern[260393]:     "2": [
Oct 11 04:46:16 compute-0 brave_shtern[260393]:         {
Oct 11 04:46:16 compute-0 brave_shtern[260393]:             "devices": [
Oct 11 04:46:16 compute-0 brave_shtern[260393]:                 "/dev/loop5"
Oct 11 04:46:16 compute-0 brave_shtern[260393]:             ],
Oct 11 04:46:16 compute-0 brave_shtern[260393]:             "lv_name": "ceph_lv2",
Oct 11 04:46:16 compute-0 brave_shtern[260393]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:46:16 compute-0 brave_shtern[260393]:             "lv_size": "21470642176",
Oct 11 04:46:16 compute-0 brave_shtern[260393]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=30486846-2cc7-4787-8e36-0fb42ef328c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:46:16 compute-0 brave_shtern[260393]:             "lv_uuid": "HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a",
Oct 11 04:46:16 compute-0 brave_shtern[260393]:             "name": "ceph_lv2",
Oct 11 04:46:16 compute-0 brave_shtern[260393]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:46:16 compute-0 brave_shtern[260393]:             "tags": {
Oct 11 04:46:16 compute-0 brave_shtern[260393]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:46:16 compute-0 brave_shtern[260393]:                 "ceph.block_uuid": "HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a",
Oct 11 04:46:16 compute-0 brave_shtern[260393]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:46:16 compute-0 brave_shtern[260393]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:46:16 compute-0 brave_shtern[260393]:                 "ceph.cluster_name": "ceph",
Oct 11 04:46:16 compute-0 brave_shtern[260393]:                 "ceph.crush_device_class": "",
Oct 11 04:46:16 compute-0 brave_shtern[260393]:                 "ceph.encrypted": "0",
Oct 11 04:46:16 compute-0 brave_shtern[260393]:                 "ceph.osd_fsid": "30486846-2cc7-4787-8e36-0fb42ef328c5",
Oct 11 04:46:16 compute-0 brave_shtern[260393]:                 "ceph.osd_id": "2",
Oct 11 04:46:16 compute-0 brave_shtern[260393]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:46:16 compute-0 brave_shtern[260393]:                 "ceph.type": "block",
Oct 11 04:46:16 compute-0 brave_shtern[260393]:                 "ceph.vdo": "0"
Oct 11 04:46:16 compute-0 brave_shtern[260393]:             },
Oct 11 04:46:16 compute-0 brave_shtern[260393]:             "type": "block",
Oct 11 04:46:16 compute-0 brave_shtern[260393]:             "vg_name": "ceph_vg2"
Oct 11 04:46:16 compute-0 brave_shtern[260393]:         }
Oct 11 04:46:16 compute-0 brave_shtern[260393]:     ]
Oct 11 04:46:16 compute-0 brave_shtern[260393]: }
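The JSON printed by brave_shtern is the "ceph-volume lvm list --format json" report that cephadm requested via the sudo command logged at 04:46:14. A minimal sketch of reducing that structure to an OSD-id to backing-device summary, assuming the report has been saved to lvm_list.json (the filename and the printed format are illustrative, not part of cephadm):

# Sketch: summarise the ceph-volume "lvm list --format json" report shown above.
import json

with open("lvm_list.json") as fh:          # illustrative filename, not from the log
    report = json.load(fh)                 # {"0": [ {lv dict}, ... ], "1": [...], ...}

for osd_id, lvs in sorted(report.items(), key=lambda kv: int(kv[0])):
    for lv in lvs:
        if lv.get("type") == "block":      # block LVs carry the OSD data device
            devices = ",".join(lv["devices"])
            print(f"osd.{osd_id}: {lv['lv_path']} on {devices}")
# e.g. osd.0: /dev/ceph_vg0/ceph_lv0 on /dev/loop3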
Oct 11 04:46:16 compute-0 systemd[1]: libpod-31493b2e7d4c10eac74bbc12ebe1bef33e702d6f1fe7abc735790341b5d68419.scope: Deactivated successfully.
Oct 11 04:46:16 compute-0 podman[260376]: 2025-10-11 04:46:16.504074487 +0000 UTC m=+0.893319043 container died 31493b2e7d4c10eac74bbc12ebe1bef33e702d6f1fe7abc735790341b5d68419 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_shtern, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 11 04:46:16 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v715: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:46:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-124eaf07533051845a50f5cd55df59e1659ba79c2e7591b6a072caad14de790a-merged.mount: Deactivated successfully.
Oct 11 04:46:16 compute-0 podman[260376]: 2025-10-11 04:46:16.587877 +0000 UTC m=+0.977121526 container remove 31493b2e7d4c10eac74bbc12ebe1bef33e702d6f1fe7abc735790341b5d68419 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_shtern, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 11 04:46:16 compute-0 systemd[1]: libpod-conmon-31493b2e7d4c10eac74bbc12ebe1bef33e702d6f1fe7abc735790341b5d68419.scope: Deactivated successfully.
Oct 11 04:46:16 compute-0 sudo[260252]: pam_unix(sudo:session): session closed for user root
Oct 11 04:46:16 compute-0 sudo[260415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:46:16 compute-0 sudo[260415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:46:16 compute-0 sudo[260415]: pam_unix(sudo:session): session closed for user root
Oct 11 04:46:16 compute-0 sudo[260440]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:46:16 compute-0 sudo[260440]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:46:16 compute-0 sudo[260440]: pam_unix(sudo:session): session closed for user root
Oct 11 04:46:16 compute-0 sudo[260465]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:46:16 compute-0 sudo[260465]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:46:16 compute-0 sudo[260465]: pam_unix(sudo:session): session closed for user root
Oct 11 04:46:16 compute-0 sudo[260490]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -- raw list --format json
Oct 11 04:46:16 compute-0 sudo[260490]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:46:17 compute-0 podman[260558]: 2025-10-11 04:46:17.438623534 +0000 UTC m=+0.059248582 container create b9817a64df93ddbb3bfb3eaa1fb6a6e32d632cd95d746a4fcb7cfbe1649dbb5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 11 04:46:17 compute-0 systemd[1]: Started libpod-conmon-b9817a64df93ddbb3bfb3eaa1fb6a6e32d632cd95d746a4fcb7cfbe1649dbb5f.scope.
Oct 11 04:46:17 compute-0 podman[260558]: 2025-10-11 04:46:17.41044747 +0000 UTC m=+0.031072578 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:46:17 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:46:17 compute-0 podman[260558]: 2025-10-11 04:46:17.530725467 +0000 UTC m=+0.151350535 container init b9817a64df93ddbb3bfb3eaa1fb6a6e32d632cd95d746a4fcb7cfbe1649dbb5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_leavitt, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Oct 11 04:46:17 compute-0 podman[260558]: 2025-10-11 04:46:17.542586278 +0000 UTC m=+0.163211356 container start b9817a64df93ddbb3bfb3eaa1fb6a6e32d632cd95d746a4fcb7cfbe1649dbb5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_leavitt, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 11 04:46:17 compute-0 podman[260558]: 2025-10-11 04:46:17.546389744 +0000 UTC m=+0.167014822 container attach b9817a64df93ddbb3bfb3eaa1fb6a6e32d632cd95d746a4fcb7cfbe1649dbb5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 11 04:46:17 compute-0 busy_leavitt[260575]: 167 167
Oct 11 04:46:17 compute-0 systemd[1]: libpod-b9817a64df93ddbb3bfb3eaa1fb6a6e32d632cd95d746a4fcb7cfbe1649dbb5f.scope: Deactivated successfully.
Oct 11 04:46:17 compute-0 podman[260558]: 2025-10-11 04:46:17.54980774 +0000 UTC m=+0.170432818 container died b9817a64df93ddbb3bfb3eaa1fb6a6e32d632cd95d746a4fcb7cfbe1649dbb5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_leavitt, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 11 04:46:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-776c282b42c4373c6d7b99ed124af6179f16b3fce8151fe699c052903528b5f4-merged.mount: Deactivated successfully.
Oct 11 04:46:17 compute-0 podman[260558]: 2025-10-11 04:46:17.595753604 +0000 UTC m=+0.216378682 container remove b9817a64df93ddbb3bfb3eaa1fb6a6e32d632cd95d746a4fcb7cfbe1649dbb5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_leavitt, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 11 04:46:17 compute-0 ceph-mon[74243]: pgmap v715: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:46:17 compute-0 systemd[1]: libpod-conmon-b9817a64df93ddbb3bfb3eaa1fb6a6e32d632cd95d746a4fcb7cfbe1649dbb5f.scope: Deactivated successfully.
Oct 11 04:46:17 compute-0 podman[260598]: 2025-10-11 04:46:17.791998706 +0000 UTC m=+0.035836689 container create 4a78537c9a697e7b4f8d6789408218b7e26a46f62fb8129288df81e4ea22dbc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_mestorf, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:46:17 compute-0 systemd[1]: Started libpod-conmon-4a78537c9a697e7b4f8d6789408218b7e26a46f62fb8129288df81e4ea22dbc7.scope.
Oct 11 04:46:17 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:46:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1058d13b6c943b0a44f10464e96196fd3ba4d68efc85a4018e9ff9dcb669a1f8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:46:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1058d13b6c943b0a44f10464e96196fd3ba4d68efc85a4018e9ff9dcb669a1f8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:46:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1058d13b6c943b0a44f10464e96196fd3ba4d68efc85a4018e9ff9dcb669a1f8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:46:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1058d13b6c943b0a44f10464e96196fd3ba4d68efc85a4018e9ff9dcb669a1f8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:46:17 compute-0 podman[260598]: 2025-10-11 04:46:17.778073283 +0000 UTC m=+0.021911296 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:46:17 compute-0 podman[260598]: 2025-10-11 04:46:17.885197237 +0000 UTC m=+0.129035220 container init 4a78537c9a697e7b4f8d6789408218b7e26a46f62fb8129288df81e4ea22dbc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_mestorf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:46:17 compute-0 podman[260598]: 2025-10-11 04:46:17.899092199 +0000 UTC m=+0.142930222 container start 4a78537c9a697e7b4f8d6789408218b7e26a46f62fb8129288df81e4ea22dbc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_mestorf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 11 04:46:17 compute-0 podman[260598]: 2025-10-11 04:46:17.903172403 +0000 UTC m=+0.147010436 container attach 4a78537c9a697e7b4f8d6789408218b7e26a46f62fb8129288df81e4ea22dbc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_mestorf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 04:46:18 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v716: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:46:18 compute-0 zen_mestorf[260614]: {
Oct 11 04:46:18 compute-0 zen_mestorf[260614]:     "235849fc-4683-43e5-9b6a-a0d6f8d1cee8": {
Oct 11 04:46:18 compute-0 zen_mestorf[260614]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:46:18 compute-0 zen_mestorf[260614]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:46:18 compute-0 zen_mestorf[260614]:         "osd_id": 1,
Oct 11 04:46:18 compute-0 zen_mestorf[260614]:         "osd_uuid": "235849fc-4683-43e5-9b6a-a0d6f8d1cee8",
Oct 11 04:46:18 compute-0 zen_mestorf[260614]:         "type": "bluestore"
Oct 11 04:46:18 compute-0 zen_mestorf[260614]:     },
Oct 11 04:46:18 compute-0 zen_mestorf[260614]:     "29ed28f5-c2da-4c6f-bb64-dc7391248f4a": {
Oct 11 04:46:18 compute-0 zen_mestorf[260614]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:46:18 compute-0 zen_mestorf[260614]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:46:18 compute-0 zen_mestorf[260614]:         "osd_id": 0,
Oct 11 04:46:18 compute-0 zen_mestorf[260614]:         "osd_uuid": "29ed28f5-c2da-4c6f-bb64-dc7391248f4a",
Oct 11 04:46:18 compute-0 zen_mestorf[260614]:         "type": "bluestore"
Oct 11 04:46:18 compute-0 zen_mestorf[260614]:     },
Oct 11 04:46:18 compute-0 zen_mestorf[260614]:     "30486846-2cc7-4787-8e36-0fb42ef328c5": {
Oct 11 04:46:18 compute-0 zen_mestorf[260614]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:46:18 compute-0 zen_mestorf[260614]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:46:18 compute-0 zen_mestorf[260614]:         "osd_id": 2,
Oct 11 04:46:18 compute-0 zen_mestorf[260614]:         "osd_uuid": "30486846-2cc7-4787-8e36-0fb42ef328c5",
Oct 11 04:46:18 compute-0 zen_mestorf[260614]:         "type": "bluestore"
Oct 11 04:46:18 compute-0 zen_mestorf[260614]:     }
Oct 11 04:46:18 compute-0 zen_mestorf[260614]: }
Oct 11 04:46:18 compute-0 systemd[1]: libpod-4a78537c9a697e7b4f8d6789408218b7e26a46f62fb8129288df81e4ea22dbc7.scope: Deactivated successfully.
Oct 11 04:46:18 compute-0 systemd[1]: libpod-4a78537c9a697e7b4f8d6789408218b7e26a46f62fb8129288df81e4ea22dbc7.scope: Consumed 1.005s CPU time.
Oct 11 04:46:18 compute-0 podman[260598]: 2025-10-11 04:46:18.901927205 +0000 UTC m=+1.145765228 container died 4a78537c9a697e7b4f8d6789408218b7e26a46f62fb8129288df81e4ea22dbc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_mestorf, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef)
Oct 11 04:46:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-1058d13b6c943b0a44f10464e96196fd3ba4d68efc85a4018e9ff9dcb669a1f8-merged.mount: Deactivated successfully.
Oct 11 04:46:18 compute-0 podman[260598]: 2025-10-11 04:46:18.966681765 +0000 UTC m=+1.210519748 container remove 4a78537c9a697e7b4f8d6789408218b7e26a46f62fb8129288df81e4ea22dbc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_mestorf, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 11 04:46:18 compute-0 systemd[1]: libpod-conmon-4a78537c9a697e7b4f8d6789408218b7e26a46f62fb8129288df81e4ea22dbc7.scope: Deactivated successfully.
Oct 11 04:46:18 compute-0 sudo[260490]: pam_unix(sudo:session): session closed for user root
Oct 11 04:46:19 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:46:19 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:46:19 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:46:19 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:46:19 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev b5401bf8-88a8-4905-bbd6-ab0b4a965b69 does not exist
Oct 11 04:46:19 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev 80a84903-66e5-4108-987c-2bee95f87f74 does not exist
Oct 11 04:46:19 compute-0 sudo[260663]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:46:19 compute-0 sudo[260663]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:46:19 compute-0 sudo[260663]: pam_unix(sudo:session): session closed for user root
Oct 11 04:46:19 compute-0 sudo[260688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 04:46:19 compute-0 sudo[260688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:46:19 compute-0 sudo[260688]: pam_unix(sudo:session): session closed for user root
Oct 11 04:46:19 compute-0 ceph-mon[74243]: pgmap v716: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:46:19 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:46:19 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:46:20 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:46:20 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v717: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:46:21 compute-0 ceph-mon[74243]: pgmap v717: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:46:22 compute-0 podman[260713]: 2025-10-11 04:46:22.446405421 +0000 UTC m=+0.093552141 container health_status 7d738271425699243ade5f883e5c9759d51b96fd0ce562173990a66d2fbaffdf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ovn_metadata_agent)
Oct 11 04:46:22 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v718: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:46:23 compute-0 ceph-mon[74243]: pgmap v718: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:46:24 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v719: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:46:25 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:46:25 compute-0 ceph-mon[74243]: pgmap v719: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:46:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:46:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:46:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:46:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:46:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:46:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:46:26 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v720: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:46:27 compute-0 ceph-mon[74243]: pgmap v720: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:46:28 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v721: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:46:28 compute-0 ceph-mon[74243]: pgmap v721: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:46:30 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:46:30 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v722: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:46:31 compute-0 ceph-mon[74243]: pgmap v722: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:46:32 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v723: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:46:33 compute-0 ceph-mon[74243]: pgmap v723: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:46:34 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v724: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:46:35 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:46:35 compute-0 ceph-mon[74243]: pgmap v724: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:46:36 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v725: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:46:37 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:46:37 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4276416805' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:46:37 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:46:37 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4276416805' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:46:37 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:46:37 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1965022925' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:46:37 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:46:37 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1965022925' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:46:37 compute-0 ceph-mon[74243]: pgmap v725: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:46:37 compute-0 ceph-mon[74243]: from='client.? 192.168.122.10:0/4276416805' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:46:37 compute-0 ceph-mon[74243]: from='client.? 192.168.122.10:0/4276416805' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:46:37 compute-0 ceph-mon[74243]: from='client.? 192.168.122.10:0/1965022925' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:46:37 compute-0 ceph-mon[74243]: from='client.? 192.168.122.10:0/1965022925' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:46:37 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:46:37 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2892667402' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:46:37 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:46:37 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2892667402' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:46:38 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v726: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:46:38 compute-0 ceph-mon[74243]: from='client.? 192.168.122.10:0/2892667402' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:46:38 compute-0 ceph-mon[74243]: from='client.? 192.168.122.10:0/2892667402' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:46:39 compute-0 ceph-mon[74243]: pgmap v726: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:46:40 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:46:40 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v727: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:46:41 compute-0 podman[260734]: 2025-10-11 04:46:41.410100813 +0000 UTC m=+0.064305300 container health_status 4f5a84ad32cb899a70b226fbd9c8f419e9440a5ac99b188805a8bf8e9253a2d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid)
Oct 11 04:46:41 compute-0 podman[260733]: 2025-10-11 04:46:41.46089396 +0000 UTC m=+0.111523116 container health_status 086abfbe8dbe9e4742c5d0e8540a7653c1c0a5c00ecda3b6e948040a718938cb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 04:46:41 compute-0 ceph-mon[74243]: pgmap v727: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:46:42 compute-0 podman[260779]: 2025-10-11 04:46:42.420461491 +0000 UTC m=+0.069666256 container health_status 981a12d55b51d26e636fa4bc8b08d383168dc6afe664f12473554b2b0c589967 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct 11 04:46:42 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v728: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:46:43 compute-0 ceph-mon[74243]: pgmap v728: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:46:44 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v729: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:46:44 compute-0 ceph-mon[74243]: pgmap v729: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:46:45 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:46:46 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v730: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:46:47 compute-0 ceph-mon[74243]: pgmap v730: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:46:48 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v731: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:46:49 compute-0 ceph-mon[74243]: pgmap v731: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:46:50 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:46:50 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v732: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:46:50 compute-0 ceph-osd[87458]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 04:46:50 compute-0 ceph-osd[87458]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 5743 writes, 24K keys, 5743 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5743 writes, 944 syncs, 6.08 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 212 writes, 318 keys, 212 commit groups, 1.0 writes per commit group, ingest: 0.11 MB, 0.00 MB/s
                                           Interval WAL: 212 writes, 106 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55660e36f4b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55660e36f4b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55660e36f4b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55660e36f4b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55660e36f4b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55660e36f4b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55660e36f4b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55660e36f350#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55660e36f350#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55660e36f350#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55660e36f4b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55660e36f4b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 11 04:46:51 compute-0 ceph-mon[74243]: pgmap v732: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:46:51 compute-0 nova_compute[259400]: 2025-10-11 04:46:51.626 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:46:51 compute-0 nova_compute[259400]: 2025-10-11 04:46:51.678 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:46:52 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v733: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:46:53 compute-0 podman[260801]: 2025-10-11 04:46:53.396619692 +0000 UTC m=+0.052996664 container health_status 7d738271425699243ade5f883e5c9759d51b96fd0ce562173990a66d2fbaffdf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:46:53 compute-0 ceph-mon[74243]: pgmap v733: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:46:54 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v734: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:46:55 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:46:55 compute-0 ceph-mon[74243]: pgmap v734: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:46:55 compute-0 ceph-osd[88467]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 04:46:55 compute-0 ceph-osd[88467]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 7052 writes, 29K keys, 7052 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 7052 writes, 1301 syncs, 5.42 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 180 writes, 273 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
                                           Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5644640971f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5644640971f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5644640971f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5644640971f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5644640971f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5644640971f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5644640971f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564464097090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564464097090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564464097090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5644640971f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5644640971f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 11 04:46:56 compute-0 ceph-mgr[74542]: [balancer INFO root] Optimize plan auto_2025-10-11_04:46:56
Oct 11 04:46:56 compute-0 ceph-mgr[74542]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:46:56 compute-0 ceph-mgr[74542]: [balancer INFO root] do_upmap
Oct 11 04:46:56 compute-0 ceph-mgr[74542]: [balancer INFO root] pools ['.rgw.root', 'volumes', '.mgr', 'backups', 'vms', 'default.rgw.log', 'default.rgw.control', 'images', 'default.rgw.meta', 'cephfs.cephfs.data', 'cephfs.cephfs.meta']
Oct 11 04:46:56 compute-0 ceph-mgr[74542]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:46:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:46:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:46:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:46:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:46:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:46:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:46:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:46:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:46:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:46:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:46:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:46:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:46:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:46:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:46:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:46:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:46:56 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v735: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:46:57 compute-0 ceph-mon[74243]: pgmap v735: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:46:58 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v736: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:46:59 compute-0 ceph-mon[74243]: pgmap v736: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:47:00 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:47:00 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v737: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:47:01 compute-0 ceph-osd[89565]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 04:47:01 compute-0 ceph-osd[89565]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 5556 writes, 23K keys, 5556 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5556 writes, 855 syncs, 6.50 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
                                           Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56328d19f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56328d19f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56328d19f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56328d19f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56328d19f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56328d19f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56328d19f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56328d19f090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56328d19f090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56328d19f090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.009       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.009       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.009       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56328d19f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56328d19f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 11 04:47:01 compute-0 ceph-mon[74243]: pgmap v737: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:47:02 compute-0 ceph-mgr[74542]: [devicehealth INFO root] Check health
Oct 11 04:47:02 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v738: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:47:03 compute-0 ceph-mon[74243]: pgmap v738: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:47:04 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v739: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:47:04 compute-0 ceph-mon[74243]: pgmap v739: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:47:05 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:47:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:47:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:47:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:47:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:47:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:47:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:47:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:47:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:47:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:47:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:47:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:47:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:47:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 11 04:47:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:47:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:47:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:47:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:47:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:47:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:47:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:47:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:47:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:47:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 04:47:06 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v740: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:47:06 compute-0 sshd-session[260820]: Invalid user ubnt from 221.159.21.170 port 33636
Oct 11 04:47:07 compute-0 sshd[188896]: Timeout before authentication for connection from 221.159.21.170 to 38.102.83.148, pid = 250684
Oct 11 04:47:07 compute-0 sshd-session[260820]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 04:47:07 compute-0 sshd-session[260820]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=221.159.21.170
Oct 11 04:47:07 compute-0 ceph-mon[74243]: pgmap v740: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:47:08 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v741: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:47:09 compute-0 ceph-mon[74243]: pgmap v741: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:47:09 compute-0 sshd-session[260820]: Failed password for invalid user ubnt from 221.159.21.170 port 33636 ssh2
Oct 11 04:47:10 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:47:10 compute-0 sshd-session[260820]: Connection closed by invalid user ubnt 221.159.21.170 port 33636 [preauth]
Oct 11 04:47:10 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v742: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:47:10 compute-0 sshd[188896]: drop connection #0 from [221.159.21.170]:34574 on [38.102.83.148]:22 penalty: failed authentication
Oct 11 04:47:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:47:11.010 161813 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:47:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:47:11.011 161813 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:47:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:47:11.011 161813 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:47:11 compute-0 sshd[188896]: drop connection #0 from [221.159.21.170]:34666 on [38.102.83.148]:22 penalty: failed authentication
Oct 11 04:47:11 compute-0 nova_compute[259400]: 2025-10-11 04:47:11.199 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:47:11 compute-0 nova_compute[259400]: 2025-10-11 04:47:11.199 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:47:11 compute-0 nova_compute[259400]: 2025-10-11 04:47:11.200 2 DEBUG nova.compute.manager [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 04:47:11 compute-0 nova_compute[259400]: 2025-10-11 04:47:11.200 2 DEBUG nova.compute.manager [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 11 04:47:11 compute-0 nova_compute[259400]: 2025-10-11 04:47:11.226 2 DEBUG nova.compute.manager [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 11 04:47:11 compute-0 nova_compute[259400]: 2025-10-11 04:47:11.227 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:47:11 compute-0 nova_compute[259400]: 2025-10-11 04:47:11.227 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:47:11 compute-0 nova_compute[259400]: 2025-10-11 04:47:11.228 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:47:11 compute-0 nova_compute[259400]: 2025-10-11 04:47:11.228 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:47:11 compute-0 nova_compute[259400]: 2025-10-11 04:47:11.229 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:47:11 compute-0 nova_compute[259400]: 2025-10-11 04:47:11.229 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:47:11 compute-0 nova_compute[259400]: 2025-10-11 04:47:11.229 2 DEBUG nova.compute.manager [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 04:47:11 compute-0 nova_compute[259400]: 2025-10-11 04:47:11.229 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:47:11 compute-0 nova_compute[259400]: 2025-10-11 04:47:11.258 2 DEBUG oslo_concurrency.lockutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:47:11 compute-0 nova_compute[259400]: 2025-10-11 04:47:11.258 2 DEBUG oslo_concurrency.lockutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:47:11 compute-0 nova_compute[259400]: 2025-10-11 04:47:11.258 2 DEBUG oslo_concurrency.lockutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:47:11 compute-0 nova_compute[259400]: 2025-10-11 04:47:11.258 2 DEBUG nova.compute.resource_tracker [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 04:47:11 compute-0 nova_compute[259400]: 2025-10-11 04:47:11.259 2 DEBUG oslo_concurrency.processutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:47:11 compute-0 sshd[188896]: drop connection #0 from [221.159.21.170]:34748 on [38.102.83.148]:22 penalty: failed authentication
Oct 11 04:47:11 compute-0 ceph-mon[74243]: pgmap v742: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:47:11 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:47:11 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2644994596' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:47:11 compute-0 nova_compute[259400]: 2025-10-11 04:47:11.731 2 DEBUG oslo_concurrency.processutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:47:11 compute-0 nova_compute[259400]: 2025-10-11 04:47:11.967 2 WARNING nova.virt.libvirt.driver [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 04:47:11 compute-0 nova_compute[259400]: 2025-10-11 04:47:11.970 2 DEBUG nova.compute.resource_tracker [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5185MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 04:47:11 compute-0 nova_compute[259400]: 2025-10-11 04:47:11.970 2 DEBUG oslo_concurrency.lockutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:47:11 compute-0 nova_compute[259400]: 2025-10-11 04:47:11.971 2 DEBUG oslo_concurrency.lockutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:47:12 compute-0 sshd[188896]: drop connection #0 from [221.159.21.170]:34834 on [38.102.83.148]:22 penalty: failed authentication
Oct 11 04:47:12 compute-0 nova_compute[259400]: 2025-10-11 04:47:12.108 2 DEBUG nova.compute.resource_tracker [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 04:47:12 compute-0 nova_compute[259400]: 2025-10-11 04:47:12.109 2 DEBUG nova.compute.resource_tracker [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 04:47:12 compute-0 nova_compute[259400]: 2025-10-11 04:47:12.150 2 DEBUG oslo_concurrency.processutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:47:12 compute-0 podman[260865]: 2025-10-11 04:47:12.497025253 +0000 UTC m=+0.138990942 container health_status 4f5a84ad32cb899a70b226fbd9c8f419e9440a5ac99b188805a8bf8e9253a2d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 11 04:47:12 compute-0 podman[260864]: 2025-10-11 04:47:12.526199292 +0000 UTC m=+0.173660210 container health_status 086abfbe8dbe9e4742c5d0e8540a7653c1c0a5c00ecda3b6e948040a718938cb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller)
Oct 11 04:47:12 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v743: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:47:12 compute-0 sshd[188896]: drop connection #0 from [221.159.21.170]:34950 on [38.102.83.148]:22 penalty: failed authentication
Oct 11 04:47:12 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:47:12 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2252281356' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:47:12 compute-0 nova_compute[259400]: 2025-10-11 04:47:12.593 2 DEBUG oslo_concurrency.processutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:47:12 compute-0 podman[260906]: 2025-10-11 04:47:12.596610766 +0000 UTC m=+0.078818738 container health_status 981a12d55b51d26e636fa4bc8b08d383168dc6afe664f12473554b2b0c589967 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Oct 11 04:47:12 compute-0 nova_compute[259400]: 2025-10-11 04:47:12.602 2 DEBUG nova.compute.provider_tree [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Inventory has not changed in ProviderTree for provider: 1f05a244-23b6-4149-9b5a-a525e5860d18 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:47:12 compute-0 nova_compute[259400]: 2025-10-11 04:47:12.626 2 DEBUG nova.scheduler.client.report [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Inventory has not changed for provider 1f05a244-23b6-4149-9b5a-a525e5860d18 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:47:12 compute-0 nova_compute[259400]: 2025-10-11 04:47:12.627 2 DEBUG nova.compute.resource_tracker [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 04:47:12 compute-0 nova_compute[259400]: 2025-10-11 04:47:12.627 2 DEBUG oslo_concurrency.lockutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.656s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:47:12 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/2644994596' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:47:12 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/2252281356' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:47:13 compute-0 sshd[188896]: drop connection #0 from [221.159.21.170]:35034 on [38.102.83.148]:22 penalty: failed authentication
Oct 11 04:47:13 compute-0 sshd[188896]: drop connection #0 from [221.159.21.170]:35124 on [38.102.83.148]:22 penalty: failed authentication
Oct 11 04:47:13 compute-0 ceph-mon[74243]: pgmap v743: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:47:14 compute-0 sshd[188896]: drop connection #0 from [221.159.21.170]:35200 on [38.102.83.148]:22 penalty: failed authentication
Oct 11 04:47:14 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v744: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:47:14 compute-0 sshd[188896]: drop connection #0 from [221.159.21.170]:35296 on [38.102.83.148]:22 penalty: failed authentication
Oct 11 04:47:15 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:47:15 compute-0 sshd[188896]: drop connection #0 from [221.159.21.170]:35438 on [38.102.83.148]:22 penalty: failed authentication
Oct 11 04:47:15 compute-0 ceph-mon[74243]: pgmap v744: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:47:15 compute-0 sshd[188896]: drop connection #0 from [221.159.21.170]:35520 on [38.102.83.148]:22 penalty: failed authentication
Oct 11 04:47:16 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v745: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:47:16 compute-0 sshd[188896]: drop connection #0 from [221.159.21.170]:35570 on [38.102.83.148]:22 penalty: failed authentication
Oct 11 04:47:17 compute-0 sshd[188896]: drop connection #0 from [221.159.21.170]:35734 on [38.102.83.148]:22 penalty: failed authentication
Oct 11 04:47:17 compute-0 ceph-mon[74243]: pgmap v745: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:47:18 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v746: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:47:18 compute-0 sshd[188896]: drop connection #0 from [221.159.21.170]:35838 on [38.102.83.148]:22 penalty: failed authentication
Oct 11 04:47:19 compute-0 sudo[260929]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:47:19 compute-0 sudo[260929]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:47:19 compute-0 sudo[260929]: pam_unix(sudo:session): session closed for user root
Oct 11 04:47:19 compute-0 sudo[260954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:47:19 compute-0 sudo[260954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:47:19 compute-0 sudo[260954]: pam_unix(sudo:session): session closed for user root
Oct 11 04:47:19 compute-0 sudo[260979]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:47:19 compute-0 sudo[260979]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:47:19 compute-0 sudo[260979]: pam_unix(sudo:session): session closed for user root
Oct 11 04:47:19 compute-0 sudo[261004]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Oct 11 04:47:19 compute-0 sudo[261004]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:47:19 compute-0 sshd[188896]: drop connection #0 from [221.159.21.170]:36052 on [38.102.83.148]:22 penalty: failed authentication
Oct 11 04:47:19 compute-0 ceph-mon[74243]: pgmap v746: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:47:19 compute-0 sudo[261004]: pam_unix(sudo:session): session closed for user root
Oct 11 04:47:19 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:47:19 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:47:19 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:47:19 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:47:19 compute-0 sudo[261050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:47:19 compute-0 sudo[261050]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:47:19 compute-0 sudo[261050]: pam_unix(sudo:session): session closed for user root
Oct 11 04:47:19 compute-0 sudo[261075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:47:19 compute-0 sudo[261075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:47:19 compute-0 sudo[261075]: pam_unix(sudo:session): session closed for user root
Oct 11 04:47:20 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:47:20 compute-0 sudo[261100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:47:20 compute-0 sudo[261100]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:47:20 compute-0 sudo[261100]: pam_unix(sudo:session): session closed for user root
Oct 11 04:47:20 compute-0 sshd[188896]: drop connection #0 from [221.159.21.170]:36136 on [38.102.83.148]:22 penalty: failed authentication
Oct 11 04:47:20 compute-0 sudo[261125]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 04:47:20 compute-0 sudo[261125]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:47:20 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v747: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:47:20 compute-0 sudo[261125]: pam_unix(sudo:session): session closed for user root
Oct 11 04:47:20 compute-0 sshd[188896]: drop connection #0 from [221.159.21.170]:36232 on [38.102.83.148]:22 penalty: failed authentication
Oct 11 04:47:20 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:47:20 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:47:20 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:47:20 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:47:20 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:47:20 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:47:20 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev ebf4c337-a11e-4f26-8161-2219e7372220 does not exist
Oct 11 04:47:20 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev 72a8730d-43ac-43b5-b766-972b1c45c9af does not exist
Oct 11 04:47:20 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev 2fef2e5a-f5e1-4258-8567-c167aa977ac8 does not exist
Oct 11 04:47:20 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:47:20 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:47:20 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:47:20 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:47:20 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:47:20 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:47:20 compute-0 sudo[261182]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:47:20 compute-0 sudo[261182]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:47:20 compute-0 sudo[261182]: pam_unix(sudo:session): session closed for user root
Oct 11 04:47:20 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:47:20 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:47:20 compute-0 ceph-mon[74243]: pgmap v747: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:47:20 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:47:20 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:47:20 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:47:20 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:47:20 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:47:20 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:47:20 compute-0 sudo[261207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:47:20 compute-0 sudo[261207]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:47:20 compute-0 sudo[261207]: pam_unix(sudo:session): session closed for user root
Oct 11 04:47:20 compute-0 sudo[261232]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:47:20 compute-0 sudo[261232]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:47:20 compute-0 sudo[261232]: pam_unix(sudo:session): session closed for user root
Oct 11 04:47:21 compute-0 sudo[261257]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 04:47:21 compute-0 sudo[261257]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:47:21 compute-0 podman[261323]: 2025-10-11 04:47:21.329709023 +0000 UTC m=+0.041311998 container create 91dd72974750ac437eb68b9cf7faa28444822591d88346997ea888824833262b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_joliot, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:47:21 compute-0 systemd[1]: Started libpod-conmon-91dd72974750ac437eb68b9cf7faa28444822591d88346997ea888824833262b.scope.
Oct 11 04:47:21 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:47:21 compute-0 podman[261323]: 2025-10-11 04:47:21.309135122 +0000 UTC m=+0.020738107 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:47:21 compute-0 podman[261323]: 2025-10-11 04:47:21.409188466 +0000 UTC m=+0.120791431 container init 91dd72974750ac437eb68b9cf7faa28444822591d88346997ea888824833262b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_joliot, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 11 04:47:21 compute-0 podman[261323]: 2025-10-11 04:47:21.415322332 +0000 UTC m=+0.126925297 container start 91dd72974750ac437eb68b9cf7faa28444822591d88346997ea888824833262b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_joliot, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:47:21 compute-0 podman[261323]: 2025-10-11 04:47:21.418492562 +0000 UTC m=+0.130095517 container attach 91dd72974750ac437eb68b9cf7faa28444822591d88346997ea888824833262b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_joliot, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 11 04:47:21 compute-0 nifty_joliot[261339]: 167 167
Oct 11 04:47:21 compute-0 systemd[1]: libpod-91dd72974750ac437eb68b9cf7faa28444822591d88346997ea888824833262b.scope: Deactivated successfully.
Oct 11 04:47:21 compute-0 podman[261323]: 2025-10-11 04:47:21.422510624 +0000 UTC m=+0.134113609 container died 91dd72974750ac437eb68b9cf7faa28444822591d88346997ea888824833262b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_joliot, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 11 04:47:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-c82bccb07880a368e7c21520046dea41113016557aa09484d38dd1a5fe8b7548-merged.mount: Deactivated successfully.
Oct 11 04:47:21 compute-0 podman[261323]: 2025-10-11 04:47:21.466538369 +0000 UTC m=+0.178141334 container remove 91dd72974750ac437eb68b9cf7faa28444822591d88346997ea888824833262b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_joliot, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:47:21 compute-0 systemd[1]: libpod-conmon-91dd72974750ac437eb68b9cf7faa28444822591d88346997ea888824833262b.scope: Deactivated successfully.
Oct 11 04:47:21 compute-0 podman[261363]: 2025-10-11 04:47:21.694632958 +0000 UTC m=+0.066502856 container create 1475080e2d042b8730535dc11c4f1e6d57580214da713e9f8b3ecf4e296e2fdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_ramanujan, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:47:21 compute-0 systemd[1]: Started libpod-conmon-1475080e2d042b8730535dc11c4f1e6d57580214da713e9f8b3ecf4e296e2fdf.scope.
Oct 11 04:47:21 compute-0 podman[261363]: 2025-10-11 04:47:21.665224533 +0000 UTC m=+0.037094491 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:47:21 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:47:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22f34848acfb93397d971d42fdb7c961b5687e06c001a2a1e65f874ba3d287d9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:47:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22f34848acfb93397d971d42fdb7c961b5687e06c001a2a1e65f874ba3d287d9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:47:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22f34848acfb93397d971d42fdb7c961b5687e06c001a2a1e65f874ba3d287d9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:47:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22f34848acfb93397d971d42fdb7c961b5687e06c001a2a1e65f874ba3d287d9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:47:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22f34848acfb93397d971d42fdb7c961b5687e06c001a2a1e65f874ba3d287d9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:47:21 compute-0 podman[261363]: 2025-10-11 04:47:21.793901883 +0000 UTC m=+0.165771821 container init 1475080e2d042b8730535dc11c4f1e6d57580214da713e9f8b3ecf4e296e2fdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_ramanujan, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:47:21 compute-0 podman[261363]: 2025-10-11 04:47:21.807402975 +0000 UTC m=+0.179272883 container start 1475080e2d042b8730535dc11c4f1e6d57580214da713e9f8b3ecf4e296e2fdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_ramanujan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:47:21 compute-0 podman[261363]: 2025-10-11 04:47:21.811366356 +0000 UTC m=+0.183236324 container attach 1475080e2d042b8730535dc11c4f1e6d57580214da713e9f8b3ecf4e296e2fdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_ramanujan, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True)
Oct 11 04:47:22 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v748: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:47:22 compute-0 sshd[188896]: drop connection #0 from [221.159.21.170]:36288 on [38.102.83.148]:22 penalty: failed authentication
Oct 11 04:47:22 compute-0 hungry_ramanujan[261379]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:47:22 compute-0 hungry_ramanujan[261379]: --> relative data size: 1.0
Oct 11 04:47:22 compute-0 hungry_ramanujan[261379]: --> All data devices are unavailable
Oct 11 04:47:22 compute-0 systemd[1]: libpod-1475080e2d042b8730535dc11c4f1e6d57580214da713e9f8b3ecf4e296e2fdf.scope: Deactivated successfully.
Oct 11 04:47:22 compute-0 systemd[1]: libpod-1475080e2d042b8730535dc11c4f1e6d57580214da713e9f8b3ecf4e296e2fdf.scope: Consumed 1.052s CPU time.
Oct 11 04:47:22 compute-0 podman[261363]: 2025-10-11 04:47:22.920824912 +0000 UTC m=+1.292694850 container died 1475080e2d042b8730535dc11c4f1e6d57580214da713e9f8b3ecf4e296e2fdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_ramanujan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:47:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-22f34848acfb93397d971d42fdb7c961b5687e06c001a2a1e65f874ba3d287d9-merged.mount: Deactivated successfully.
Oct 11 04:47:22 compute-0 podman[261363]: 2025-10-11 04:47:22.993130564 +0000 UTC m=+1.365000442 container remove 1475080e2d042b8730535dc11c4f1e6d57580214da713e9f8b3ecf4e296e2fdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_ramanujan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 11 04:47:23 compute-0 systemd[1]: libpod-conmon-1475080e2d042b8730535dc11c4f1e6d57580214da713e9f8b3ecf4e296e2fdf.scope: Deactivated successfully.
Oct 11 04:47:23 compute-0 sudo[261257]: pam_unix(sudo:session): session closed for user root
Oct 11 04:47:23 compute-0 sudo[261423]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:47:23 compute-0 sudo[261423]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:47:23 compute-0 sudo[261423]: pam_unix(sudo:session): session closed for user root
Oct 11 04:47:23 compute-0 sudo[261448]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:47:23 compute-0 sudo[261448]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:47:23 compute-0 sudo[261448]: pam_unix(sudo:session): session closed for user root
Oct 11 04:47:23 compute-0 sudo[261473]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:47:23 compute-0 sudo[261473]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:47:23 compute-0 sudo[261473]: pam_unix(sudo:session): session closed for user root
Oct 11 04:47:23 compute-0 sudo[261498]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -- lvm list --format json
Oct 11 04:47:23 compute-0 sudo[261498]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:47:23 compute-0 sshd[188896]: drop connection #0 from [221.159.21.170]:36602 on [38.102.83.148]:22 penalty: failed authentication
Oct 11 04:47:23 compute-0 podman[261522]: 2025-10-11 04:47:23.551686675 +0000 UTC m=+0.084826020 container health_status 7d738271425699243ade5f883e5c9759d51b96fd0ce562173990a66d2fbaffdf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_metadata_agent)
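Note that the config_data label in the health_status line above is emitted as a Python literal (single quotes, True) rather than JSON. A minimal sketch, assuming that label text were extracted from such a line; the abbreviated string below is hypothetical, not copied from the log:

import ast

# The config_data value in the podman health_status line is a Python literal,
# so ast.literal_eval (not json.loads) recovers the dict. Abbreviated sample:
config_data = "{'cgroupns': 'host', 'privileged': True, 'restart': 'always'}"
cfg = ast.literal_eval(config_data)
print(cfg["restart"], cfg["privileged"])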
Oct 11 04:47:23 compute-0 ceph-mon[74243]: pgmap v748: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:47:23 compute-0 podman[261580]: 2025-10-11 04:47:23.906006821 +0000 UTC m=+0.058415861 container create e2155319db4fb056bd4c96f571291ff80d44eaaed3a71a18accba99b23961f60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_mirzakhani, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 11 04:47:23 compute-0 systemd[1]: Started libpod-conmon-e2155319db4fb056bd4c96f571291ff80d44eaaed3a71a18accba99b23961f60.scope.
Oct 11 04:47:23 compute-0 podman[261580]: 2025-10-11 04:47:23.87988854 +0000 UTC m=+0.032297660 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:47:23 compute-0 sshd[188896]: drop connection #0 from [221.159.21.170]:36682 on [38.102.83.148]:22 penalty: failed authentication
Oct 11 04:47:23 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:47:24 compute-0 podman[261580]: 2025-10-11 04:47:24.007088722 +0000 UTC m=+0.159497782 container init e2155319db4fb056bd4c96f571291ff80d44eaaed3a71a18accba99b23961f60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_mirzakhani, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 11 04:47:24 compute-0 podman[261580]: 2025-10-11 04:47:24.025255692 +0000 UTC m=+0.177664732 container start e2155319db4fb056bd4c96f571291ff80d44eaaed3a71a18accba99b23961f60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:47:24 compute-0 podman[261580]: 2025-10-11 04:47:24.02870332 +0000 UTC m=+0.181112380 container attach e2155319db4fb056bd4c96f571291ff80d44eaaed3a71a18accba99b23961f60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:47:24 compute-0 charming_mirzakhani[261596]: 167 167
Oct 11 04:47:24 compute-0 systemd[1]: libpod-e2155319db4fb056bd4c96f571291ff80d44eaaed3a71a18accba99b23961f60.scope: Deactivated successfully.
Oct 11 04:47:24 compute-0 conmon[261596]: conmon e2155319db4fb056bd4c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e2155319db4fb056bd4c96f571291ff80d44eaaed3a71a18accba99b23961f60.scope/container/memory.events
Oct 11 04:47:24 compute-0 podman[261580]: 2025-10-11 04:47:24.037458961 +0000 UTC m=+0.189868041 container died e2155319db4fb056bd4c96f571291ff80d44eaaed3a71a18accba99b23961f60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 11 04:47:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-89af534c226ea615f78b6c788ea5fc4b2e5269a43b6ee0076f73e5e49bbb28f7-merged.mount: Deactivated successfully.
Oct 11 04:47:24 compute-0 podman[261580]: 2025-10-11 04:47:24.077434384 +0000 UTC m=+0.229843434 container remove e2155319db4fb056bd4c96f571291ff80d44eaaed3a71a18accba99b23961f60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_mirzakhani, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:47:24 compute-0 systemd[1]: libpod-conmon-e2155319db4fb056bd4c96f571291ff80d44eaaed3a71a18accba99b23961f60.scope: Deactivated successfully.
Oct 11 04:47:24 compute-0 podman[261620]: 2025-10-11 04:47:24.291912068 +0000 UTC m=+0.067643855 container create 1166673a6fe410a5c661c06eade9be2448418580d0e09be83d6c6b49d6e84ae6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_nobel, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 11 04:47:24 compute-0 systemd[1]: Started libpod-conmon-1166673a6fe410a5c661c06eade9be2448418580d0e09be83d6c6b49d6e84ae6.scope.
Oct 11 04:47:24 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:47:24 compute-0 podman[261620]: 2025-10-11 04:47:24.268237398 +0000 UTC m=+0.043969225 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:47:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b346356f3444125d283d427d33959fc21643207f84d909ead9e93b2a4e433691/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:47:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b346356f3444125d283d427d33959fc21643207f84d909ead9e93b2a4e433691/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:47:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b346356f3444125d283d427d33959fc21643207f84d909ead9e93b2a4e433691/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:47:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b346356f3444125d283d427d33959fc21643207f84d909ead9e93b2a4e433691/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:47:24 compute-0 podman[261620]: 2025-10-11 04:47:24.378091251 +0000 UTC m=+0.153823078 container init 1166673a6fe410a5c661c06eade9be2448418580d0e09be83d6c6b49d6e84ae6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True)
Oct 11 04:47:24 compute-0 podman[261620]: 2025-10-11 04:47:24.392526437 +0000 UTC m=+0.168258224 container start 1166673a6fe410a5c661c06eade9be2448418580d0e09be83d6c6b49d6e84ae6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_nobel, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 11 04:47:24 compute-0 podman[261620]: 2025-10-11 04:47:24.39581635 +0000 UTC m=+0.171548187 container attach 1166673a6fe410a5c661c06eade9be2448418580d0e09be83d6c6b49d6e84ae6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:47:24 compute-0 sshd[188896]: drop connection #0 from [221.159.21.170]:36790 on [38.102.83.148]:22 penalty: failed authentication
Oct 11 04:47:24 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v749: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:47:25 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:47:25 compute-0 agitated_nobel[261637]: {
Oct 11 04:47:25 compute-0 agitated_nobel[261637]:     "0": [
Oct 11 04:47:25 compute-0 agitated_nobel[261637]:         {
Oct 11 04:47:25 compute-0 agitated_nobel[261637]:             "devices": [
Oct 11 04:47:25 compute-0 agitated_nobel[261637]:                 "/dev/loop3"
Oct 11 04:47:25 compute-0 agitated_nobel[261637]:             ],
Oct 11 04:47:25 compute-0 agitated_nobel[261637]:             "lv_name": "ceph_lv0",
Oct 11 04:47:25 compute-0 agitated_nobel[261637]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:47:25 compute-0 agitated_nobel[261637]:             "lv_size": "21470642176",
Oct 11 04:47:25 compute-0 agitated_nobel[261637]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29ed28f5-c2da-4c6f-bb64-dc7391248f4a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:47:25 compute-0 agitated_nobel[261637]:             "lv_uuid": "SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf",
Oct 11 04:47:25 compute-0 agitated_nobel[261637]:             "name": "ceph_lv0",
Oct 11 04:47:25 compute-0 agitated_nobel[261637]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:47:25 compute-0 agitated_nobel[261637]:             "tags": {
Oct 11 04:47:25 compute-0 agitated_nobel[261637]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:47:25 compute-0 agitated_nobel[261637]:                 "ceph.block_uuid": "SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf",
Oct 11 04:47:25 compute-0 agitated_nobel[261637]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:47:25 compute-0 agitated_nobel[261637]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:47:25 compute-0 agitated_nobel[261637]:                 "ceph.cluster_name": "ceph",
Oct 11 04:47:25 compute-0 agitated_nobel[261637]:                 "ceph.crush_device_class": "",
Oct 11 04:47:25 compute-0 agitated_nobel[261637]:                 "ceph.encrypted": "0",
Oct 11 04:47:25 compute-0 agitated_nobel[261637]:                 "ceph.osd_fsid": "29ed28f5-c2da-4c6f-bb64-dc7391248f4a",
Oct 11 04:47:25 compute-0 agitated_nobel[261637]:                 "ceph.osd_id": "0",
Oct 11 04:47:25 compute-0 agitated_nobel[261637]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:47:25 compute-0 agitated_nobel[261637]:                 "ceph.type": "block",
Oct 11 04:47:25 compute-0 agitated_nobel[261637]:                 "ceph.vdo": "0"
Oct 11 04:47:25 compute-0 agitated_nobel[261637]:             },
Oct 11 04:47:25 compute-0 agitated_nobel[261637]:             "type": "block",
Oct 11 04:47:25 compute-0 agitated_nobel[261637]:             "vg_name": "ceph_vg0"
Oct 11 04:47:25 compute-0 agitated_nobel[261637]:         }
Oct 11 04:47:25 compute-0 agitated_nobel[261637]:     ],
Oct 11 04:47:25 compute-0 agitated_nobel[261637]:     "1": [
Oct 11 04:47:25 compute-0 agitated_nobel[261637]:         {
Oct 11 04:47:25 compute-0 agitated_nobel[261637]:             "devices": [
Oct 11 04:47:25 compute-0 agitated_nobel[261637]:                 "/dev/loop4"
Oct 11 04:47:25 compute-0 agitated_nobel[261637]:             ],
Oct 11 04:47:25 compute-0 agitated_nobel[261637]:             "lv_name": "ceph_lv1",
Oct 11 04:47:25 compute-0 agitated_nobel[261637]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:47:25 compute-0 agitated_nobel[261637]:             "lv_size": "21470642176",
Oct 11 04:47:25 compute-0 agitated_nobel[261637]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=235849fc-4683-43e5-9b6a-a0d6f8d1cee8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:47:25 compute-0 agitated_nobel[261637]:             "lv_uuid": "N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol",
Oct 11 04:47:25 compute-0 agitated_nobel[261637]:             "name": "ceph_lv1",
Oct 11 04:47:25 compute-0 agitated_nobel[261637]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:47:25 compute-0 agitated_nobel[261637]:             "tags": {
Oct 11 04:47:25 compute-0 agitated_nobel[261637]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:47:25 compute-0 agitated_nobel[261637]:                 "ceph.block_uuid": "N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol",
Oct 11 04:47:25 compute-0 agitated_nobel[261637]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:47:25 compute-0 agitated_nobel[261637]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:47:25 compute-0 agitated_nobel[261637]:                 "ceph.cluster_name": "ceph",
Oct 11 04:47:25 compute-0 agitated_nobel[261637]:                 "ceph.crush_device_class": "",
Oct 11 04:47:25 compute-0 agitated_nobel[261637]:                 "ceph.encrypted": "0",
Oct 11 04:47:25 compute-0 agitated_nobel[261637]:                 "ceph.osd_fsid": "235849fc-4683-43e5-9b6a-a0d6f8d1cee8",
Oct 11 04:47:25 compute-0 agitated_nobel[261637]:                 "ceph.osd_id": "1",
Oct 11 04:47:25 compute-0 agitated_nobel[261637]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:47:25 compute-0 agitated_nobel[261637]:                 "ceph.type": "block",
Oct 11 04:47:25 compute-0 agitated_nobel[261637]:                 "ceph.vdo": "0"
Oct 11 04:47:25 compute-0 agitated_nobel[261637]:             },
Oct 11 04:47:25 compute-0 agitated_nobel[261637]:             "type": "block",
Oct 11 04:47:25 compute-0 agitated_nobel[261637]:             "vg_name": "ceph_vg1"
Oct 11 04:47:25 compute-0 agitated_nobel[261637]:         }
Oct 11 04:47:25 compute-0 agitated_nobel[261637]:     ],
Oct 11 04:47:25 compute-0 agitated_nobel[261637]:     "2": [
Oct 11 04:47:25 compute-0 agitated_nobel[261637]:         {
Oct 11 04:47:25 compute-0 agitated_nobel[261637]:             "devices": [
Oct 11 04:47:25 compute-0 agitated_nobel[261637]:                 "/dev/loop5"
Oct 11 04:47:25 compute-0 agitated_nobel[261637]:             ],
Oct 11 04:47:25 compute-0 agitated_nobel[261637]:             "lv_name": "ceph_lv2",
Oct 11 04:47:25 compute-0 agitated_nobel[261637]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:47:25 compute-0 agitated_nobel[261637]:             "lv_size": "21470642176",
Oct 11 04:47:25 compute-0 agitated_nobel[261637]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=30486846-2cc7-4787-8e36-0fb42ef328c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:47:25 compute-0 agitated_nobel[261637]:             "lv_uuid": "HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a",
Oct 11 04:47:25 compute-0 agitated_nobel[261637]:             "name": "ceph_lv2",
Oct 11 04:47:25 compute-0 agitated_nobel[261637]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:47:25 compute-0 agitated_nobel[261637]:             "tags": {
Oct 11 04:47:25 compute-0 agitated_nobel[261637]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:47:25 compute-0 agitated_nobel[261637]:                 "ceph.block_uuid": "HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a",
Oct 11 04:47:25 compute-0 agitated_nobel[261637]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:47:25 compute-0 agitated_nobel[261637]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:47:25 compute-0 agitated_nobel[261637]:                 "ceph.cluster_name": "ceph",
Oct 11 04:47:25 compute-0 agitated_nobel[261637]:                 "ceph.crush_device_class": "",
Oct 11 04:47:25 compute-0 agitated_nobel[261637]:                 "ceph.encrypted": "0",
Oct 11 04:47:25 compute-0 agitated_nobel[261637]:                 "ceph.osd_fsid": "30486846-2cc7-4787-8e36-0fb42ef328c5",
Oct 11 04:47:25 compute-0 agitated_nobel[261637]:                 "ceph.osd_id": "2",
Oct 11 04:47:25 compute-0 agitated_nobel[261637]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:47:25 compute-0 agitated_nobel[261637]:                 "ceph.type": "block",
Oct 11 04:47:25 compute-0 agitated_nobel[261637]:                 "ceph.vdo": "0"
Oct 11 04:47:25 compute-0 agitated_nobel[261637]:             },
Oct 11 04:47:25 compute-0 agitated_nobel[261637]:             "type": "block",
Oct 11 04:47:25 compute-0 agitated_nobel[261637]:             "vg_name": "ceph_vg2"
Oct 11 04:47:25 compute-0 agitated_nobel[261637]:         }
Oct 11 04:47:25 compute-0 agitated_nobel[261637]:     ]
Oct 11 04:47:25 compute-0 agitated_nobel[261637]: }
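The JSON block printed by the agitated_nobel container above is the output of the ceph-volume lvm list --format json call issued at 04:47:23: a map from OSD id to a list of logical volumes, each carrying its ceph.* LV tags. A minimal sketch of reading that shape, assuming the block were saved to a file (the filename lvm_list.json is hypothetical):

import json

# Hypothetical capture of the JSON emitted by ceph-volume lvm list above.
with open("lvm_list.json") as f:
    lvm_list = json.load(f)  # top-level keys are OSD ids: "0", "1", "2"

for osd_id, lvs in sorted(lvm_list.items(), key=lambda kv: int(kv[0])):
    for lv in lvs:  # each OSD id maps to a list of logical volumes
        tags = lv.get("tags", {})
        print(osd_id, lv["lv_path"], tags.get("ceph.osd_fsid", ""), lv["type"])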
Oct 11 04:47:25 compute-0 systemd[1]: libpod-1166673a6fe410a5c661c06eade9be2448418580d0e09be83d6c6b49d6e84ae6.scope: Deactivated successfully.
Oct 11 04:47:25 compute-0 podman[261620]: 2025-10-11 04:47:25.148047078 +0000 UTC m=+0.923778875 container died 1166673a6fe410a5c661c06eade9be2448418580d0e09be83d6c6b49d6e84ae6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 11 04:47:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-b346356f3444125d283d427d33959fc21643207f84d909ead9e93b2a4e433691-merged.mount: Deactivated successfully.
Oct 11 04:47:25 compute-0 podman[261620]: 2025-10-11 04:47:25.227236604 +0000 UTC m=+1.002968391 container remove 1166673a6fe410a5c661c06eade9be2448418580d0e09be83d6c6b49d6e84ae6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_nobel, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 11 04:47:25 compute-0 systemd[1]: libpod-conmon-1166673a6fe410a5c661c06eade9be2448418580d0e09be83d6c6b49d6e84ae6.scope: Deactivated successfully.
Oct 11 04:47:25 compute-0 sudo[261498]: pam_unix(sudo:session): session closed for user root
Oct 11 04:47:25 compute-0 sudo[261660]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:47:25 compute-0 sudo[261660]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:47:25 compute-0 sudo[261660]: pam_unix(sudo:session): session closed for user root
Oct 11 04:47:25 compute-0 sudo[261685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:47:25 compute-0 sudo[261685]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:47:25 compute-0 sudo[261685]: pam_unix(sudo:session): session closed for user root
Oct 11 04:47:25 compute-0 sudo[261710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:47:25 compute-0 sudo[261710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:47:25 compute-0 sudo[261710]: pam_unix(sudo:session): session closed for user root
Oct 11 04:47:25 compute-0 sudo[261735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -- raw list --format json
Oct 11 04:47:25 compute-0 sudo[261735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:47:25 compute-0 ceph-mon[74243]: pgmap v749: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:47:25 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0) v1
Oct 11 04:47:25 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4230892739' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Oct 11 04:47:25 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.14343 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Oct 11 04:47:25 compute-0 ceph-mgr[74542]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Oct 11 04:47:25 compute-0 ceph-mgr[74542]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Oct 11 04:47:25 compute-0 podman[261798]: 2025-10-11 04:47:25.944731182 +0000 UTC m=+0.062000762 container create 5279c99cda5426b7e3d9f2c34d211fd9310ac1f2ff5466e92a086d9f81fcca1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_hodgkin, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Oct 11 04:47:26 compute-0 systemd[1]: Started libpod-conmon-5279c99cda5426b7e3d9f2c34d211fd9310ac1f2ff5466e92a086d9f81fcca1a.scope.
Oct 11 04:47:26 compute-0 podman[261798]: 2025-10-11 04:47:25.915650435 +0000 UTC m=+0.032920085 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:47:26 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:47:26 compute-0 sshd[188896]: drop connection #0 from [221.159.21.170]:36894 on [38.102.83.148]:22 penalty: failed authentication
Oct 11 04:47:26 compute-0 podman[261798]: 2025-10-11 04:47:26.051024475 +0000 UTC m=+0.168294095 container init 5279c99cda5426b7e3d9f2c34d211fd9310ac1f2ff5466e92a086d9f81fcca1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_hodgkin, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 11 04:47:26 compute-0 podman[261798]: 2025-10-11 04:47:26.056886863 +0000 UTC m=+0.174156473 container start 5279c99cda5426b7e3d9f2c34d211fd9310ac1f2ff5466e92a086d9f81fcca1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_hodgkin, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:47:26 compute-0 podman[261798]: 2025-10-11 04:47:26.0610978 +0000 UTC m=+0.178367420 container attach 5279c99cda5426b7e3d9f2c34d211fd9310ac1f2ff5466e92a086d9f81fcca1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:47:26 compute-0 angry_hodgkin[261815]: 167 167
Oct 11 04:47:26 compute-0 systemd[1]: libpod-5279c99cda5426b7e3d9f2c34d211fd9310ac1f2ff5466e92a086d9f81fcca1a.scope: Deactivated successfully.
Oct 11 04:47:26 compute-0 podman[261798]: 2025-10-11 04:47:26.065079921 +0000 UTC m=+0.182349521 container died 5279c99cda5426b7e3d9f2c34d211fd9310ac1f2ff5466e92a086d9f81fcca1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_hodgkin, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:47:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e1f8876c2059ab742a5afbc69b00c54316510b0bcbe36367ea333085178124a-merged.mount: Deactivated successfully.
Oct 11 04:47:26 compute-0 podman[261798]: 2025-10-11 04:47:26.103068313 +0000 UTC m=+0.220337883 container remove 5279c99cda5426b7e3d9f2c34d211fd9310ac1f2ff5466e92a086d9f81fcca1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_hodgkin, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:47:26 compute-0 systemd[1]: libpod-conmon-5279c99cda5426b7e3d9f2c34d211fd9310ac1f2ff5466e92a086d9f81fcca1a.scope: Deactivated successfully.
Oct 11 04:47:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:47:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:47:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:47:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:47:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:47:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:47:26 compute-0 podman[261840]: 2025-10-11 04:47:26.309250906 +0000 UTC m=+0.039834681 container create 5ef09f95b5ea515e106adeb967a541841071f85b6aa76fe07e98a35f8c3a638a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_gauss, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 11 04:47:26 compute-0 systemd[1]: Started libpod-conmon-5ef09f95b5ea515e106adeb967a541841071f85b6aa76fe07e98a35f8c3a638a.scope.
Oct 11 04:47:26 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:47:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d8a9c46288d18d324fffa4f5b9a3ac653a9b203038b5d9b51f80d559bf63c21/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:47:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d8a9c46288d18d324fffa4f5b9a3ac653a9b203038b5d9b51f80d559bf63c21/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:47:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d8a9c46288d18d324fffa4f5b9a3ac653a9b203038b5d9b51f80d559bf63c21/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:47:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d8a9c46288d18d324fffa4f5b9a3ac653a9b203038b5d9b51f80d559bf63c21/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:47:26 compute-0 podman[261840]: 2025-10-11 04:47:26.293686971 +0000 UTC m=+0.024270736 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:47:26 compute-0 podman[261840]: 2025-10-11 04:47:26.399761059 +0000 UTC m=+0.130344814 container init 5ef09f95b5ea515e106adeb967a541841071f85b6aa76fe07e98a35f8c3a638a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_gauss, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:47:26 compute-0 podman[261840]: 2025-10-11 04:47:26.407391532 +0000 UTC m=+0.137975277 container start 5ef09f95b5ea515e106adeb967a541841071f85b6aa76fe07e98a35f8c3a638a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_gauss, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True)
Oct 11 04:47:26 compute-0 podman[261840]: 2025-10-11 04:47:26.413185179 +0000 UTC m=+0.143768964 container attach 5ef09f95b5ea515e106adeb967a541841071f85b6aa76fe07e98a35f8c3a638a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_gauss, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:47:26 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v750: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:47:26 compute-0 sshd[188896]: drop connection #0 from [221.159.21.170]:37104 on [38.102.83.148]:22 penalty: failed authentication
Oct 11 04:47:26 compute-0 ceph-mon[74243]: from='client.? 192.168.122.10:0/4230892739' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Oct 11 04:47:27 compute-0 sshd[188896]: drop connection #0 from [221.159.21.170]:37168 on [38.102.83.148]:22 penalty: failed authentication
Oct 11 04:47:27 compute-0 pedantic_gauss[261857]: {
Oct 11 04:47:27 compute-0 pedantic_gauss[261857]:     "235849fc-4683-43e5-9b6a-a0d6f8d1cee8": {
Oct 11 04:47:27 compute-0 pedantic_gauss[261857]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:47:27 compute-0 pedantic_gauss[261857]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:47:27 compute-0 pedantic_gauss[261857]:         "osd_id": 1,
Oct 11 04:47:27 compute-0 pedantic_gauss[261857]:         "osd_uuid": "235849fc-4683-43e5-9b6a-a0d6f8d1cee8",
Oct 11 04:47:27 compute-0 pedantic_gauss[261857]:         "type": "bluestore"
Oct 11 04:47:27 compute-0 pedantic_gauss[261857]:     },
Oct 11 04:47:27 compute-0 pedantic_gauss[261857]:     "29ed28f5-c2da-4c6f-bb64-dc7391248f4a": {
Oct 11 04:47:27 compute-0 pedantic_gauss[261857]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:47:27 compute-0 pedantic_gauss[261857]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:47:27 compute-0 pedantic_gauss[261857]:         "osd_id": 0,
Oct 11 04:47:27 compute-0 pedantic_gauss[261857]:         "osd_uuid": "29ed28f5-c2da-4c6f-bb64-dc7391248f4a",
Oct 11 04:47:27 compute-0 pedantic_gauss[261857]:         "type": "bluestore"
Oct 11 04:47:27 compute-0 pedantic_gauss[261857]:     },
Oct 11 04:47:27 compute-0 pedantic_gauss[261857]:     "30486846-2cc7-4787-8e36-0fb42ef328c5": {
Oct 11 04:47:27 compute-0 pedantic_gauss[261857]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:47:27 compute-0 pedantic_gauss[261857]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:47:27 compute-0 pedantic_gauss[261857]:         "osd_id": 2,
Oct 11 04:47:27 compute-0 pedantic_gauss[261857]:         "osd_uuid": "30486846-2cc7-4787-8e36-0fb42ef328c5",
Oct 11 04:47:27 compute-0 pedantic_gauss[261857]:         "type": "bluestore"
Oct 11 04:47:27 compute-0 pedantic_gauss[261857]:     }
Oct 11 04:47:27 compute-0 pedantic_gauss[261857]: }
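The block from pedantic_gauss above is the matching ceph-volume raw list --format json output requested at 04:47:25, keyed by osd_uuid rather than OSD id. A minimal sketch of correlating it with the earlier lvm list output, under the same hypothetical-filename assumption:

import json

# Hypothetical captures of the two ceph-volume outputs shown in this log.
raw = json.load(open("raw_list.json"))   # keyed by osd_uuid
lvm = json.load(open("lvm_list.json"))   # keyed by osd_id

by_osd_id = {entry["osd_id"]: entry for entry in raw.values()}
for osd_id, lvs in sorted(lvm.items(), key=lambda kv: int(kv[0])):
    r = by_osd_id.get(int(osd_id), {})
    # Both listings should agree on the fsid and map each LV to a /dev/mapper device.
    print(osd_id, lvs[0]["lv_path"], "->", r.get("device"), r.get("type"))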
Oct 11 04:47:27 compute-0 systemd[1]: libpod-5ef09f95b5ea515e106adeb967a541841071f85b6aa76fe07e98a35f8c3a638a.scope: Deactivated successfully.
Oct 11 04:47:27 compute-0 systemd[1]: libpod-5ef09f95b5ea515e106adeb967a541841071f85b6aa76fe07e98a35f8c3a638a.scope: Consumed 1.077s CPU time.
Oct 11 04:47:27 compute-0 podman[261890]: 2025-10-11 04:47:27.539988026 +0000 UTC m=+0.043095343 container died 5ef09f95b5ea515e106adeb967a541841071f85b6aa76fe07e98a35f8c3a638a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_gauss, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:47:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-1d8a9c46288d18d324fffa4f5b9a3ac653a9b203038b5d9b51f80d559bf63c21-merged.mount: Deactivated successfully.
Oct 11 04:47:27 compute-0 sshd[188896]: drop connection #0 from [221.159.21.170]:37234 on [38.102.83.148]:22 penalty: failed authentication
Oct 11 04:47:27 compute-0 podman[261890]: 2025-10-11 04:47:27.602400807 +0000 UTC m=+0.105508114 container remove 5ef09f95b5ea515e106adeb967a541841071f85b6aa76fe07e98a35f8c3a638a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_gauss, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:47:27 compute-0 systemd[1]: libpod-conmon-5ef09f95b5ea515e106adeb967a541841071f85b6aa76fe07e98a35f8c3a638a.scope: Deactivated successfully.
Oct 11 04:47:27 compute-0 sudo[261735]: pam_unix(sudo:session): session closed for user root
Oct 11 04:47:27 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:47:27 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:47:27 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:47:27 compute-0 ceph-mon[74243]: from='client.14343 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Oct 11 04:47:27 compute-0 ceph-mon[74243]: pgmap v750: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:47:27 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:47:27 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev 5e877b79-93c6-41fd-95a0-4d136545a2b6 does not exist
Oct 11 04:47:27 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev 2c9faf26-6cb6-4d5f-bcac-209e2eae2fd1 does not exist
Oct 11 04:47:27 compute-0 sudo[261905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:47:27 compute-0 sudo[261905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:47:27 compute-0 sudo[261905]: pam_unix(sudo:session): session closed for user root
Oct 11 04:47:27 compute-0 sudo[261930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 04:47:27 compute-0 sudo[261930]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:47:27 compute-0 sudo[261930]: pam_unix(sudo:session): session closed for user root
Oct 11 04:47:28 compute-0 sshd[188896]: drop connection #0 from [221.159.21.170]:37286 on [38.102.83.148]:22 penalty: failed authentication
Oct 11 04:47:28 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v751: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:47:28 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:47:28 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:47:28 compute-0 ceph-mon[74243]: pgmap v751: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:47:28 compute-0 sshd[188896]: drop connection #0 from [221.159.21.170]:37370 on [38.102.83.148]:22 penalty: failed authentication
Oct 11 04:47:29 compute-0 sshd[188896]: drop connection #0 from [221.159.21.170]:37472 on [38.102.83.148]:22 penalty: failed authentication
Oct 11 04:47:30 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:47:30 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v752: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:47:30 compute-0 sshd[188896]: drop connection #0 from [221.159.21.170]:37570 on [38.102.83.148]:22 penalty: failed authentication
Oct 11 04:47:31 compute-0 sshd[188896]: drop connection #0 from [221.159.21.170]:37810 on [38.102.83.148]:22 penalty: failed authentication
Oct 11 04:47:31 compute-0 ceph-mon[74243]: pgmap v752: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:47:31 compute-0 sshd[188896]: drop connection #0 from [221.159.21.170]:37894 on [38.102.83.148]:22 penalty: failed authentication
Oct 11 04:47:32 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v753: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:47:33 compute-0 ceph-mon[74243]: pgmap v753: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:47:34 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v754: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:47:34 compute-0 ceph-mon[74243]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Oct 11 04:47:34 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:47:34.644556) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 04:47:34 compute-0 ceph-mon[74243]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Oct 11 04:47:34 compute-0 ceph-mon[74243]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760158054644606, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1490, "num_deletes": 251, "total_data_size": 2395215, "memory_usage": 2435872, "flush_reason": "Manual Compaction"}
Oct 11 04:47:34 compute-0 ceph-mon[74243]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Oct 11 04:47:34 compute-0 ceph-mon[74243]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760158054664869, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 2340844, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14797, "largest_seqno": 16286, "table_properties": {"data_size": 2333847, "index_size": 4068, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14254, "raw_average_key_size": 19, "raw_value_size": 2319895, "raw_average_value_size": 3208, "num_data_blocks": 186, "num_entries": 723, "num_filter_entries": 723, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760157900, "oldest_key_time": 1760157900, "file_creation_time": 1760158054, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7a520806-b9ee-4391-a2e1-17ca2b78e946", "db_session_id": "LQX577T3ABHDCRGGY4EM", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:47:34 compute-0 ceph-mon[74243]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 20384 microseconds, and 9420 cpu microseconds.
Oct 11 04:47:34 compute-0 ceph-mon[74243]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 04:47:34 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:47:34.664937) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 2340844 bytes OK
Oct 11 04:47:34 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:47:34.664966) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Oct 11 04:47:34 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:47:34.666655) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Oct 11 04:47:34 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:47:34.666678) EVENT_LOG_v1 {"time_micros": 1760158054666669, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 04:47:34 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:47:34.666699) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 04:47:34 compute-0 ceph-mon[74243]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 2388662, prev total WAL file size 2388662, number of live WAL files 2.
Oct 11 04:47:34 compute-0 ceph-mon[74243]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:47:34 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:47:34.667984) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Oct 11 04:47:34 compute-0 ceph-mon[74243]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 04:47:34 compute-0 ceph-mon[74243]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(2285KB)], [35(7055KB)]
Oct 11 04:47:34 compute-0 ceph-mon[74243]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760158054668051, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 9565915, "oldest_snapshot_seqno": -1}
Oct 11 04:47:34 compute-0 ceph-mon[74243]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 3978 keys, 7780024 bytes, temperature: kUnknown
Oct 11 04:47:34 compute-0 ceph-mon[74243]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760158054713740, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 7780024, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7751070, "index_size": 17900, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9989, "raw_key_size": 97180, "raw_average_key_size": 24, "raw_value_size": 7676745, "raw_average_value_size": 1929, "num_data_blocks": 758, "num_entries": 3978, "num_filter_entries": 3978, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760156706, "oldest_key_time": 0, "file_creation_time": 1760158054, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7a520806-b9ee-4391-a2e1-17ca2b78e946", "db_session_id": "LQX577T3ABHDCRGGY4EM", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:47:34 compute-0 ceph-mon[74243]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 04:47:34 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:47:34.714068) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 7780024 bytes
Oct 11 04:47:34 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:47:34.715705) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 208.9 rd, 169.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 6.9 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(7.4) write-amplify(3.3) OK, records in: 4492, records dropped: 514 output_compression: NoCompression
Oct 11 04:47:34 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:47:34.715735) EVENT_LOG_v1 {"time_micros": 1760158054715720, "job": 16, "event": "compaction_finished", "compaction_time_micros": 45783, "compaction_time_cpu_micros": 31390, "output_level": 6, "num_output_files": 1, "total_output_size": 7780024, "num_input_records": 4492, "num_output_records": 3978, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 04:47:34 compute-0 ceph-mon[74243]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:47:34 compute-0 ceph-mon[74243]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760158054716595, "job": 16, "event": "table_file_deletion", "file_number": 37}
Oct 11 04:47:34 compute-0 ceph-mon[74243]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:47:34 compute-0 ceph-mon[74243]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760158054718820, "job": 16, "event": "table_file_deletion", "file_number": 35}
Oct 11 04:47:34 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:47:34.667869) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:47:34 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:47:34.718917) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:47:34 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:47:34.718925) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:47:34 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:47:34.718928) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:47:34 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:47:34.718931) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:47:34 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:47:34.718934) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:47:35 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:47:35 compute-0 ceph-mon[74243]: pgmap v754: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:47:36 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v755: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:47:37 compute-0 ceph-mon[74243]: pgmap v755: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:47:38 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v756: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:47:39 compute-0 ceph-mon[74243]: pgmap v756: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:47:40 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:47:40 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v757: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:47:41 compute-0 ceph-mon[74243]: pgmap v757: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:47:42 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v758: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:47:43 compute-0 podman[261956]: 2025-10-11 04:47:43.442285528 +0000 UTC m=+0.092217118 container health_status 4f5a84ad32cb899a70b226fbd9c8f419e9440a5ac99b188805a8bf8e9253a2d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=iscsid)
Oct 11 04:47:43 compute-0 podman[261957]: 2025-10-11 04:47:43.452161938 +0000 UTC m=+0.090123344 container health_status 981a12d55b51d26e636fa4bc8b08d383168dc6afe664f12473554b2b0c589967 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, config_id=multipathd)
Oct 11 04:47:43 compute-0 podman[261955]: 2025-10-11 04:47:43.486723904 +0000 UTC m=+0.136304005 container health_status 086abfbe8dbe9e4742c5d0e8540a7653c1c0a5c00ecda3b6e948040a718938cb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 11 04:47:43 compute-0 ceph-mon[74243]: pgmap v758: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:47:44 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v759: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:47:44 compute-0 ceph-mon[74243]: pgmap v759: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:47:45 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:47:46 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v760: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:47:47 compute-0 ceph-mon[74243]: pgmap v760: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:47:48 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v761: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:47:49 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0) v1
Oct 11 04:47:49 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2765352505' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Oct 11 04:47:49 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.14345 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Oct 11 04:47:49 compute-0 ceph-mgr[74542]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Oct 11 04:47:49 compute-0 ceph-mgr[74542]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Oct 11 04:47:49 compute-0 ceph-mon[74243]: pgmap v761: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:47:49 compute-0 ceph-mon[74243]: from='client.? 192.168.122.10:0/2765352505' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Oct 11 04:47:50 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:47:50 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v762: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:47:50 compute-0 ceph-mon[74243]: from='client.14345 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Oct 11 04:47:51 compute-0 ceph-mon[74243]: pgmap v762: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:47:52 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v763: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:47:52 compute-0 ceph-mon[74243]: pgmap v763: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:47:54 compute-0 podman[262020]: 2025-10-11 04:47:54.428731061 +0000 UTC m=+0.078179282 container health_status 7d738271425699243ade5f883e5c9759d51b96fd0ce562173990a66d2fbaffdf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 11 04:47:54 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v764: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:47:55 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:47:55 compute-0 ceph-mon[74243]: pgmap v764: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:47:56 compute-0 ceph-mgr[74542]: [balancer INFO root] Optimize plan auto_2025-10-11_04:47:56
Oct 11 04:47:56 compute-0 ceph-mgr[74542]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:47:56 compute-0 ceph-mgr[74542]: [balancer INFO root] do_upmap
Oct 11 04:47:56 compute-0 ceph-mgr[74542]: [balancer INFO root] pools ['backups', 'default.rgw.control', 'volumes', 'default.rgw.log', 'default.rgw.meta', 'images', 'vms', '.rgw.root', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr']
Oct 11 04:47:56 compute-0 ceph-mgr[74542]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:47:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:47:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:47:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:47:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:47:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:47:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:47:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:47:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:47:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:47:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:47:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:47:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:47:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:47:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:47:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:47:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:47:56 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v765: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 2 op/s
Oct 11 04:47:57 compute-0 ceph-mon[74243]: pgmap v765: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 2 op/s
Oct 11 04:47:58 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v766: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 23 op/s
Oct 11 04:47:59 compute-0 ceph-mon[74243]: pgmap v766: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 23 op/s
Oct 11 04:48:00 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:48:00 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v767: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 23 op/s
Oct 11 04:48:01 compute-0 ceph-mon[74243]: pgmap v767: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 23 op/s
Oct 11 04:48:02 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v768: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 11 04:48:02 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:48:02 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2121341010' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:48:02 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:48:02 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2121341010' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:48:03 compute-0 ceph-mon[74243]: pgmap v768: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 11 04:48:03 compute-0 ceph-mon[74243]: from='client.? 192.168.122.10:0/2121341010' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:48:03 compute-0 ceph-mon[74243]: from='client.? 192.168.122.10:0/2121341010' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:48:04 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v769: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 11 04:48:05 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:48:05 compute-0 ceph-mon[74243]: pgmap v769: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 11 04:48:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:48:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:48:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:48:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:48:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:48:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:48:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:48:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:48:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:48:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:48:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:48:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:48:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 11 04:48:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:48:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:48:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:48:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:48:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:48:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:48:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:48:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:48:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:48:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 04:48:06 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v770: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 11 04:48:07 compute-0 ceph-mon[74243]: pgmap v770: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 11 04:48:08 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v771: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 56 op/s
Oct 11 04:48:08 compute-0 ceph-mon[74243]: pgmap v771: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 56 op/s
Oct 11 04:48:10 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:48:10 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v772: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 0 B/s wr, 35 op/s
Oct 11 04:48:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:48:11.012 161813 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:48:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:48:11.012 161813 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:48:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:48:11.012 161813 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:48:11 compute-0 ceph-mon[74243]: pgmap v772: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 0 B/s wr, 35 op/s
Oct 11 04:48:12 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v773: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 0 B/s wr, 35 op/s
Oct 11 04:48:12 compute-0 nova_compute[259400]: 2025-10-11 04:48:12.619 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:48:12 compute-0 nova_compute[259400]: 2025-10-11 04:48:12.646 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:48:12 compute-0 nova_compute[259400]: 2025-10-11 04:48:12.647 2 DEBUG nova.compute.manager [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 04:48:12 compute-0 nova_compute[259400]: 2025-10-11 04:48:12.647 2 DEBUG nova.compute.manager [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 11 04:48:12 compute-0 nova_compute[259400]: 2025-10-11 04:48:12.664 2 DEBUG nova.compute.manager [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 11 04:48:12 compute-0 nova_compute[259400]: 2025-10-11 04:48:12.665 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:48:12 compute-0 nova_compute[259400]: 2025-10-11 04:48:12.665 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:48:12 compute-0 nova_compute[259400]: 2025-10-11 04:48:12.665 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:48:12 compute-0 nova_compute[259400]: 2025-10-11 04:48:12.665 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:48:12 compute-0 nova_compute[259400]: 2025-10-11 04:48:12.666 2 DEBUG nova.compute.manager [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 04:48:12 compute-0 nova_compute[259400]: 2025-10-11 04:48:12.666 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:48:12 compute-0 nova_compute[259400]: 2025-10-11 04:48:12.704 2 DEBUG oslo_concurrency.lockutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:48:12 compute-0 nova_compute[259400]: 2025-10-11 04:48:12.704 2 DEBUG oslo_concurrency.lockutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:48:12 compute-0 nova_compute[259400]: 2025-10-11 04:48:12.704 2 DEBUG oslo_concurrency.lockutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:48:12 compute-0 nova_compute[259400]: 2025-10-11 04:48:12.705 2 DEBUG nova.compute.resource_tracker [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 04:48:12 compute-0 nova_compute[259400]: 2025-10-11 04:48:12.705 2 DEBUG oslo_concurrency.processutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:48:13 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:48:13 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2490483740' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:48:13 compute-0 nova_compute[259400]: 2025-10-11 04:48:13.133 2 DEBUG oslo_concurrency.processutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:48:13 compute-0 nova_compute[259400]: 2025-10-11 04:48:13.327 2 WARNING nova.virt.libvirt.driver [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 04:48:13 compute-0 nova_compute[259400]: 2025-10-11 04:48:13.329 2 DEBUG nova.compute.resource_tracker [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5175MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 04:48:13 compute-0 nova_compute[259400]: 2025-10-11 04:48:13.330 2 DEBUG oslo_concurrency.lockutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:48:13 compute-0 nova_compute[259400]: 2025-10-11 04:48:13.330 2 DEBUG oslo_concurrency.lockutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:48:13 compute-0 nova_compute[259400]: 2025-10-11 04:48:13.432 2 DEBUG nova.compute.resource_tracker [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 04:48:13 compute-0 nova_compute[259400]: 2025-10-11 04:48:13.433 2 DEBUG nova.compute.resource_tracker [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 04:48:13 compute-0 nova_compute[259400]: 2025-10-11 04:48:13.459 2 DEBUG oslo_concurrency.processutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:48:13 compute-0 ceph-mon[74243]: pgmap v773: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 0 B/s wr, 35 op/s
Oct 11 04:48:13 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/2490483740' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:48:13 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:48:13 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1907957858' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:48:13 compute-0 nova_compute[259400]: 2025-10-11 04:48:13.860 2 DEBUG oslo_concurrency.processutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.400s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:48:13 compute-0 nova_compute[259400]: 2025-10-11 04:48:13.866 2 DEBUG nova.compute.provider_tree [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Inventory has not changed in ProviderTree for provider: 1f05a244-23b6-4149-9b5a-a525e5860d18 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:48:13 compute-0 nova_compute[259400]: 2025-10-11 04:48:13.887 2 DEBUG nova.scheduler.client.report [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Inventory has not changed for provider 1f05a244-23b6-4149-9b5a-a525e5860d18 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:48:13 compute-0 nova_compute[259400]: 2025-10-11 04:48:13.888 2 DEBUG nova.compute.resource_tracker [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 04:48:13 compute-0 nova_compute[259400]: 2025-10-11 04:48:13.889 2 DEBUG oslo_concurrency.lockutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.558s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:48:14 compute-0 nova_compute[259400]: 2025-10-11 04:48:14.420 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:48:14 compute-0 nova_compute[259400]: 2025-10-11 04:48:14.420 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:48:14 compute-0 nova_compute[259400]: 2025-10-11 04:48:14.420 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:48:14 compute-0 podman[262085]: 2025-10-11 04:48:14.423597655 +0000 UTC m=+0.065507260 container health_status 981a12d55b51d26e636fa4bc8b08d383168dc6afe664f12473554b2b0c589967 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 04:48:14 compute-0 podman[262084]: 2025-10-11 04:48:14.43840022 +0000 UTC m=+0.077339990 container health_status 4f5a84ad32cb899a70b226fbd9c8f419e9440a5ac99b188805a8bf8e9253a2d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true)
Oct 11 04:48:14 compute-0 podman[262083]: 2025-10-11 04:48:14.459366172 +0000 UTC m=+0.112538793 container health_status 086abfbe8dbe9e4742c5d0e8540a7653c1c0a5c00ecda3b6e948040a718938cb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009)
Oct 11 04:48:14 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v774: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:48:14 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/1907957858' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:48:15 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:48:15 compute-0 ceph-mon[74243]: pgmap v774: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:48:16 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v775: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:48:17 compute-0 ceph-mon[74243]: pgmap v775: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:48:18 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v776: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:48:19 compute-0 ceph-mon[74243]: pgmap v776: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:48:20 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:48:20 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v777: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:48:20 compute-0 ceph-mon[74243]: pgmap v777: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:48:22 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v778: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:48:23 compute-0 ceph-mon[74243]: pgmap v778: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:48:24 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v779: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:48:25 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:48:25 compute-0 podman[262145]: 2025-10-11 04:48:25.421420277 +0000 UTC m=+0.072951841 container health_status 7d738271425699243ade5f883e5c9759d51b96fd0ce562173990a66d2fbaffdf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 11 04:48:25 compute-0 ceph-mon[74243]: pgmap v779: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:48:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:48:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:48:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:48:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:48:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:48:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:48:26 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v780: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:48:27 compute-0 ceph-mon[74243]: pgmap v780: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:48:27 compute-0 sudo[262166]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:48:27 compute-0 sudo[262166]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:48:27 compute-0 sudo[262166]: pam_unix(sudo:session): session closed for user root
Oct 11 04:48:28 compute-0 sudo[262191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:48:28 compute-0 sudo[262191]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:48:28 compute-0 sudo[262191]: pam_unix(sudo:session): session closed for user root
Oct 11 04:48:28 compute-0 sudo[262216]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:48:28 compute-0 sudo[262216]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:48:28 compute-0 sudo[262216]: pam_unix(sudo:session): session closed for user root
Oct 11 04:48:28 compute-0 sudo[262241]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 04:48:28 compute-0 sudo[262241]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:48:28 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v781: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:48:28 compute-0 sudo[262241]: pam_unix(sudo:session): session closed for user root
Oct 11 04:48:28 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 11 04:48:28 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 11 04:48:28 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:48:28 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:48:28 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:48:28 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:48:28 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:48:28 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:48:28 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev 9bd2bb91-8238-407a-a48f-0197d0c5ee3e does not exist
Oct 11 04:48:28 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev 854b7eda-0de8-406d-b791-1695abad6eaf does not exist
Oct 11 04:48:28 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev 8d0b6290-7b98-4092-9e16-d24635555b70 does not exist
Oct 11 04:48:28 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:48:28 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:48:28 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:48:28 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:48:28 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:48:28 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:48:29 compute-0 sudo[262297]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:48:29 compute-0 sudo[262297]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:48:29 compute-0 sudo[262297]: pam_unix(sudo:session): session closed for user root
Oct 11 04:48:29 compute-0 sudo[262322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:48:29 compute-0 sudo[262322]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:48:29 compute-0 sudo[262322]: pam_unix(sudo:session): session closed for user root
Oct 11 04:48:29 compute-0 sudo[262347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:48:29 compute-0 sudo[262347]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:48:29 compute-0 sudo[262347]: pam_unix(sudo:session): session closed for user root
Oct 11 04:48:29 compute-0 sudo[262372]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 04:48:29 compute-0 sudo[262372]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:48:29 compute-0 ceph-mon[74243]: pgmap v781: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:48:29 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 11 04:48:29 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:48:29 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:48:29 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:48:29 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:48:29 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:48:29 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:48:29 compute-0 podman[262438]: 2025-10-11 04:48:29.682557626 +0000 UTC m=+0.066440298 container create 6de65d73a8cf7da35716ada1304ce0e593cbdec87ac58d4134af12676c44080b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_tesla, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 11 04:48:29 compute-0 systemd[1]: Started libpod-conmon-6de65d73a8cf7da35716ada1304ce0e593cbdec87ac58d4134af12676c44080b.scope.
Oct 11 04:48:29 compute-0 podman[262438]: 2025-10-11 04:48:29.653652521 +0000 UTC m=+0.037535223 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:48:29 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:48:29 compute-0 podman[262438]: 2025-10-11 04:48:29.79868204 +0000 UTC m=+0.182564762 container init 6de65d73a8cf7da35716ada1304ce0e593cbdec87ac58d4134af12676c44080b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 11 04:48:29 compute-0 podman[262438]: 2025-10-11 04:48:29.812764803 +0000 UTC m=+0.196647485 container start 6de65d73a8cf7da35716ada1304ce0e593cbdec87ac58d4134af12676c44080b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_tesla, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:48:29 compute-0 podman[262438]: 2025-10-11 04:48:29.817230375 +0000 UTC m=+0.201113097 container attach 6de65d73a8cf7da35716ada1304ce0e593cbdec87ac58d4134af12676c44080b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 11 04:48:29 compute-0 frosty_tesla[262454]: 167 167
Oct 11 04:48:29 compute-0 systemd[1]: libpod-6de65d73a8cf7da35716ada1304ce0e593cbdec87ac58d4134af12676c44080b.scope: Deactivated successfully.
Oct 11 04:48:29 compute-0 podman[262438]: 2025-10-11 04:48:29.823697147 +0000 UTC m=+0.207579799 container died 6de65d73a8cf7da35716ada1304ce0e593cbdec87ac58d4134af12676c44080b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_tesla, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:48:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-319614686bf30bf01566d8d219f333f6030a7d8cfee6a57eb7e7363d12f0646a-merged.mount: Deactivated successfully.
Oct 11 04:48:29 compute-0 podman[262438]: 2025-10-11 04:48:29.867735382 +0000 UTC m=+0.251618034 container remove 6de65d73a8cf7da35716ada1304ce0e593cbdec87ac58d4134af12676c44080b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_tesla, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 11 04:48:29 compute-0 systemd[1]: libpod-conmon-6de65d73a8cf7da35716ada1304ce0e593cbdec87ac58d4134af12676c44080b.scope: Deactivated successfully.
Oct 11 04:48:30 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:48:30 compute-0 podman[262478]: 2025-10-11 04:48:30.130483394 +0000 UTC m=+0.070983722 container create a0c5e71e6aff97060fe4145d9fee98650ba6ec5b259c0323a99c033c3fbcd4df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_johnson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:48:30 compute-0 systemd[1]: Started libpod-conmon-a0c5e71e6aff97060fe4145d9fee98650ba6ec5b259c0323a99c033c3fbcd4df.scope.
Oct 11 04:48:30 compute-0 podman[262478]: 2025-10-11 04:48:30.100210565 +0000 UTC m=+0.040710933 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:48:30 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:48:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c28ecc12f5406adebf0c92787db0bd7184fe8b60c3da618c4b2b263206d7dba/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:48:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c28ecc12f5406adebf0c92787db0bd7184fe8b60c3da618c4b2b263206d7dba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:48:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c28ecc12f5406adebf0c92787db0bd7184fe8b60c3da618c4b2b263206d7dba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:48:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c28ecc12f5406adebf0c92787db0bd7184fe8b60c3da618c4b2b263206d7dba/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:48:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c28ecc12f5406adebf0c92787db0bd7184fe8b60c3da618c4b2b263206d7dba/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:48:30 compute-0 podman[262478]: 2025-10-11 04:48:30.249759647 +0000 UTC m=+0.190260005 container init a0c5e71e6aff97060fe4145d9fee98650ba6ec5b259c0323a99c033c3fbcd4df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_johnson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 11 04:48:30 compute-0 podman[262478]: 2025-10-11 04:48:30.262073636 +0000 UTC m=+0.202573954 container start a0c5e71e6aff97060fe4145d9fee98650ba6ec5b259c0323a99c033c3fbcd4df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_johnson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Oct 11 04:48:30 compute-0 podman[262478]: 2025-10-11 04:48:30.266213729 +0000 UTC m=+0.206714097 container attach a0c5e71e6aff97060fe4145d9fee98650ba6ec5b259c0323a99c033c3fbcd4df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_johnson, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:48:30 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v782: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:48:31 compute-0 focused_johnson[262494]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:48:31 compute-0 focused_johnson[262494]: --> relative data size: 1.0
Oct 11 04:48:31 compute-0 focused_johnson[262494]: --> All data devices are unavailable
Oct 11 04:48:31 compute-0 systemd[1]: libpod-a0c5e71e6aff97060fe4145d9fee98650ba6ec5b259c0323a99c033c3fbcd4df.scope: Deactivated successfully.
Oct 11 04:48:31 compute-0 podman[262478]: 2025-10-11 04:48:31.40860374 +0000 UTC m=+1.349104098 container died a0c5e71e6aff97060fe4145d9fee98650ba6ec5b259c0323a99c033c3fbcd4df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_johnson, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:48:31 compute-0 systemd[1]: libpod-a0c5e71e6aff97060fe4145d9fee98650ba6ec5b259c0323a99c033c3fbcd4df.scope: Consumed 1.106s CPU time.
Oct 11 04:48:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-1c28ecc12f5406adebf0c92787db0bd7184fe8b60c3da618c4b2b263206d7dba-merged.mount: Deactivated successfully.
Oct 11 04:48:31 compute-0 podman[262478]: 2025-10-11 04:48:31.481646393 +0000 UTC m=+1.422146691 container remove a0c5e71e6aff97060fe4145d9fee98650ba6ec5b259c0323a99c033c3fbcd4df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_johnson, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 11 04:48:31 compute-0 systemd[1]: libpod-conmon-a0c5e71e6aff97060fe4145d9fee98650ba6ec5b259c0323a99c033c3fbcd4df.scope: Deactivated successfully.
Oct 11 04:48:31 compute-0 sudo[262372]: pam_unix(sudo:session): session closed for user root
Oct 11 04:48:31 compute-0 sudo[262534]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:48:31 compute-0 sudo[262534]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:48:31 compute-0 sudo[262534]: pam_unix(sudo:session): session closed for user root
Oct 11 04:48:31 compute-0 ceph-mon[74243]: pgmap v782: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:48:31 compute-0 sudo[262559]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:48:31 compute-0 sudo[262559]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:48:31 compute-0 sudo[262559]: pam_unix(sudo:session): session closed for user root
Oct 11 04:48:31 compute-0 sudo[262584]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:48:31 compute-0 sudo[262584]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:48:31 compute-0 sudo[262584]: pam_unix(sudo:session): session closed for user root
Oct 11 04:48:31 compute-0 sudo[262609]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -- lvm list --format json
Oct 11 04:48:31 compute-0 sudo[262609]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:48:32 compute-0 podman[262673]: 2025-10-11 04:48:32.230555582 +0000 UTC m=+0.052919528 container create 38b677883618f5f80b5cc8c1cbfec652f9c98fedcec84c436a5c6768d1c4607b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_sanderson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 11 04:48:32 compute-0 systemd[1]: Started libpod-conmon-38b677883618f5f80b5cc8c1cbfec652f9c98fedcec84c436a5c6768d1c4607b.scope.
Oct 11 04:48:32 compute-0 podman[262673]: 2025-10-11 04:48:32.205994586 +0000 UTC m=+0.028358582 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:48:32 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:48:32 compute-0 podman[262673]: 2025-10-11 04:48:32.328290224 +0000 UTC m=+0.150654210 container init 38b677883618f5f80b5cc8c1cbfec652f9c98fedcec84c436a5c6768d1c4607b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_sanderson, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:48:32 compute-0 podman[262673]: 2025-10-11 04:48:32.340508031 +0000 UTC m=+0.162871967 container start 38b677883618f5f80b5cc8c1cbfec652f9c98fedcec84c436a5c6768d1c4607b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_sanderson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Oct 11 04:48:32 compute-0 podman[262673]: 2025-10-11 04:48:32.344779438 +0000 UTC m=+0.167143434 container attach 38b677883618f5f80b5cc8c1cbfec652f9c98fedcec84c436a5c6768d1c4607b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_sanderson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:48:32 compute-0 optimistic_sanderson[262690]: 167 167
Oct 11 04:48:32 compute-0 systemd[1]: libpod-38b677883618f5f80b5cc8c1cbfec652f9c98fedcec84c436a5c6768d1c4607b.scope: Deactivated successfully.
Oct 11 04:48:32 compute-0 podman[262673]: 2025-10-11 04:48:32.348225625 +0000 UTC m=+0.170589571 container died 38b677883618f5f80b5cc8c1cbfec652f9c98fedcec84c436a5c6768d1c4607b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_sanderson, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:48:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-58de2f58c650b5e34b08877bc4dbd19a2b304f09639c6d621df7938e2bf2c6f1-merged.mount: Deactivated successfully.
Oct 11 04:48:32 compute-0 podman[262673]: 2025-10-11 04:48:32.399574703 +0000 UTC m=+0.221938639 container remove 38b677883618f5f80b5cc8c1cbfec652f9c98fedcec84c436a5c6768d1c4607b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_sanderson, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 11 04:48:32 compute-0 systemd[1]: libpod-conmon-38b677883618f5f80b5cc8c1cbfec652f9c98fedcec84c436a5c6768d1c4607b.scope: Deactivated successfully.
Oct 11 04:48:32 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v783: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:48:32 compute-0 podman[262714]: 2025-10-11 04:48:32.644281962 +0000 UTC m=+0.064214352 container create 2e63d7160a830b32b45e1d91af7c8b5b52a141a4839e82dc7d782a5ade6205f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_taussig, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:48:32 compute-0 systemd[1]: Started libpod-conmon-2e63d7160a830b32b45e1d91af7c8b5b52a141a4839e82dc7d782a5ade6205f2.scope.
Oct 11 04:48:32 compute-0 podman[262714]: 2025-10-11 04:48:32.618203778 +0000 UTC m=+0.038136228 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:48:32 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:48:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2179e09bbffa264f25eb5367372ccaa1b55ae984fec983eaef15d80a2d352caf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:48:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2179e09bbffa264f25eb5367372ccaa1b55ae984fec983eaef15d80a2d352caf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:48:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2179e09bbffa264f25eb5367372ccaa1b55ae984fec983eaef15d80a2d352caf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:48:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2179e09bbffa264f25eb5367372ccaa1b55ae984fec983eaef15d80a2d352caf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:48:32 compute-0 podman[262714]: 2025-10-11 04:48:32.755999465 +0000 UTC m=+0.175931915 container init 2e63d7160a830b32b45e1d91af7c8b5b52a141a4839e82dc7d782a5ade6205f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_taussig, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 11 04:48:32 compute-0 podman[262714]: 2025-10-11 04:48:32.769430722 +0000 UTC m=+0.189363112 container start 2e63d7160a830b32b45e1d91af7c8b5b52a141a4839e82dc7d782a5ade6205f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_taussig, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 11 04:48:32 compute-0 podman[262714]: 2025-10-11 04:48:32.773575416 +0000 UTC m=+0.193507816 container attach 2e63d7160a830b32b45e1d91af7c8b5b52a141a4839e82dc7d782a5ade6205f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_taussig, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:48:33 compute-0 confident_taussig[262731]: {
Oct 11 04:48:33 compute-0 confident_taussig[262731]:     "0": [
Oct 11 04:48:33 compute-0 confident_taussig[262731]:         {
Oct 11 04:48:33 compute-0 confident_taussig[262731]:             "devices": [
Oct 11 04:48:33 compute-0 confident_taussig[262731]:                 "/dev/loop3"
Oct 11 04:48:33 compute-0 confident_taussig[262731]:             ],
Oct 11 04:48:33 compute-0 confident_taussig[262731]:             "lv_name": "ceph_lv0",
Oct 11 04:48:33 compute-0 confident_taussig[262731]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:48:33 compute-0 confident_taussig[262731]:             "lv_size": "21470642176",
Oct 11 04:48:33 compute-0 confident_taussig[262731]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29ed28f5-c2da-4c6f-bb64-dc7391248f4a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:48:33 compute-0 confident_taussig[262731]:             "lv_uuid": "SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf",
Oct 11 04:48:33 compute-0 confident_taussig[262731]:             "name": "ceph_lv0",
Oct 11 04:48:33 compute-0 confident_taussig[262731]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:48:33 compute-0 confident_taussig[262731]:             "tags": {
Oct 11 04:48:33 compute-0 confident_taussig[262731]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:48:33 compute-0 confident_taussig[262731]:                 "ceph.block_uuid": "SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf",
Oct 11 04:48:33 compute-0 confident_taussig[262731]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:48:33 compute-0 confident_taussig[262731]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:48:33 compute-0 confident_taussig[262731]:                 "ceph.cluster_name": "ceph",
Oct 11 04:48:33 compute-0 confident_taussig[262731]:                 "ceph.crush_device_class": "",
Oct 11 04:48:33 compute-0 confident_taussig[262731]:                 "ceph.encrypted": "0",
Oct 11 04:48:33 compute-0 confident_taussig[262731]:                 "ceph.osd_fsid": "29ed28f5-c2da-4c6f-bb64-dc7391248f4a",
Oct 11 04:48:33 compute-0 confident_taussig[262731]:                 "ceph.osd_id": "0",
Oct 11 04:48:33 compute-0 confident_taussig[262731]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:48:33 compute-0 confident_taussig[262731]:                 "ceph.type": "block",
Oct 11 04:48:33 compute-0 confident_taussig[262731]:                 "ceph.vdo": "0"
Oct 11 04:48:33 compute-0 confident_taussig[262731]:             },
Oct 11 04:48:33 compute-0 confident_taussig[262731]:             "type": "block",
Oct 11 04:48:33 compute-0 confident_taussig[262731]:             "vg_name": "ceph_vg0"
Oct 11 04:48:33 compute-0 confident_taussig[262731]:         }
Oct 11 04:48:33 compute-0 confident_taussig[262731]:     ],
Oct 11 04:48:33 compute-0 confident_taussig[262731]:     "1": [
Oct 11 04:48:33 compute-0 confident_taussig[262731]:         {
Oct 11 04:48:33 compute-0 confident_taussig[262731]:             "devices": [
Oct 11 04:48:33 compute-0 confident_taussig[262731]:                 "/dev/loop4"
Oct 11 04:48:33 compute-0 confident_taussig[262731]:             ],
Oct 11 04:48:33 compute-0 confident_taussig[262731]:             "lv_name": "ceph_lv1",
Oct 11 04:48:33 compute-0 confident_taussig[262731]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:48:33 compute-0 confident_taussig[262731]:             "lv_size": "21470642176",
Oct 11 04:48:33 compute-0 confident_taussig[262731]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=235849fc-4683-43e5-9b6a-a0d6f8d1cee8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:48:33 compute-0 confident_taussig[262731]:             "lv_uuid": "N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol",
Oct 11 04:48:33 compute-0 confident_taussig[262731]:             "name": "ceph_lv1",
Oct 11 04:48:33 compute-0 confident_taussig[262731]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:48:33 compute-0 confident_taussig[262731]:             "tags": {
Oct 11 04:48:33 compute-0 confident_taussig[262731]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:48:33 compute-0 confident_taussig[262731]:                 "ceph.block_uuid": "N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol",
Oct 11 04:48:33 compute-0 confident_taussig[262731]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:48:33 compute-0 confident_taussig[262731]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:48:33 compute-0 confident_taussig[262731]:                 "ceph.cluster_name": "ceph",
Oct 11 04:48:33 compute-0 confident_taussig[262731]:                 "ceph.crush_device_class": "",
Oct 11 04:48:33 compute-0 confident_taussig[262731]:                 "ceph.encrypted": "0",
Oct 11 04:48:33 compute-0 confident_taussig[262731]:                 "ceph.osd_fsid": "235849fc-4683-43e5-9b6a-a0d6f8d1cee8",
Oct 11 04:48:33 compute-0 confident_taussig[262731]:                 "ceph.osd_id": "1",
Oct 11 04:48:33 compute-0 confident_taussig[262731]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:48:33 compute-0 confident_taussig[262731]:                 "ceph.type": "block",
Oct 11 04:48:33 compute-0 confident_taussig[262731]:                 "ceph.vdo": "0"
Oct 11 04:48:33 compute-0 confident_taussig[262731]:             },
Oct 11 04:48:33 compute-0 confident_taussig[262731]:             "type": "block",
Oct 11 04:48:33 compute-0 confident_taussig[262731]:             "vg_name": "ceph_vg1"
Oct 11 04:48:33 compute-0 confident_taussig[262731]:         }
Oct 11 04:48:33 compute-0 confident_taussig[262731]:     ],
Oct 11 04:48:33 compute-0 confident_taussig[262731]:     "2": [
Oct 11 04:48:33 compute-0 confident_taussig[262731]:         {
Oct 11 04:48:33 compute-0 confident_taussig[262731]:             "devices": [
Oct 11 04:48:33 compute-0 confident_taussig[262731]:                 "/dev/loop5"
Oct 11 04:48:33 compute-0 confident_taussig[262731]:             ],
Oct 11 04:48:33 compute-0 confident_taussig[262731]:             "lv_name": "ceph_lv2",
Oct 11 04:48:33 compute-0 confident_taussig[262731]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:48:33 compute-0 confident_taussig[262731]:             "lv_size": "21470642176",
Oct 11 04:48:33 compute-0 confident_taussig[262731]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=30486846-2cc7-4787-8e36-0fb42ef328c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:48:33 compute-0 confident_taussig[262731]:             "lv_uuid": "HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a",
Oct 11 04:48:33 compute-0 confident_taussig[262731]:             "name": "ceph_lv2",
Oct 11 04:48:33 compute-0 confident_taussig[262731]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:48:33 compute-0 confident_taussig[262731]:             "tags": {
Oct 11 04:48:33 compute-0 confident_taussig[262731]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:48:33 compute-0 confident_taussig[262731]:                 "ceph.block_uuid": "HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a",
Oct 11 04:48:33 compute-0 confident_taussig[262731]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:48:33 compute-0 confident_taussig[262731]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:48:33 compute-0 confident_taussig[262731]:                 "ceph.cluster_name": "ceph",
Oct 11 04:48:33 compute-0 confident_taussig[262731]:                 "ceph.crush_device_class": "",
Oct 11 04:48:33 compute-0 confident_taussig[262731]:                 "ceph.encrypted": "0",
Oct 11 04:48:33 compute-0 confident_taussig[262731]:                 "ceph.osd_fsid": "30486846-2cc7-4787-8e36-0fb42ef328c5",
Oct 11 04:48:33 compute-0 confident_taussig[262731]:                 "ceph.osd_id": "2",
Oct 11 04:48:33 compute-0 confident_taussig[262731]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:48:33 compute-0 confident_taussig[262731]:                 "ceph.type": "block",
Oct 11 04:48:33 compute-0 confident_taussig[262731]:                 "ceph.vdo": "0"
Oct 11 04:48:33 compute-0 confident_taussig[262731]:             },
Oct 11 04:48:33 compute-0 confident_taussig[262731]:             "type": "block",
Oct 11 04:48:33 compute-0 confident_taussig[262731]:             "vg_name": "ceph_vg2"
Oct 11 04:48:33 compute-0 confident_taussig[262731]:         }
Oct 11 04:48:33 compute-0 confident_taussig[262731]:     ]
Oct 11 04:48:33 compute-0 confident_taussig[262731]: }
Oct 11 04:48:33 compute-0 systemd[1]: libpod-2e63d7160a830b32b45e1d91af7c8b5b52a141a4839e82dc7d782a5ade6205f2.scope: Deactivated successfully.
Oct 11 04:48:33 compute-0 podman[262714]: 2025-10-11 04:48:33.524895777 +0000 UTC m=+0.944828217 container died 2e63d7160a830b32b45e1d91af7c8b5b52a141a4839e82dc7d782a5ade6205f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_taussig, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True)
Oct 11 04:48:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-2179e09bbffa264f25eb5367372ccaa1b55ae984fec983eaef15d80a2d352caf-merged.mount: Deactivated successfully.
Oct 11 04:48:33 compute-0 podman[262714]: 2025-10-11 04:48:33.597290713 +0000 UTC m=+1.017223073 container remove 2e63d7160a830b32b45e1d91af7c8b5b52a141a4839e82dc7d782a5ade6205f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_taussig, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507)
Oct 11 04:48:33 compute-0 systemd[1]: libpod-conmon-2e63d7160a830b32b45e1d91af7c8b5b52a141a4839e82dc7d782a5ade6205f2.scope: Deactivated successfully.
Oct 11 04:48:33 compute-0 sudo[262609]: pam_unix(sudo:session): session closed for user root
Oct 11 04:48:33 compute-0 ceph-mon[74243]: pgmap v783: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:48:33 compute-0 sudo[262754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:48:33 compute-0 sudo[262754]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:48:33 compute-0 sudo[262754]: pam_unix(sudo:session): session closed for user root
Oct 11 04:48:33 compute-0 sudo[262779]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:48:33 compute-0 sudo[262779]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:48:33 compute-0 sudo[262779]: pam_unix(sudo:session): session closed for user root
Oct 11 04:48:33 compute-0 sudo[262804]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:48:33 compute-0 sudo[262804]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:48:33 compute-0 sudo[262804]: pam_unix(sudo:session): session closed for user root
Oct 11 04:48:33 compute-0 sudo[262829]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -- raw list --format json
Oct 11 04:48:33 compute-0 sudo[262829]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:48:34 compute-0 podman[262894]: 2025-10-11 04:48:34.380527433 +0000 UTC m=+0.075472395 container create 09fe4e18cdbb743553d126f9a8a58249b351fa0bdae175b9bf0ba1475b5090d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_pike, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:48:34 compute-0 systemd[1]: Started libpod-conmon-09fe4e18cdbb743553d126f9a8a58249b351fa0bdae175b9bf0ba1475b5090d9.scope.
Oct 11 04:48:34 compute-0 podman[262894]: 2025-10-11 04:48:34.352145511 +0000 UTC m=+0.047090563 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:48:34 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:48:34 compute-0 podman[262894]: 2025-10-11 04:48:34.470386077 +0000 UTC m=+0.165331079 container init 09fe4e18cdbb743553d126f9a8a58249b351fa0bdae175b9bf0ba1475b5090d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_pike, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:48:34 compute-0 podman[262894]: 2025-10-11 04:48:34.481218489 +0000 UTC m=+0.176163451 container start 09fe4e18cdbb743553d126f9a8a58249b351fa0bdae175b9bf0ba1475b5090d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_pike, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:48:34 compute-0 podman[262894]: 2025-10-11 04:48:34.484516152 +0000 UTC m=+0.179461144 container attach 09fe4e18cdbb743553d126f9a8a58249b351fa0bdae175b9bf0ba1475b5090d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_pike, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:48:34 compute-0 condescending_pike[262910]: 167 167
Oct 11 04:48:34 compute-0 systemd[1]: libpod-09fe4e18cdbb743553d126f9a8a58249b351fa0bdae175b9bf0ba1475b5090d9.scope: Deactivated successfully.
Oct 11 04:48:34 compute-0 podman[262894]: 2025-10-11 04:48:34.487452845 +0000 UTC m=+0.182397827 container died 09fe4e18cdbb743553d126f9a8a58249b351fa0bdae175b9bf0ba1475b5090d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_pike, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 11 04:48:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-26eb7eb72d6789f4905a42e127247290e46c507ec91c63725afa40b48c750838-merged.mount: Deactivated successfully.
Oct 11 04:48:34 compute-0 podman[262894]: 2025-10-11 04:48:34.530097205 +0000 UTC m=+0.225042197 container remove 09fe4e18cdbb743553d126f9a8a58249b351fa0bdae175b9bf0ba1475b5090d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_pike, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:48:34 compute-0 systemd[1]: libpod-conmon-09fe4e18cdbb743553d126f9a8a58249b351fa0bdae175b9bf0ba1475b5090d9.scope: Deactivated successfully.
Oct 11 04:48:34 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v784: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:48:34 compute-0 ceph-mon[74243]: pgmap v784: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:48:34 compute-0 podman[262933]: 2025-10-11 04:48:34.744196457 +0000 UTC m=+0.047947644 container create 47820b8c9c9e9d6849fc2e42743a8317a4314848d01b77299045baf6ebe91469 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_hamilton, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:48:34 compute-0 systemd[1]: Started libpod-conmon-47820b8c9c9e9d6849fc2e42743a8317a4314848d01b77299045baf6ebe91469.scope.
Oct 11 04:48:34 compute-0 podman[262933]: 2025-10-11 04:48:34.72718139 +0000 UTC m=+0.030932597 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:48:34 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:48:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/669c177b30b92342d92fbba379440293b937da0fbbef65aced2f8af45b64a745/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:48:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/669c177b30b92342d92fbba379440293b937da0fbbef65aced2f8af45b64a745/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:48:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/669c177b30b92342d92fbba379440293b937da0fbbef65aced2f8af45b64a745/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:48:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/669c177b30b92342d92fbba379440293b937da0fbbef65aced2f8af45b64a745/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:48:34 compute-0 podman[262933]: 2025-10-11 04:48:34.85514113 +0000 UTC m=+0.158892367 container init 47820b8c9c9e9d6849fc2e42743a8317a4314848d01b77299045baf6ebe91469 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_hamilton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 11 04:48:34 compute-0 podman[262933]: 2025-10-11 04:48:34.865649244 +0000 UTC m=+0.169400461 container start 47820b8c9c9e9d6849fc2e42743a8317a4314848d01b77299045baf6ebe91469 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_hamilton, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:48:34 compute-0 podman[262933]: 2025-10-11 04:48:34.870151997 +0000 UTC m=+0.173903184 container attach 47820b8c9c9e9d6849fc2e42743a8317a4314848d01b77299045baf6ebe91469 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_hamilton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:48:35 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:48:35 compute-0 reverent_hamilton[262950]: {
Oct 11 04:48:35 compute-0 reverent_hamilton[262950]:     "235849fc-4683-43e5-9b6a-a0d6f8d1cee8": {
Oct 11 04:48:35 compute-0 reverent_hamilton[262950]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:48:35 compute-0 reverent_hamilton[262950]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:48:35 compute-0 reverent_hamilton[262950]:         "osd_id": 1,
Oct 11 04:48:35 compute-0 reverent_hamilton[262950]:         "osd_uuid": "235849fc-4683-43e5-9b6a-a0d6f8d1cee8",
Oct 11 04:48:35 compute-0 reverent_hamilton[262950]:         "type": "bluestore"
Oct 11 04:48:35 compute-0 reverent_hamilton[262950]:     },
Oct 11 04:48:35 compute-0 reverent_hamilton[262950]:     "29ed28f5-c2da-4c6f-bb64-dc7391248f4a": {
Oct 11 04:48:35 compute-0 reverent_hamilton[262950]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:48:35 compute-0 reverent_hamilton[262950]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:48:35 compute-0 reverent_hamilton[262950]:         "osd_id": 0,
Oct 11 04:48:35 compute-0 reverent_hamilton[262950]:         "osd_uuid": "29ed28f5-c2da-4c6f-bb64-dc7391248f4a",
Oct 11 04:48:35 compute-0 reverent_hamilton[262950]:         "type": "bluestore"
Oct 11 04:48:35 compute-0 reverent_hamilton[262950]:     },
Oct 11 04:48:35 compute-0 reverent_hamilton[262950]:     "30486846-2cc7-4787-8e36-0fb42ef328c5": {
Oct 11 04:48:35 compute-0 reverent_hamilton[262950]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:48:35 compute-0 reverent_hamilton[262950]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:48:35 compute-0 reverent_hamilton[262950]:         "osd_id": 2,
Oct 11 04:48:35 compute-0 reverent_hamilton[262950]:         "osd_uuid": "30486846-2cc7-4787-8e36-0fb42ef328c5",
Oct 11 04:48:35 compute-0 reverent_hamilton[262950]:         "type": "bluestore"
Oct 11 04:48:35 compute-0 reverent_hamilton[262950]:     }
Oct 11 04:48:35 compute-0 reverent_hamilton[262950]: }
Oct 11 04:48:35 compute-0 systemd[1]: libpod-47820b8c9c9e9d6849fc2e42743a8317a4314848d01b77299045baf6ebe91469.scope: Deactivated successfully.
Oct 11 04:48:35 compute-0 podman[262933]: 2025-10-11 04:48:35.805875924 +0000 UTC m=+1.109627131 container died 47820b8c9c9e9d6849fc2e42743a8317a4314848d01b77299045baf6ebe91469 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_hamilton, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:48:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-669c177b30b92342d92fbba379440293b937da0fbbef65aced2f8af45b64a745-merged.mount: Deactivated successfully.
Oct 11 04:48:35 compute-0 podman[262933]: 2025-10-11 04:48:35.882225919 +0000 UTC m=+1.185977126 container remove 47820b8c9c9e9d6849fc2e42743a8317a4314848d01b77299045baf6ebe91469 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_hamilton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 11 04:48:35 compute-0 systemd[1]: libpod-conmon-47820b8c9c9e9d6849fc2e42743a8317a4314848d01b77299045baf6ebe91469.scope: Deactivated successfully.
Oct 11 04:48:35 compute-0 sudo[262829]: pam_unix(sudo:session): session closed for user root
Oct 11 04:48:35 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:48:35 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:48:35 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:48:35 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:48:35 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev 0c859391-2a57-48ac-afc0-685680215a97 does not exist
Oct 11 04:48:35 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev 306812ba-c09e-452d-843c-ca2460fc65da does not exist
Oct 11 04:48:36 compute-0 sudo[262997]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:48:36 compute-0 sudo[262997]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:48:36 compute-0 sudo[262997]: pam_unix(sudo:session): session closed for user root
Oct 11 04:48:36 compute-0 sudo[263022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 04:48:36 compute-0 sudo[263022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:48:36 compute-0 sudo[263022]: pam_unix(sudo:session): session closed for user root
Oct 11 04:48:36 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v785: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:48:36 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:48:36 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:48:36 compute-0 ceph-mon[74243]: pgmap v785: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:48:38 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v786: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:48:39 compute-0 ceph-mon[74243]: pgmap v786: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:48:40 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:48:40 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v787: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:48:41 compute-0 ceph-mon[74243]: pgmap v787: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:48:42 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v788: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:48:42 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:48:42.941 161813 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:88:88', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '96:43:b2:79:d5:95'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:48:42 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:48:42.944 161813 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 11 04:48:42 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:48:42.945 161813 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=2ff6420e-86e1-487c-bef9-adac80b75ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:48:43 compute-0 ceph-mon[74243]: pgmap v788: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:48:44 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v789: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:48:45 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:48:45 compute-0 podman[263048]: 2025-10-11 04:48:45.437820879 +0000 UTC m=+0.082642235 container health_status 4f5a84ad32cb899a70b226fbd9c8f419e9440a5ac99b188805a8bf8e9253a2d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 04:48:45 compute-0 podman[263049]: 2025-10-11 04:48:45.485674508 +0000 UTC m=+0.126025472 container health_status 981a12d55b51d26e636fa4bc8b08d383168dc6afe664f12473554b2b0c589967 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=multipathd)
Oct 11 04:48:45 compute-0 podman[263047]: 2025-10-11 04:48:45.50723216 +0000 UTC m=+0.155427841 container health_status 086abfbe8dbe9e4742c5d0e8540a7653c1c0a5c00ecda3b6e948040a718938cb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 04:48:45 compute-0 ceph-mon[74243]: pgmap v789: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:48:46 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v790: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:48:47 compute-0 ceph-mon[74243]: pgmap v790: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:48:48 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v791: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:48:48 compute-0 ceph-mon[74243]: pgmap v791: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:48:50 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:48:50 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v792: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:48:51 compute-0 ceph-mon[74243]: pgmap v792: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:48:52 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v793: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:48:53 compute-0 ceph-mon[74243]: pgmap v793: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:48:54 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v794: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:48:54 compute-0 ceph-mon[74243]: pgmap v794: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:48:55 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:48:56 compute-0 ceph-mgr[74542]: [balancer INFO root] Optimize plan auto_2025-10-11_04:48:56
Oct 11 04:48:56 compute-0 ceph-mgr[74542]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:48:56 compute-0 ceph-mgr[74542]: [balancer INFO root] do_upmap
Oct 11 04:48:56 compute-0 ceph-mgr[74542]: [balancer INFO root] pools ['volumes', 'default.rgw.log', 'images', 'backups', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.meta', '.mgr', 'default.rgw.control', 'cephfs.cephfs.data', 'vms']
Oct 11 04:48:56 compute-0 ceph-mgr[74542]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:48:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:48:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:48:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:48:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:48:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:48:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:48:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:48:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:48:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:48:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:48:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:48:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:48:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:48:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:48:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:48:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:48:56 compute-0 podman[263114]: 2025-10-11 04:48:56.443515332 +0000 UTC m=+0.085275070 container health_status 7d738271425699243ade5f883e5c9759d51b96fd0ce562173990a66d2fbaffdf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251009)
Oct 11 04:48:56 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v795: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:48:57 compute-0 ceph-mon[74243]: pgmap v795: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:48:58 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v796: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:48:59 compute-0 ceph-mon[74243]: pgmap v796: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:49:00 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:49:00 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v797: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:49:01 compute-0 ceph-mon[74243]: pgmap v797: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:49:02 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v798: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:49:02 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:49:02 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/298514843' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:49:02 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:49:02 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/298514843' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:49:03 compute-0 ceph-mon[74243]: pgmap v798: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:49:03 compute-0 ceph-mon[74243]: from='client.? 192.168.122.10:0/298514843' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:49:03 compute-0 ceph-mon[74243]: from='client.? 192.168.122.10:0/298514843' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:49:04 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v799: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:49:05 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:49:05 compute-0 ceph-mon[74243]: pgmap v799: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:49:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:49:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:49:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:49:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:49:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:49:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:49:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:49:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:49:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:49:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:49:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:49:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:49:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 11 04:49:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:49:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:49:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:49:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:49:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:49:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:49:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:49:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:49:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:49:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 04:49:06 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v800: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:49:07 compute-0 ceph-mon[74243]: pgmap v800: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:49:08 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v801: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:49:08 compute-0 ceph-mon[74243]: pgmap v801: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:49:10 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:49:10 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v802: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:49:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:49:11.013 161813 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:49:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:49:11.013 161813 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:49:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:49:11.014 161813 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:49:11 compute-0 nova_compute[259400]: 2025-10-11 04:49:11.196 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:49:11 compute-0 nova_compute[259400]: 2025-10-11 04:49:11.197 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:49:11 compute-0 ceph-mon[74243]: pgmap v802: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:49:12 compute-0 nova_compute[259400]: 2025-10-11 04:49:12.196 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:49:12 compute-0 nova_compute[259400]: 2025-10-11 04:49:12.197 2 DEBUG nova.compute.manager [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 04:49:12 compute-0 nova_compute[259400]: 2025-10-11 04:49:12.197 2 DEBUG nova.compute.manager [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 11 04:49:12 compute-0 nova_compute[259400]: 2025-10-11 04:49:12.219 2 DEBUG nova.compute.manager [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 11 04:49:12 compute-0 nova_compute[259400]: 2025-10-11 04:49:12.219 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:49:12 compute-0 nova_compute[259400]: 2025-10-11 04:49:12.220 2 DEBUG nova.compute.manager [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 04:49:12 compute-0 nova_compute[259400]: 2025-10-11 04:49:12.220 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:49:12 compute-0 nova_compute[259400]: 2025-10-11 04:49:12.253 2 DEBUG oslo_concurrency.lockutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:49:12 compute-0 nova_compute[259400]: 2025-10-11 04:49:12.254 2 DEBUG oslo_concurrency.lockutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:49:12 compute-0 nova_compute[259400]: 2025-10-11 04:49:12.255 2 DEBUG oslo_concurrency.lockutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:49:12 compute-0 nova_compute[259400]: 2025-10-11 04:49:12.255 2 DEBUG nova.compute.resource_tracker [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 04:49:12 compute-0 nova_compute[259400]: 2025-10-11 04:49:12.256 2 DEBUG oslo_concurrency.processutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:49:12 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v803: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:49:12 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:49:12 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/153333341' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:49:12 compute-0 nova_compute[259400]: 2025-10-11 04:49:12.724 2 DEBUG oslo_concurrency.processutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:49:12 compute-0 nova_compute[259400]: 2025-10-11 04:49:12.899 2 WARNING nova.virt.libvirt.driver [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 04:49:12 compute-0 nova_compute[259400]: 2025-10-11 04:49:12.900 2 DEBUG nova.compute.resource_tracker [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5172MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 04:49:12 compute-0 nova_compute[259400]: 2025-10-11 04:49:12.901 2 DEBUG oslo_concurrency.lockutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:49:12 compute-0 nova_compute[259400]: 2025-10-11 04:49:12.901 2 DEBUG oslo_concurrency.lockutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:49:12 compute-0 nova_compute[259400]: 2025-10-11 04:49:12.997 2 DEBUG nova.compute.resource_tracker [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 04:49:12 compute-0 nova_compute[259400]: 2025-10-11 04:49:12.997 2 DEBUG nova.compute.resource_tracker [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 04:49:13 compute-0 nova_compute[259400]: 2025-10-11 04:49:13.014 2 DEBUG oslo_concurrency.processutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:49:13 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:49:13 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3875326687' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:49:13 compute-0 nova_compute[259400]: 2025-10-11 04:49:13.424 2 DEBUG oslo_concurrency.processutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.409s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:49:13 compute-0 nova_compute[259400]: 2025-10-11 04:49:13.432 2 DEBUG nova.compute.provider_tree [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Inventory has not changed in ProviderTree for provider: 1f05a244-23b6-4149-9b5a-a525e5860d18 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:49:13 compute-0 nova_compute[259400]: 2025-10-11 04:49:13.464 2 DEBUG nova.scheduler.client.report [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Inventory has not changed for provider 1f05a244-23b6-4149-9b5a-a525e5860d18 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:49:13 compute-0 nova_compute[259400]: 2025-10-11 04:49:13.467 2 DEBUG nova.compute.resource_tracker [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 04:49:13 compute-0 nova_compute[259400]: 2025-10-11 04:49:13.467 2 DEBUG oslo_concurrency.lockutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.566s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:49:13 compute-0 ceph-mon[74243]: pgmap v803: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:49:13 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/153333341' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:49:13 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/3875326687' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:49:14 compute-0 nova_compute[259400]: 2025-10-11 04:49:14.444 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:49:14 compute-0 nova_compute[259400]: 2025-10-11 04:49:14.445 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:49:14 compute-0 nova_compute[259400]: 2025-10-11 04:49:14.446 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:49:14 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v804: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:49:15 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:49:15 compute-0 nova_compute[259400]: 2025-10-11 04:49:15.192 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:49:15 compute-0 ceph-mon[74243]: pgmap v804: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:49:16 compute-0 podman[263177]: 2025-10-11 04:49:16.457520805 +0000 UTC m=+0.097582680 container health_status 4f5a84ad32cb899a70b226fbd9c8f419e9440a5ac99b188805a8bf8e9253a2d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true)
Oct 11 04:49:16 compute-0 podman[263178]: 2025-10-11 04:49:16.466938821 +0000 UTC m=+0.103540069 container health_status 981a12d55b51d26e636fa4bc8b08d383168dc6afe664f12473554b2b0c589967 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 04:49:16 compute-0 podman[263176]: 2025-10-11 04:49:16.506559565 +0000 UTC m=+0.151988214 container health_status 086abfbe8dbe9e4742c5d0e8540a7653c1c0a5c00ecda3b6e948040a718938cb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Oct 11 04:49:16 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v805: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:49:17 compute-0 ceph-mon[74243]: pgmap v805: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:49:18 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v806: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:49:19 compute-0 ceph-mon[74243]: pgmap v806: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:49:20 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:49:20 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v807: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:49:20 compute-0 ceph-mon[74243]: pgmap v807: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:49:22 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v808: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:49:23 compute-0 ceph-mon[74243]: pgmap v808: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:49:24 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v809: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:49:25 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:49:25 compute-0 ceph-mon[74243]: pgmap v809: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:49:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:49:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:49:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:49:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:49:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:49:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:49:26 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v810: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:49:27 compute-0 podman[263242]: 2025-10-11 04:49:27.426846834 +0000 UTC m=+0.079295880 container health_status 7d738271425699243ade5f883e5c9759d51b96fd0ce562173990a66d2fbaffdf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 11 04:49:27 compute-0 ceph-mon[74243]: pgmap v810: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:49:28 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v811: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:49:29 compute-0 ceph-mon[74243]: pgmap v811: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:49:30 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:49:30 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v812: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:49:31 compute-0 ceph-mon[74243]: pgmap v812: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:49:31 compute-0 PackageKit[191266]: daemon quit
Oct 11 04:49:31 compute-0 systemd[1]: packagekit.service: Deactivated successfully.
Oct 11 04:49:32 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v813: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:49:32 compute-0 ceph-mon[74243]: pgmap v813: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:49:34 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v814: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:49:35 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:49:35 compute-0 ceph-mon[74243]: pgmap v814: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:49:36 compute-0 sudo[263261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:49:36 compute-0 sudo[263261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:49:36 compute-0 sudo[263261]: pam_unix(sudo:session): session closed for user root
Oct 11 04:49:36 compute-0 sudo[263286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:49:36 compute-0 sudo[263286]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:49:36 compute-0 sudo[263286]: pam_unix(sudo:session): session closed for user root
Oct 11 04:49:36 compute-0 sudo[263311]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:49:36 compute-0 sudo[263311]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:49:36 compute-0 sudo[263311]: pam_unix(sudo:session): session closed for user root
Oct 11 04:49:36 compute-0 sudo[263336]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 11 04:49:36 compute-0 sudo[263336]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:49:36 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v815: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:49:37 compute-0 podman[263434]: 2025-10-11 04:49:37.129174075 +0000 UTC m=+0.093209369 container exec 9e6c9a4e99dcfe74f2acde1b3251aae6bd833ab8c3a16ed08813556f7d4dbff5 (image=quay.io/ceph/ceph:v18, name=ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mon-compute-0, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 11 04:49:37 compute-0 podman[263434]: 2025-10-11 04:49:37.307372696 +0000 UTC m=+0.271407920 container exec_died 9e6c9a4e99dcfe74f2acde1b3251aae6bd833ab8c3a16ed08813556f7d4dbff5 (image=quay.io/ceph/ceph:v18, name=ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:49:37 compute-0 ceph-mon[74243]: pgmap v815: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:49:38 compute-0 sudo[263336]: pam_unix(sudo:session): session closed for user root
Oct 11 04:49:38 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:49:38 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:49:38 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:49:38 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:49:38 compute-0 sudo[263594]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:49:38 compute-0 sudo[263594]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:49:38 compute-0 sudo[263594]: pam_unix(sudo:session): session closed for user root
Oct 11 04:49:38 compute-0 sudo[263619]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:49:38 compute-0 sudo[263619]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:49:38 compute-0 sudo[263619]: pam_unix(sudo:session): session closed for user root
Oct 11 04:49:38 compute-0 sudo[263644]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:49:38 compute-0 sudo[263644]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:49:38 compute-0 sudo[263644]: pam_unix(sudo:session): session closed for user root
Oct 11 04:49:38 compute-0 sudo[263669]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 04:49:38 compute-0 sudo[263669]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:49:38 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v816: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:49:38 compute-0 sudo[263669]: pam_unix(sudo:session): session closed for user root
Oct 11 04:49:38 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:49:38 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:49:38 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:49:38 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:49:38 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:49:38 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:49:38 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev 03cd01d7-a937-416a-ac42-2479c00bade4 does not exist
Oct 11 04:49:38 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev 987ddd4d-b125-4bd9-920b-2a5f9eaae2bf does not exist
Oct 11 04:49:38 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev 61f2f491-65ef-4e7a-aff5-21a91d4df08c does not exist
Oct 11 04:49:38 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:49:38 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:49:38 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:49:38 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:49:38 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:49:38 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:49:39 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:49:39 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:49:39 compute-0 ceph-mon[74243]: pgmap v816: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:49:39 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:49:39 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:49:39 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:49:39 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:49:39 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:49:39 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:49:39 compute-0 sudo[263725]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:49:39 compute-0 sudo[263725]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:49:39 compute-0 sudo[263725]: pam_unix(sudo:session): session closed for user root
Oct 11 04:49:39 compute-0 sudo[263750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:49:39 compute-0 sudo[263750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:49:39 compute-0 sudo[263750]: pam_unix(sudo:session): session closed for user root
Oct 11 04:49:39 compute-0 sudo[263775]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:49:39 compute-0 sudo[263775]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:49:39 compute-0 sudo[263775]: pam_unix(sudo:session): session closed for user root
Oct 11 04:49:39 compute-0 sudo[263800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 04:49:39 compute-0 sudo[263800]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:49:39 compute-0 podman[263865]: 2025-10-11 04:49:39.689929072 +0000 UTC m=+0.060243003 container create 9b6c37170655b49b78adca272aef4f08253a4afccb9e7e2b1883435fa044f9cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_nash, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 11 04:49:39 compute-0 systemd[1]: Started libpod-conmon-9b6c37170655b49b78adca272aef4f08253a4afccb9e7e2b1883435fa044f9cc.scope.
Oct 11 04:49:39 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:49:39 compute-0 podman[263865]: 2025-10-11 04:49:39.670860143 +0000 UTC m=+0.041174064 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:49:39 compute-0 podman[263865]: 2025-10-11 04:49:39.781258533 +0000 UTC m=+0.151572474 container init 9b6c37170655b49b78adca272aef4f08253a4afccb9e7e2b1883435fa044f9cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_nash, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:49:39 compute-0 podman[263865]: 2025-10-11 04:49:39.788493605 +0000 UTC m=+0.158807536 container start 9b6c37170655b49b78adca272aef4f08253a4afccb9e7e2b1883435fa044f9cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_nash, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:49:39 compute-0 podman[263865]: 2025-10-11 04:49:39.792618008 +0000 UTC m=+0.162931949 container attach 9b6c37170655b49b78adca272aef4f08253a4afccb9e7e2b1883435fa044f9cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_nash, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:49:39 compute-0 dreamy_nash[263881]: 167 167
Oct 11 04:49:39 compute-0 systemd[1]: libpod-9b6c37170655b49b78adca272aef4f08253a4afccb9e7e2b1883435fa044f9cc.scope: Deactivated successfully.
Oct 11 04:49:39 compute-0 podman[263865]: 2025-10-11 04:49:39.794065825 +0000 UTC m=+0.164379726 container died 9b6c37170655b49b78adca272aef4f08253a4afccb9e7e2b1883435fa044f9cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_nash, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:49:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-26a0e649e3692181d41560e39d2a2dbd69b72591ce66284a5cf693bd78ca81fa-merged.mount: Deactivated successfully.
Oct 11 04:49:39 compute-0 podman[263865]: 2025-10-11 04:49:39.829149595 +0000 UTC m=+0.199463486 container remove 9b6c37170655b49b78adca272aef4f08253a4afccb9e7e2b1883435fa044f9cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_nash, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:49:39 compute-0 systemd[1]: libpod-conmon-9b6c37170655b49b78adca272aef4f08253a4afccb9e7e2b1883435fa044f9cc.scope: Deactivated successfully.
Oct 11 04:49:40 compute-0 podman[263904]: 2025-10-11 04:49:40.007752586 +0000 UTC m=+0.068093020 container create 67a9e65f5f310d4d54ee3a4dc9424398426b4d97134fa8e2cdacc824f981299c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_matsumoto, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 11 04:49:40 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:49:40 compute-0 systemd[1]: Started libpod-conmon-67a9e65f5f310d4d54ee3a4dc9424398426b4d97134fa8e2cdacc824f981299c.scope.
Oct 11 04:49:40 compute-0 podman[263904]: 2025-10-11 04:49:39.979836145 +0000 UTC m=+0.040176619 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:49:40 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:49:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a776a5363e2ddd870724ab6c4a0484fa8accfb682361b5d206bb9f40ada517ca/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:49:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a776a5363e2ddd870724ab6c4a0484fa8accfb682361b5d206bb9f40ada517ca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:49:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a776a5363e2ddd870724ab6c4a0484fa8accfb682361b5d206bb9f40ada517ca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:49:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a776a5363e2ddd870724ab6c4a0484fa8accfb682361b5d206bb9f40ada517ca/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:49:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a776a5363e2ddd870724ab6c4a0484fa8accfb682361b5d206bb9f40ada517ca/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:49:40 compute-0 podman[263904]: 2025-10-11 04:49:40.112893484 +0000 UTC m=+0.173233958 container init 67a9e65f5f310d4d54ee3a4dc9424398426b4d97134fa8e2cdacc824f981299c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_matsumoto, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 11 04:49:40 compute-0 podman[263904]: 2025-10-11 04:49:40.123284864 +0000 UTC m=+0.183625258 container start 67a9e65f5f310d4d54ee3a4dc9424398426b4d97134fa8e2cdacc824f981299c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_matsumoto, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:49:40 compute-0 podman[263904]: 2025-10-11 04:49:40.126428013 +0000 UTC m=+0.186768487 container attach 67a9e65f5f310d4d54ee3a4dc9424398426b4d97134fa8e2cdacc824f981299c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_matsumoto, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 11 04:49:40 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v817: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:49:41 compute-0 dreamy_matsumoto[263921]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:49:41 compute-0 dreamy_matsumoto[263921]: --> relative data size: 1.0
Oct 11 04:49:41 compute-0 dreamy_matsumoto[263921]: --> All data devices are unavailable
Oct 11 04:49:41 compute-0 systemd[1]: libpod-67a9e65f5f310d4d54ee3a4dc9424398426b4d97134fa8e2cdacc824f981299c.scope: Deactivated successfully.
Oct 11 04:49:41 compute-0 systemd[1]: libpod-67a9e65f5f310d4d54ee3a4dc9424398426b4d97134fa8e2cdacc824f981299c.scope: Consumed 1.070s CPU time.
Oct 11 04:49:41 compute-0 podman[263904]: 2025-10-11 04:49:41.245703875 +0000 UTC m=+1.306044309 container died 67a9e65f5f310d4d54ee3a4dc9424398426b4d97134fa8e2cdacc824f981299c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 11 04:49:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-a776a5363e2ddd870724ab6c4a0484fa8accfb682361b5d206bb9f40ada517ca-merged.mount: Deactivated successfully.
Oct 11 04:49:41 compute-0 podman[263904]: 2025-10-11 04:49:41.332565224 +0000 UTC m=+1.392905658 container remove 67a9e65f5f310d4d54ee3a4dc9424398426b4d97134fa8e2cdacc824f981299c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_matsumoto, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 11 04:49:41 compute-0 systemd[1]: libpod-conmon-67a9e65f5f310d4d54ee3a4dc9424398426b4d97134fa8e2cdacc824f981299c.scope: Deactivated successfully.
Oct 11 04:49:41 compute-0 sudo[263800]: pam_unix(sudo:session): session closed for user root
Oct 11 04:49:41 compute-0 sudo[263963]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:49:41 compute-0 sudo[263963]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:49:41 compute-0 sudo[263963]: pam_unix(sudo:session): session closed for user root
Oct 11 04:49:41 compute-0 sudo[263988]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:49:41 compute-0 sudo[263988]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:49:41 compute-0 sudo[263988]: pam_unix(sudo:session): session closed for user root
Oct 11 04:49:41 compute-0 sudo[264013]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:49:41 compute-0 sudo[264013]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:49:41 compute-0 sudo[264013]: pam_unix(sudo:session): session closed for user root
Oct 11 04:49:41 compute-0 ceph-mon[74243]: pgmap v817: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:49:41 compute-0 sudo[264038]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -- lvm list --format json
Oct 11 04:49:41 compute-0 sudo[264038]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:49:42 compute-0 podman[264103]: 2025-10-11 04:49:42.161360207 +0000 UTC m=+0.059405130 container create 5721424933a033cb0b1e3e503c7c60eebfda3d640353bfb6a3352a860b3ec316 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_elion, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:49:42 compute-0 systemd[1]: Started libpod-conmon-5721424933a033cb0b1e3e503c7c60eebfda3d640353bfb6a3352a860b3ec316.scope.
Oct 11 04:49:42 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:49:42 compute-0 podman[264103]: 2025-10-11 04:49:42.132532205 +0000 UTC m=+0.030577208 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:49:42 compute-0 podman[264103]: 2025-10-11 04:49:42.233730393 +0000 UTC m=+0.131775356 container init 5721424933a033cb0b1e3e503c7c60eebfda3d640353bfb6a3352a860b3ec316 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_elion, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:49:42 compute-0 podman[264103]: 2025-10-11 04:49:42.240584925 +0000 UTC m=+0.138629878 container start 5721424933a033cb0b1e3e503c7c60eebfda3d640353bfb6a3352a860b3ec316 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_elion, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True)
Oct 11 04:49:42 compute-0 nostalgic_elion[264119]: 167 167
Oct 11 04:49:42 compute-0 podman[264103]: 2025-10-11 04:49:42.245012306 +0000 UTC m=+0.143057239 container attach 5721424933a033cb0b1e3e503c7c60eebfda3d640353bfb6a3352a860b3ec316 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_elion, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default)
Oct 11 04:49:42 compute-0 systemd[1]: libpod-5721424933a033cb0b1e3e503c7c60eebfda3d640353bfb6a3352a860b3ec316.scope: Deactivated successfully.
Oct 11 04:49:42 compute-0 podman[264103]: 2025-10-11 04:49:42.245826127 +0000 UTC m=+0.143871060 container died 5721424933a033cb0b1e3e503c7c60eebfda3d640353bfb6a3352a860b3ec316 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_elion, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 11 04:49:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-7f437b2b5e3f54ac214b29edcfc38337cb97693512872dbc0d797b91634ff3f4-merged.mount: Deactivated successfully.
Oct 11 04:49:42 compute-0 podman[264103]: 2025-10-11 04:49:42.287292307 +0000 UTC m=+0.185337250 container remove 5721424933a033cb0b1e3e503c7c60eebfda3d640353bfb6a3352a860b3ec316 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_elion, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 11 04:49:42 compute-0 systemd[1]: libpod-conmon-5721424933a033cb0b1e3e503c7c60eebfda3d640353bfb6a3352a860b3ec316.scope: Deactivated successfully.
Oct 11 04:49:42 compute-0 podman[264144]: 2025-10-11 04:49:42.423968486 +0000 UTC m=+0.034670081 container create 3f30d1cefb0d38defda7ad3d2fdb1a648153e844c935e0a330ff5a43a90d21d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_antonelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 11 04:49:42 compute-0 systemd[1]: Started libpod-conmon-3f30d1cefb0d38defda7ad3d2fdb1a648153e844c935e0a330ff5a43a90d21d0.scope.
Oct 11 04:49:42 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:49:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca693e01fb7de701006bb0063b519a3d2b2576f08e12fec2875b8096943faecb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:49:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca693e01fb7de701006bb0063b519a3d2b2576f08e12fec2875b8096943faecb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:49:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca693e01fb7de701006bb0063b519a3d2b2576f08e12fec2875b8096943faecb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:49:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca693e01fb7de701006bb0063b519a3d2b2576f08e12fec2875b8096943faecb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:49:42 compute-0 podman[264144]: 2025-10-11 04:49:42.409860712 +0000 UTC m=+0.020562337 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:49:42 compute-0 podman[264144]: 2025-10-11 04:49:42.510706712 +0000 UTC m=+0.121408337 container init 3f30d1cefb0d38defda7ad3d2fdb1a648153e844c935e0a330ff5a43a90d21d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_antonelli, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 11 04:49:42 compute-0 podman[264144]: 2025-10-11 04:49:42.522000576 +0000 UTC m=+0.132702171 container start 3f30d1cefb0d38defda7ad3d2fdb1a648153e844c935e0a330ff5a43a90d21d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_antonelli, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:49:42 compute-0 podman[264144]: 2025-10-11 04:49:42.52578042 +0000 UTC m=+0.136482075 container attach 3f30d1cefb0d38defda7ad3d2fdb1a648153e844c935e0a330ff5a43a90d21d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_antonelli, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:49:42 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v818: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:49:43 compute-0 youthful_antonelli[264160]: {
Oct 11 04:49:43 compute-0 youthful_antonelli[264160]:     "0": [
Oct 11 04:49:43 compute-0 youthful_antonelli[264160]:         {
Oct 11 04:49:43 compute-0 youthful_antonelli[264160]:             "devices": [
Oct 11 04:49:43 compute-0 youthful_antonelli[264160]:                 "/dev/loop3"
Oct 11 04:49:43 compute-0 youthful_antonelli[264160]:             ],
Oct 11 04:49:43 compute-0 youthful_antonelli[264160]:             "lv_name": "ceph_lv0",
Oct 11 04:49:43 compute-0 youthful_antonelli[264160]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:49:43 compute-0 youthful_antonelli[264160]:             "lv_size": "21470642176",
Oct 11 04:49:43 compute-0 youthful_antonelli[264160]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29ed28f5-c2da-4c6f-bb64-dc7391248f4a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:49:43 compute-0 youthful_antonelli[264160]:             "lv_uuid": "SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf",
Oct 11 04:49:43 compute-0 youthful_antonelli[264160]:             "name": "ceph_lv0",
Oct 11 04:49:43 compute-0 youthful_antonelli[264160]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:49:43 compute-0 youthful_antonelli[264160]:             "tags": {
Oct 11 04:49:43 compute-0 youthful_antonelli[264160]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:49:43 compute-0 youthful_antonelli[264160]:                 "ceph.block_uuid": "SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf",
Oct 11 04:49:43 compute-0 youthful_antonelli[264160]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:49:43 compute-0 youthful_antonelli[264160]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:49:43 compute-0 youthful_antonelli[264160]:                 "ceph.cluster_name": "ceph",
Oct 11 04:49:43 compute-0 youthful_antonelli[264160]:                 "ceph.crush_device_class": "",
Oct 11 04:49:43 compute-0 youthful_antonelli[264160]:                 "ceph.encrypted": "0",
Oct 11 04:49:43 compute-0 youthful_antonelli[264160]:                 "ceph.osd_fsid": "29ed28f5-c2da-4c6f-bb64-dc7391248f4a",
Oct 11 04:49:43 compute-0 youthful_antonelli[264160]:                 "ceph.osd_id": "0",
Oct 11 04:49:43 compute-0 youthful_antonelli[264160]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:49:43 compute-0 youthful_antonelli[264160]:                 "ceph.type": "block",
Oct 11 04:49:43 compute-0 youthful_antonelli[264160]:                 "ceph.vdo": "0"
Oct 11 04:49:43 compute-0 youthful_antonelli[264160]:             },
Oct 11 04:49:43 compute-0 youthful_antonelli[264160]:             "type": "block",
Oct 11 04:49:43 compute-0 youthful_antonelli[264160]:             "vg_name": "ceph_vg0"
Oct 11 04:49:43 compute-0 youthful_antonelli[264160]:         }
Oct 11 04:49:43 compute-0 youthful_antonelli[264160]:     ],
Oct 11 04:49:43 compute-0 youthful_antonelli[264160]:     "1": [
Oct 11 04:49:43 compute-0 youthful_antonelli[264160]:         {
Oct 11 04:49:43 compute-0 youthful_antonelli[264160]:             "devices": [
Oct 11 04:49:43 compute-0 youthful_antonelli[264160]:                 "/dev/loop4"
Oct 11 04:49:43 compute-0 youthful_antonelli[264160]:             ],
Oct 11 04:49:43 compute-0 youthful_antonelli[264160]:             "lv_name": "ceph_lv1",
Oct 11 04:49:43 compute-0 youthful_antonelli[264160]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:49:43 compute-0 youthful_antonelli[264160]:             "lv_size": "21470642176",
Oct 11 04:49:43 compute-0 youthful_antonelli[264160]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=235849fc-4683-43e5-9b6a-a0d6f8d1cee8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:49:43 compute-0 youthful_antonelli[264160]:             "lv_uuid": "N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol",
Oct 11 04:49:43 compute-0 youthful_antonelli[264160]:             "name": "ceph_lv1",
Oct 11 04:49:43 compute-0 youthful_antonelli[264160]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:49:43 compute-0 youthful_antonelli[264160]:             "tags": {
Oct 11 04:49:43 compute-0 youthful_antonelli[264160]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:49:43 compute-0 youthful_antonelli[264160]:                 "ceph.block_uuid": "N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol",
Oct 11 04:49:43 compute-0 youthful_antonelli[264160]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:49:43 compute-0 youthful_antonelli[264160]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:49:43 compute-0 youthful_antonelli[264160]:                 "ceph.cluster_name": "ceph",
Oct 11 04:49:43 compute-0 youthful_antonelli[264160]:                 "ceph.crush_device_class": "",
Oct 11 04:49:43 compute-0 youthful_antonelli[264160]:                 "ceph.encrypted": "0",
Oct 11 04:49:43 compute-0 youthful_antonelli[264160]:                 "ceph.osd_fsid": "235849fc-4683-43e5-9b6a-a0d6f8d1cee8",
Oct 11 04:49:43 compute-0 youthful_antonelli[264160]:                 "ceph.osd_id": "1",
Oct 11 04:49:43 compute-0 youthful_antonelli[264160]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:49:43 compute-0 youthful_antonelli[264160]:                 "ceph.type": "block",
Oct 11 04:49:43 compute-0 youthful_antonelli[264160]:                 "ceph.vdo": "0"
Oct 11 04:49:43 compute-0 youthful_antonelli[264160]:             },
Oct 11 04:49:43 compute-0 youthful_antonelli[264160]:             "type": "block",
Oct 11 04:49:43 compute-0 youthful_antonelli[264160]:             "vg_name": "ceph_vg1"
Oct 11 04:49:43 compute-0 youthful_antonelli[264160]:         }
Oct 11 04:49:43 compute-0 youthful_antonelli[264160]:     ],
Oct 11 04:49:43 compute-0 youthful_antonelli[264160]:     "2": [
Oct 11 04:49:43 compute-0 youthful_antonelli[264160]:         {
Oct 11 04:49:43 compute-0 youthful_antonelli[264160]:             "devices": [
Oct 11 04:49:43 compute-0 youthful_antonelli[264160]:                 "/dev/loop5"
Oct 11 04:49:43 compute-0 youthful_antonelli[264160]:             ],
Oct 11 04:49:43 compute-0 youthful_antonelli[264160]:             "lv_name": "ceph_lv2",
Oct 11 04:49:43 compute-0 youthful_antonelli[264160]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:49:43 compute-0 youthful_antonelli[264160]:             "lv_size": "21470642176",
Oct 11 04:49:43 compute-0 youthful_antonelli[264160]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=30486846-2cc7-4787-8e36-0fb42ef328c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:49:43 compute-0 youthful_antonelli[264160]:             "lv_uuid": "HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a",
Oct 11 04:49:43 compute-0 youthful_antonelli[264160]:             "name": "ceph_lv2",
Oct 11 04:49:43 compute-0 youthful_antonelli[264160]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:49:43 compute-0 youthful_antonelli[264160]:             "tags": {
Oct 11 04:49:43 compute-0 youthful_antonelli[264160]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:49:43 compute-0 youthful_antonelli[264160]:                 "ceph.block_uuid": "HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a",
Oct 11 04:49:43 compute-0 youthful_antonelli[264160]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:49:43 compute-0 youthful_antonelli[264160]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:49:43 compute-0 youthful_antonelli[264160]:                 "ceph.cluster_name": "ceph",
Oct 11 04:49:43 compute-0 youthful_antonelli[264160]:                 "ceph.crush_device_class": "",
Oct 11 04:49:43 compute-0 youthful_antonelli[264160]:                 "ceph.encrypted": "0",
Oct 11 04:49:43 compute-0 youthful_antonelli[264160]:                 "ceph.osd_fsid": "30486846-2cc7-4787-8e36-0fb42ef328c5",
Oct 11 04:49:43 compute-0 youthful_antonelli[264160]:                 "ceph.osd_id": "2",
Oct 11 04:49:43 compute-0 youthful_antonelli[264160]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:49:43 compute-0 youthful_antonelli[264160]:                 "ceph.type": "block",
Oct 11 04:49:43 compute-0 youthful_antonelli[264160]:                 "ceph.vdo": "0"
Oct 11 04:49:43 compute-0 youthful_antonelli[264160]:             },
Oct 11 04:49:43 compute-0 youthful_antonelli[264160]:             "type": "block",
Oct 11 04:49:43 compute-0 youthful_antonelli[264160]:             "vg_name": "ceph_vg2"
Oct 11 04:49:43 compute-0 youthful_antonelli[264160]:         }
Oct 11 04:49:43 compute-0 youthful_antonelli[264160]:     ]
Oct 11 04:49:43 compute-0 youthful_antonelli[264160]: }
Oct 11 04:49:43 compute-0 systemd[1]: libpod-3f30d1cefb0d38defda7ad3d2fdb1a648153e844c935e0a330ff5a43a90d21d0.scope: Deactivated successfully.
Oct 11 04:49:43 compute-0 podman[264144]: 2025-10-11 04:49:43.350720988 +0000 UTC m=+0.961422633 container died 3f30d1cefb0d38defda7ad3d2fdb1a648153e844c935e0a330ff5a43a90d21d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_antonelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 11 04:49:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-ca693e01fb7de701006bb0063b519a3d2b2576f08e12fec2875b8096943faecb-merged.mount: Deactivated successfully.
Oct 11 04:49:43 compute-0 podman[264144]: 2025-10-11 04:49:43.429161536 +0000 UTC m=+1.039863171 container remove 3f30d1cefb0d38defda7ad3d2fdb1a648153e844c935e0a330ff5a43a90d21d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_antonelli, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 04:49:43 compute-0 systemd[1]: libpod-conmon-3f30d1cefb0d38defda7ad3d2fdb1a648153e844c935e0a330ff5a43a90d21d0.scope: Deactivated successfully.
Oct 11 04:49:43 compute-0 sudo[264038]: pam_unix(sudo:session): session closed for user root
Oct 11 04:49:43 compute-0 sudo[264184]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:49:43 compute-0 sudo[264184]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:49:43 compute-0 sudo[264184]: pam_unix(sudo:session): session closed for user root
Oct 11 04:49:43 compute-0 sudo[264209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:49:43 compute-0 sudo[264209]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:49:43 compute-0 sudo[264209]: pam_unix(sudo:session): session closed for user root
Oct 11 04:49:43 compute-0 sudo[264234]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:49:43 compute-0 sudo[264234]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:49:43 compute-0 sudo[264234]: pam_unix(sudo:session): session closed for user root
Oct 11 04:49:43 compute-0 ceph-mon[74243]: pgmap v818: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:49:43 compute-0 sudo[264259]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -- raw list --format json
Oct 11 04:49:43 compute-0 sudo[264259]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:49:44 compute-0 podman[264323]: 2025-10-11 04:49:44.145121089 +0000 UTC m=+0.066390077 container create 17cdad4eea89eb73155851eeaa044f92a278f82a82c38ac13622ad8893a793ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_hellman, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:49:44 compute-0 systemd[1]: Started libpod-conmon-17cdad4eea89eb73155851eeaa044f92a278f82a82c38ac13622ad8893a793ec.scope.
Oct 11 04:49:44 compute-0 podman[264323]: 2025-10-11 04:49:44.117614658 +0000 UTC m=+0.038883706 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:49:44 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:49:44 compute-0 podman[264323]: 2025-10-11 04:49:44.239745513 +0000 UTC m=+0.161014561 container init 17cdad4eea89eb73155851eeaa044f92a278f82a82c38ac13622ad8893a793ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:49:44 compute-0 podman[264323]: 2025-10-11 04:49:44.253637181 +0000 UTC m=+0.174906149 container start 17cdad4eea89eb73155851eeaa044f92a278f82a82c38ac13622ad8893a793ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_hellman, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:49:44 compute-0 great_hellman[264339]: 167 167
Oct 11 04:49:44 compute-0 podman[264323]: 2025-10-11 04:49:44.258053172 +0000 UTC m=+0.179322220 container attach 17cdad4eea89eb73155851eeaa044f92a278f82a82c38ac13622ad8893a793ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_hellman, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:49:44 compute-0 systemd[1]: libpod-17cdad4eea89eb73155851eeaa044f92a278f82a82c38ac13622ad8893a793ec.scope: Deactivated successfully.
Oct 11 04:49:44 compute-0 podman[264323]: 2025-10-11 04:49:44.259273313 +0000 UTC m=+0.180542301 container died 17cdad4eea89eb73155851eeaa044f92a278f82a82c38ac13622ad8893a793ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_hellman, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:49:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-f1ba7cade22c56519fbe33ff9b75f557ac3e8c6b056ceb730c7e58b8f3337a5d-merged.mount: Deactivated successfully.
Oct 11 04:49:44 compute-0 podman[264323]: 2025-10-11 04:49:44.307109803 +0000 UTC m=+0.228378771 container remove 17cdad4eea89eb73155851eeaa044f92a278f82a82c38ac13622ad8893a793ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_hellman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 11 04:49:44 compute-0 systemd[1]: libpod-conmon-17cdad4eea89eb73155851eeaa044f92a278f82a82c38ac13622ad8893a793ec.scope: Deactivated successfully.
Oct 11 04:49:44 compute-0 podman[264363]: 2025-10-11 04:49:44.5214337 +0000 UTC m=+0.058242002 container create 5571ab1e780d7e3dff336a7f258da93b0391ccf169d1ea50c4105c75367b0cd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_wright, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:49:44 compute-0 systemd[1]: Started libpod-conmon-5571ab1e780d7e3dff336a7f258da93b0391ccf169d1ea50c4105c75367b0cd5.scope.
Oct 11 04:49:44 compute-0 podman[264363]: 2025-10-11 04:49:44.496140425 +0000 UTC m=+0.032948797 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:49:44 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:49:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24d3aa7b9756d8c6a5ba8aed0df602fd4a300daa67fff820b0f355f828e56216/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:49:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24d3aa7b9756d8c6a5ba8aed0df602fd4a300daa67fff820b0f355f828e56216/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:49:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24d3aa7b9756d8c6a5ba8aed0df602fd4a300daa67fff820b0f355f828e56216/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:49:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24d3aa7b9756d8c6a5ba8aed0df602fd4a300daa67fff820b0f355f828e56216/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:49:44 compute-0 podman[264363]: 2025-10-11 04:49:44.627884041 +0000 UTC m=+0.164692383 container init 5571ab1e780d7e3dff336a7f258da93b0391ccf169d1ea50c4105c75367b0cd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:49:44 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v819: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:49:44 compute-0 podman[264363]: 2025-10-11 04:49:44.646254092 +0000 UTC m=+0.183062424 container start 5571ab1e780d7e3dff336a7f258da93b0391ccf169d1ea50c4105c75367b0cd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:49:44 compute-0 podman[264363]: 2025-10-11 04:49:44.650849237 +0000 UTC m=+0.187657559 container attach 5571ab1e780d7e3dff336a7f258da93b0391ccf169d1ea50c4105c75367b0cd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_wright, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 11 04:49:45 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:49:45 compute-0 ceph-mon[74243]: pgmap v819: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:49:45 compute-0 sweet_wright[264380]: {
Oct 11 04:49:45 compute-0 sweet_wright[264380]:     "235849fc-4683-43e5-9b6a-a0d6f8d1cee8": {
Oct 11 04:49:45 compute-0 sweet_wright[264380]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:49:45 compute-0 sweet_wright[264380]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:49:45 compute-0 sweet_wright[264380]:         "osd_id": 1,
Oct 11 04:49:45 compute-0 sweet_wright[264380]:         "osd_uuid": "235849fc-4683-43e5-9b6a-a0d6f8d1cee8",
Oct 11 04:49:45 compute-0 sweet_wright[264380]:         "type": "bluestore"
Oct 11 04:49:45 compute-0 sweet_wright[264380]:     },
Oct 11 04:49:45 compute-0 sweet_wright[264380]:     "29ed28f5-c2da-4c6f-bb64-dc7391248f4a": {
Oct 11 04:49:45 compute-0 sweet_wright[264380]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:49:45 compute-0 sweet_wright[264380]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:49:45 compute-0 sweet_wright[264380]:         "osd_id": 0,
Oct 11 04:49:45 compute-0 sweet_wright[264380]:         "osd_uuid": "29ed28f5-c2da-4c6f-bb64-dc7391248f4a",
Oct 11 04:49:45 compute-0 sweet_wright[264380]:         "type": "bluestore"
Oct 11 04:49:45 compute-0 sweet_wright[264380]:     },
Oct 11 04:49:45 compute-0 sweet_wright[264380]:     "30486846-2cc7-4787-8e36-0fb42ef328c5": {
Oct 11 04:49:45 compute-0 sweet_wright[264380]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:49:45 compute-0 sweet_wright[264380]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:49:45 compute-0 sweet_wright[264380]:         "osd_id": 2,
Oct 11 04:49:45 compute-0 sweet_wright[264380]:         "osd_uuid": "30486846-2cc7-4787-8e36-0fb42ef328c5",
Oct 11 04:49:45 compute-0 sweet_wright[264380]:         "type": "bluestore"
Oct 11 04:49:45 compute-0 sweet_wright[264380]:     }
Oct 11 04:49:45 compute-0 sweet_wright[264380]: }
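The `raw list` output emitted by the sweet_wright container above is plain JSON keyed by OSD UUID, with each entry carrying the cluster fsid, backing device, OSD id and store type. A minimal parsing sketch follows (only one of the three logged entries is reproduced for brevity; the loop is illustrative and is not cephadm code):

```python
import json

# JSON copied verbatim from the "ceph-volume raw list --format json" output
# logged above (truncated to a single OSD entry for brevity).
raw_list = json.loads("""
{
    "29ed28f5-c2da-4c6f-bb64-dc7391248f4a": {
        "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
        "osd_id": 0,
        "osd_uuid": "29ed28f5-c2da-4c6f-bb64-dc7391248f4a",
        "type": "bluestore"
    }
}
""")

# Map each OSD id to its backing device, as cephadm does when it refreshes
# the per-host device inventory after this scan.
for osd_uuid, info in sorted(raw_list.items(), key=lambda kv: kv[1]["osd_id"]):
    print(f"osd.{info['osd_id']}  {info['device']}  ({info['type']})")
```

Run against the full output above, this mapping is osd.0 on ceph_vg0-ceph_lv0, osd.1 on ceph_vg1-ceph_lv1 and osd.2 on ceph_vg2-ceph_lv2, all bluestore.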
Oct 11 04:49:45 compute-0 systemd[1]: libpod-5571ab1e780d7e3dff336a7f258da93b0391ccf169d1ea50c4105c75367b0cd5.scope: Deactivated successfully.
Oct 11 04:49:45 compute-0 systemd[1]: libpod-5571ab1e780d7e3dff336a7f258da93b0391ccf169d1ea50c4105c75367b0cd5.scope: Consumed 1.146s CPU time.
Oct 11 04:49:45 compute-0 podman[264363]: 2025-10-11 04:49:45.781662777 +0000 UTC m=+1.318471079 container died 5571ab1e780d7e3dff336a7f258da93b0391ccf169d1ea50c4105c75367b0cd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_wright, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:49:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-24d3aa7b9756d8c6a5ba8aed0df602fd4a300daa67fff820b0f355f828e56216-merged.mount: Deactivated successfully.
Oct 11 04:49:45 compute-0 podman[264363]: 2025-10-11 04:49:45.847661403 +0000 UTC m=+1.384469735 container remove 5571ab1e780d7e3dff336a7f258da93b0391ccf169d1ea50c4105c75367b0cd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_wright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 11 04:49:45 compute-0 systemd[1]: libpod-conmon-5571ab1e780d7e3dff336a7f258da93b0391ccf169d1ea50c4105c75367b0cd5.scope: Deactivated successfully.
Oct 11 04:49:45 compute-0 sudo[264259]: pam_unix(sudo:session): session closed for user root
Oct 11 04:49:45 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:49:45 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:49:45 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:49:45 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:49:45 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev f6843ff9-6abf-459a-850c-e23ab369b09f does not exist
Oct 11 04:49:45 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev 4870225b-865b-4c7b-84ed-c1aa4121b189 does not exist
Oct 11 04:49:46 compute-0 sudo[264425]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:49:46 compute-0 sudo[264425]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:49:46 compute-0 sudo[264425]: pam_unix(sudo:session): session closed for user root
Oct 11 04:49:46 compute-0 sudo[264450]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 04:49:46 compute-0 sudo[264450]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:49:46 compute-0 sudo[264450]: pam_unix(sudo:session): session closed for user root
Oct 11 04:49:46 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v820: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:49:46 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:49:46 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:49:46 compute-0 ceph-mon[74243]: pgmap v820: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:49:47 compute-0 podman[264476]: 2025-10-11 04:49:47.420203097 +0000 UTC m=+0.066176071 container health_status 4f5a84ad32cb899a70b226fbd9c8f419e9440a5ac99b188805a8bf8e9253a2d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=iscsid, org.label-schema.license=GPLv2)
Oct 11 04:49:47 compute-0 podman[264477]: 2025-10-11 04:49:47.429408808 +0000 UTC m=+0.075409183 container health_status 981a12d55b51d26e636fa4bc8b08d383168dc6afe664f12473554b2b0c589967 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=multipathd)
Oct 11 04:49:47 compute-0 podman[264475]: 2025-10-11 04:49:47.465170915 +0000 UTC m=+0.109069387 container health_status 086abfbe8dbe9e4742c5d0e8540a7653c1c0a5c00ecda3b6e948040a718938cb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
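The three `container health_status ... health_status=healthy` events above are produced when each container's configured healthcheck (`'test': '/openstack/healthcheck'` in the embedded config_data) is executed. A small sketch of triggering the same checks by hand with `podman healthcheck run`; the container names are taken from the log lines, and wrapping the call in Python is purely illustrative:

```python
import subprocess

# Re-run the healthcheck for each EDPM-managed container seen above.
# "podman healthcheck run" exits 0 when the check passes.
for name in ("iscsid", "multipathd", "ovn_controller"):
    result = subprocess.run(["podman", "healthcheck", "run", name])
    state = "healthy" if result.returncode == 0 else "unhealthy"
    print(f"{name}: {state}")
```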
Oct 11 04:49:48 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v821: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:49:49 compute-0 ceph-mon[74243]: pgmap v821: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:49:50 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:49:50 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v822: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:49:51 compute-0 ceph-mon[74243]: pgmap v822: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:49:52 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v823: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:49:53 compute-0 ceph-mon[74243]: pgmap v823: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:49:54 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v824: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:49:54 compute-0 ceph-mon[74243]: pgmap v824: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:49:55 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:49:56 compute-0 ceph-mgr[74542]: [balancer INFO root] Optimize plan auto_2025-10-11_04:49:56
Oct 11 04:49:56 compute-0 ceph-mgr[74542]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:49:56 compute-0 ceph-mgr[74542]: [balancer INFO root] do_upmap
Oct 11 04:49:56 compute-0 ceph-mgr[74542]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.meta', 'default.rgw.log', 'images', 'cephfs.cephfs.data', 'default.rgw.control', '.rgw.root', 'volumes', 'vms', 'backups', 'default.rgw.meta']
Oct 11 04:49:56 compute-0 ceph-mgr[74542]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:49:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:49:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:49:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:49:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:49:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:49:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:49:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:49:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:49:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:49:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:49:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:49:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:49:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:49:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:49:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:49:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:49:56 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v825: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:49:57 compute-0 ceph-mon[74243]: pgmap v825: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:49:58 compute-0 podman[264540]: 2025-10-11 04:49:58.423536182 +0000 UTC m=+0.074973282 container health_status 7d738271425699243ade5f883e5c9759d51b96fd0ce562173990a66d2fbaffdf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Oct 11 04:49:58 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v826: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:49:59 compute-0 ceph-mon[74243]: pgmap v826: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:50:00 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:50:00 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v827: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:50:01 compute-0 ceph-mon[74243]: pgmap v827: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:50:02 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v828: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:50:02 compute-0 ceph-mon[74243]: pgmap v828: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:50:02 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:50:02 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4237894169' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:50:02 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:50:02 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4237894169' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:50:03 compute-0 ceph-mon[74243]: from='client.? 192.168.122.10:0/4237894169' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:50:03 compute-0 ceph-mon[74243]: from='client.? 192.168.122.10:0/4237894169' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:50:04 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v829: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:50:04 compute-0 ceph-mon[74243]: pgmap v829: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:50:05 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:50:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:50:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:50:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:50:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:50:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:50:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:50:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:50:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:50:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:50:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:50:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:50:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:50:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 11 04:50:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:50:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:50:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:50:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:50:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:50:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:50:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:50:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:50:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:50:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
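Each pg_autoscaler line above reports, per pool, the fraction of raw capacity in use, the pool's bias, and the resulting PG target before quantization. Assuming this cluster's 3 OSDs and the default mon_target_pg_per_osd of 100 (a PG budget of 300), the logged "pg target" values can be reproduced as capacity_ratio * bias * budget; the sketch below checks a few of them (the real autoscaler takes more inputs than this):

```python
# Illustrative reproduction of the pg_autoscaler targets logged above.
# Assumption: 3 OSDs x mon_target_pg_per_osd(100) = a budget of 300 PGs.
PG_BUDGET = 100 * 3

samples = [
    # (pool, capacity_ratio, bias, pg_target as logged)
    (".mgr",               7.185749983720779e-06,  1.0, 0.0021557249951162337),
    ("cephfs.cephfs.meta", 5.087256625643029e-07,  4.0, 0.0006104707950771635),
    ("default.rgw.log",    2.1620840658982875e-06, 1.0, 0.0006486252197694863),
]

for pool, ratio, bias, logged in samples:
    computed = ratio * bias * PG_BUDGET
    print(f"{pool}: computed={computed:.16g} logged={logged:.16g}")
```

Under this assumption the computed values match the logged ones; because every target is far below the pool's current PG count, each pool is quantized back to the "(current N)" value shown and no resizing is proposed.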
Oct 11 04:50:06 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v830: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:50:07 compute-0 ceph-mon[74243]: pgmap v830: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:50:08 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v831: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:50:09 compute-0 ceph-mon[74243]: pgmap v831: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:50:10 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:50:10 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v832: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:50:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:50:11.014 161813 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:50:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:50:11.014 161813 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:50:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:50:11.015 161813 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:50:11 compute-0 ceph-mon[74243]: pgmap v832: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:50:12 compute-0 nova_compute[259400]: 2025-10-11 04:50:12.191 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:50:12 compute-0 nova_compute[259400]: 2025-10-11 04:50:12.212 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:50:12 compute-0 nova_compute[259400]: 2025-10-11 04:50:12.213 2 DEBUG nova.compute.manager [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 04:50:12 compute-0 nova_compute[259400]: 2025-10-11 04:50:12.213 2 DEBUG nova.compute.manager [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 11 04:50:12 compute-0 nova_compute[259400]: 2025-10-11 04:50:12.227 2 DEBUG nova.compute.manager [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 11 04:50:12 compute-0 nova_compute[259400]: 2025-10-11 04:50:12.227 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:50:12 compute-0 nova_compute[259400]: 2025-10-11 04:50:12.228 2 DEBUG nova.compute.manager [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 04:50:12 compute-0 nova_compute[259400]: 2025-10-11 04:50:12.228 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:50:12 compute-0 nova_compute[259400]: 2025-10-11 04:50:12.257 2 DEBUG oslo_concurrency.lockutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:50:12 compute-0 nova_compute[259400]: 2025-10-11 04:50:12.258 2 DEBUG oslo_concurrency.lockutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:50:12 compute-0 nova_compute[259400]: 2025-10-11 04:50:12.258 2 DEBUG oslo_concurrency.lockutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:50:12 compute-0 nova_compute[259400]: 2025-10-11 04:50:12.258 2 DEBUG nova.compute.resource_tracker [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 04:50:12 compute-0 nova_compute[259400]: 2025-10-11 04:50:12.258 2 DEBUG oslo_concurrency.processutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:50:12 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v833: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:50:12 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:50:12 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/842555920' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:50:12 compute-0 nova_compute[259400]: 2025-10-11 04:50:12.714 2 DEBUG oslo_concurrency.processutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:50:12 compute-0 ceph-mon[74243]: pgmap v833: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:50:12 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/842555920' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
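The resource audit above shells out to `ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf` (the oslo_concurrency CMD lines) to size the RBD-backed disk capacity. A minimal sketch of the same probe; the command line is copied from the log, while the JSON field names (`stats.total_bytes`, `stats.total_avail_bytes`) are an assumption about the `ceph df` output layout and can differ between Ceph releases:

```python
import json
import subprocess

# Same capacity probe nova-compute logs above, run directly.
out = subprocess.run(
    ["ceph", "df", "--format=json", "--id", "openstack",
     "--conf", "/etc/ceph/ceph.conf"],
    capture_output=True, text=True, check=True,
).stdout

stats = json.loads(out)["stats"]  # assumed layout: top-level "stats" object
gib = 1024 ** 3
print(f"total={stats['total_bytes'] / gib:.2f} GiB "
      f"avail={stats['total_avail_bytes'] / gib:.2f} GiB")
```

The "60 GiB / 60 GiB avail" figure repeated in the pgmap lines is the same raw capacity this probe observes, which is where the free_disk value in the hypervisor resource view below comes from.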
Oct 11 04:50:12 compute-0 nova_compute[259400]: 2025-10-11 04:50:12.846 2 WARNING nova.virt.libvirt.driver [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 04:50:12 compute-0 nova_compute[259400]: 2025-10-11 04:50:12.847 2 DEBUG nova.compute.resource_tracker [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5184MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 04:50:12 compute-0 nova_compute[259400]: 2025-10-11 04:50:12.848 2 DEBUG oslo_concurrency.lockutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:50:12 compute-0 nova_compute[259400]: 2025-10-11 04:50:12.848 2 DEBUG oslo_concurrency.lockutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:50:12 compute-0 nova_compute[259400]: 2025-10-11 04:50:12.960 2 DEBUG nova.compute.resource_tracker [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 04:50:12 compute-0 nova_compute[259400]: 2025-10-11 04:50:12.962 2 DEBUG nova.compute.resource_tracker [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 04:50:12 compute-0 nova_compute[259400]: 2025-10-11 04:50:12.987 2 DEBUG oslo_concurrency.processutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:50:13 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:50:13 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2108062526' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:50:13 compute-0 nova_compute[259400]: 2025-10-11 04:50:13.455 2 DEBUG oslo_concurrency.processutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:50:13 compute-0 nova_compute[259400]: 2025-10-11 04:50:13.462 2 DEBUG nova.compute.provider_tree [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Inventory has not changed in ProviderTree for provider: 1f05a244-23b6-4149-9b5a-a525e5860d18 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:50:13 compute-0 nova_compute[259400]: 2025-10-11 04:50:13.486 2 DEBUG nova.scheduler.client.report [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Inventory has not changed for provider 1f05a244-23b6-4149-9b5a-a525e5860d18 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:50:13 compute-0 nova_compute[259400]: 2025-10-11 04:50:13.488 2 DEBUG nova.compute.resource_tracker [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 04:50:13 compute-0 nova_compute[259400]: 2025-10-11 04:50:13.488 2 DEBUG oslo_concurrency.lockutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.640s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
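The inventory nova reports to placement at 04:50:13 combines per-resource totals, reservations and allocation ratios. A short sketch of the usual placement arithmetic, (total - reserved) * allocation_ratio, applied to the logged values (the dict below is copied from the log; calling it "schedulable capacity" is a simplification of what placement actually tracks):

```python
# Inventory values as reported above for provider 1f05a244-23b6-4149-9b5a-a525e5860d18.
inventory = {
    "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "DISK_GB":   {"total": 59,   "reserved": 0,   "allocation_ratio": 0.9},
}

for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: schedulable capacity = {capacity}")
```

With nothing allocated yet ("total allocated vcpus: 0" above), the node exposes 7168 MB of RAM, 32 overcommitted vCPUs and about 53 GB of disk to the scheduler.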
Oct 11 04:50:13 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/2108062526' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:50:14 compute-0 nova_compute[259400]: 2025-10-11 04:50:14.458 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:50:14 compute-0 nova_compute[259400]: 2025-10-11 04:50:14.458 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:50:14 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v834: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:50:14 compute-0 ceph-mon[74243]: pgmap v834: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:50:15 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:50:15 compute-0 nova_compute[259400]: 2025-10-11 04:50:15.196 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:50:15 compute-0 nova_compute[259400]: 2025-10-11 04:50:15.197 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:50:16 compute-0 nova_compute[259400]: 2025-10-11 04:50:16.197 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:50:16 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v835: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:50:17 compute-0 nova_compute[259400]: 2025-10-11 04:50:17.192 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:50:17 compute-0 ceph-mon[74243]: pgmap v835: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:50:18 compute-0 podman[264606]: 2025-10-11 04:50:18.446609993 +0000 UTC m=+0.092459421 container health_status 981a12d55b51d26e636fa4bc8b08d383168dc6afe664f12473554b2b0c589967 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3)
Oct 11 04:50:18 compute-0 podman[264605]: 2025-10-11 04:50:18.454934822 +0000 UTC m=+0.099330054 container health_status 4f5a84ad32cb899a70b226fbd9c8f419e9440a5ac99b188805a8bf8e9253a2d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 11 04:50:18 compute-0 podman[264604]: 2025-10-11 04:50:18.463764283 +0000 UTC m=+0.111086218 container health_status 086abfbe8dbe9e4742c5d0e8540a7653c1c0a5c00ecda3b6e948040a718938cb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 11 04:50:18 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v836: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:50:19 compute-0 ceph-mon[74243]: pgmap v836: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:50:20 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:50:20 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v837: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:50:20 compute-0 ceph-mon[74243]: pgmap v837: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:50:22 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v838: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:50:23 compute-0 ceph-mon[74243]: pgmap v838: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:50:24 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v839: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:50:25 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:50:25 compute-0 ceph-mon[74243]: pgmap v839: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:50:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:50:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:50:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:50:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:50:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:50:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:50:26 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v840: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:50:26 compute-0 ceph-mon[74243]: pgmap v840: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:50:28 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v841: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:50:29 compute-0 podman[264665]: 2025-10-11 04:50:29.440465254 +0000 UTC m=+0.089755806 container health_status 7d738271425699243ade5f883e5c9759d51b96fd0ce562173990a66d2fbaffdf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 11 04:50:29 compute-0 ceph-mon[74243]: pgmap v841: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:50:30 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:50:30 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v842: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:50:31 compute-0 ceph-mon[74243]: pgmap v842: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:50:32 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v843: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:50:32 compute-0 ceph-mon[74243]: pgmap v843: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:50:34 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v844: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:50:35 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:50:35 compute-0 ceph-mon[74243]: pgmap v844: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:50:36 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v845: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:50:37 compute-0 ceph-mon[74243]: pgmap v845: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:50:38 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v846: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:50:38 compute-0 ceph-mon[74243]: pgmap v846: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:50:40 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:50:40 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v847: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:50:41 compute-0 ceph-mon[74243]: pgmap v847: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:50:42 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v848: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:50:43 compute-0 ceph-mon[74243]: pgmap v848: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:50:44 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v849: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:50:44 compute-0 ceph-mon[74243]: pgmap v849: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:50:45 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:50:46 compute-0 sudo[264685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:50:46 compute-0 sudo[264685]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:50:46 compute-0 sudo[264685]: pam_unix(sudo:session): session closed for user root
Oct 11 04:50:46 compute-0 sudo[264710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:50:46 compute-0 sudo[264710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:50:46 compute-0 sudo[264710]: pam_unix(sudo:session): session closed for user root
Oct 11 04:50:46 compute-0 sudo[264735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:50:46 compute-0 sudo[264735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:50:46 compute-0 sudo[264735]: pam_unix(sudo:session): session closed for user root
Oct 11 04:50:46 compute-0 sudo[264760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 04:50:46 compute-0 sudo[264760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:50:46 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v850: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:50:47 compute-0 sudo[264760]: pam_unix(sudo:session): session closed for user root
Oct 11 04:50:47 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:50:47 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:50:47 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:50:47 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:50:47 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:50:47 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:50:47 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev a99b20ff-6ea4-4cb4-8526-677feca6cebd does not exist
Oct 11 04:50:47 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev 7c72e4c5-7307-4666-99b1-72183f058335 does not exist
Oct 11 04:50:47 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev 1312d608-6ac6-4fab-871e-279d2f0fcd79 does not exist
Oct 11 04:50:47 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:50:47 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:50:47 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:50:47 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:50:47 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:50:47 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:50:47 compute-0 sudo[264816]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:50:47 compute-0 sudo[264816]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:50:47 compute-0 sudo[264816]: pam_unix(sudo:session): session closed for user root
Oct 11 04:50:47 compute-0 sudo[264841]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:50:47 compute-0 sudo[264841]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:50:47 compute-0 sudo[264841]: pam_unix(sudo:session): session closed for user root
Oct 11 04:50:47 compute-0 sudo[264866]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:50:47 compute-0 sudo[264866]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:50:47 compute-0 sudo[264866]: pam_unix(sudo:session): session closed for user root
Oct 11 04:50:47 compute-0 sudo[264891]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 04:50:47 compute-0 sudo[264891]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:50:47 compute-0 ceph-mon[74243]: pgmap v850: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:50:47 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:50:47 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:50:47 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:50:47 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:50:47 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:50:47 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:50:47 compute-0 podman[264957]: 2025-10-11 04:50:47.821394505 +0000 UTC m=+0.037014261 container create e26e43dd8ee6241ffc6c14b62868db16d7384a3837deaa58b394fdc94fcdbf33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_stonebraker, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 11 04:50:47 compute-0 systemd[1]: Started libpod-conmon-e26e43dd8ee6241ffc6c14b62868db16d7384a3837deaa58b394fdc94fcdbf33.scope.
Oct 11 04:50:47 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:50:47 compute-0 podman[264957]: 2025-10-11 04:50:47.896995325 +0000 UTC m=+0.112615111 container init e26e43dd8ee6241ffc6c14b62868db16d7384a3837deaa58b394fdc94fcdbf33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_stonebraker, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 11 04:50:47 compute-0 podman[264957]: 2025-10-11 04:50:47.805244559 +0000 UTC m=+0.020864345 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:50:47 compute-0 podman[264957]: 2025-10-11 04:50:47.905795146 +0000 UTC m=+0.121414902 container start e26e43dd8ee6241ffc6c14b62868db16d7384a3837deaa58b394fdc94fcdbf33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:50:47 compute-0 podman[264957]: 2025-10-11 04:50:47.908891573 +0000 UTC m=+0.124511359 container attach e26e43dd8ee6241ffc6c14b62868db16d7384a3837deaa58b394fdc94fcdbf33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_stonebraker, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:50:47 compute-0 modest_stonebraker[264973]: 167 167
Oct 11 04:50:47 compute-0 systemd[1]: libpod-e26e43dd8ee6241ffc6c14b62868db16d7384a3837deaa58b394fdc94fcdbf33.scope: Deactivated successfully.
Oct 11 04:50:47 compute-0 conmon[264973]: conmon e26e43dd8ee6241ffc6c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e26e43dd8ee6241ffc6c14b62868db16d7384a3837deaa58b394fdc94fcdbf33.scope/container/memory.events
Oct 11 04:50:47 compute-0 podman[264957]: 2025-10-11 04:50:47.913162871 +0000 UTC m=+0.128782647 container died e26e43dd8ee6241ffc6c14b62868db16d7384a3837deaa58b394fdc94fcdbf33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:50:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-8746abc68dd6cf86d3aa742646a45f070dc8e5898001891e6cbd91e85ebfd999-merged.mount: Deactivated successfully.
Oct 11 04:50:47 compute-0 podman[264957]: 2025-10-11 04:50:47.949924374 +0000 UTC m=+0.165544140 container remove e26e43dd8ee6241ffc6c14b62868db16d7384a3837deaa58b394fdc94fcdbf33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_stonebraker, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:50:47 compute-0 systemd[1]: libpod-conmon-e26e43dd8ee6241ffc6c14b62868db16d7384a3837deaa58b394fdc94fcdbf33.scope: Deactivated successfully.
Oct 11 04:50:48 compute-0 podman[264999]: 2025-10-11 04:50:48.090351022 +0000 UTC m=+0.040905449 container create ff8fb26bb6b8d309fb42246cd513f0f36749ab59ee9b7ae986ae28c95f785c45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_cartwright, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:50:48 compute-0 systemd[1]: Started libpod-conmon-ff8fb26bb6b8d309fb42246cd513f0f36749ab59ee9b7ae986ae28c95f785c45.scope.
Oct 11 04:50:48 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:50:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d4669012886deef2c779fd399110a8900e8171a5ffa8d2adfd5a7134a55ff77/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:50:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d4669012886deef2c779fd399110a8900e8171a5ffa8d2adfd5a7134a55ff77/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:50:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d4669012886deef2c779fd399110a8900e8171a5ffa8d2adfd5a7134a55ff77/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:50:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d4669012886deef2c779fd399110a8900e8171a5ffa8d2adfd5a7134a55ff77/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:50:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d4669012886deef2c779fd399110a8900e8171a5ffa8d2adfd5a7134a55ff77/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:50:48 compute-0 podman[264999]: 2025-10-11 04:50:48.072711619 +0000 UTC m=+0.023266066 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:50:48 compute-0 podman[264999]: 2025-10-11 04:50:48.173129232 +0000 UTC m=+0.123683709 container init ff8fb26bb6b8d309fb42246cd513f0f36749ab59ee9b7ae986ae28c95f785c45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_cartwright, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 11 04:50:48 compute-0 podman[264999]: 2025-10-11 04:50:48.187974135 +0000 UTC m=+0.138528572 container start ff8fb26bb6b8d309fb42246cd513f0f36749ab59ee9b7ae986ae28c95f785c45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_cartwright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:50:48 compute-0 podman[264999]: 2025-10-11 04:50:48.191870542 +0000 UTC m=+0.142425059 container attach ff8fb26bb6b8d309fb42246cd513f0f36749ab59ee9b7ae986ae28c95f785c45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_cartwright, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 11 04:50:48 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v851: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:50:48 compute-0 ceph-mon[74243]: pgmap v851: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:50:49 compute-0 brave_cartwright[265016]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:50:49 compute-0 brave_cartwright[265016]: --> relative data size: 1.0
Oct 11 04:50:49 compute-0 brave_cartwright[265016]: --> All data devices are unavailable
Oct 11 04:50:49 compute-0 systemd[1]: libpod-ff8fb26bb6b8d309fb42246cd513f0f36749ab59ee9b7ae986ae28c95f785c45.scope: Deactivated successfully.
Oct 11 04:50:49 compute-0 systemd[1]: libpod-ff8fb26bb6b8d309fb42246cd513f0f36749ab59ee9b7ae986ae28c95f785c45.scope: Consumed 1.122s CPU time.
Oct 11 04:50:49 compute-0 podman[264999]: 2025-10-11 04:50:49.353928966 +0000 UTC m=+1.304483443 container died ff8fb26bb6b8d309fb42246cd513f0f36749ab59ee9b7ae986ae28c95f785c45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_cartwright, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:50:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-2d4669012886deef2c779fd399110a8900e8171a5ffa8d2adfd5a7134a55ff77-merged.mount: Deactivated successfully.
Oct 11 04:50:49 compute-0 podman[264999]: 2025-10-11 04:50:49.428429477 +0000 UTC m=+1.378983934 container remove ff8fb26bb6b8d309fb42246cd513f0f36749ab59ee9b7ae986ae28c95f785c45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_cartwright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:50:49 compute-0 sudo[264891]: pam_unix(sudo:session): session closed for user root
Oct 11 04:50:49 compute-0 systemd[1]: libpod-conmon-ff8fb26bb6b8d309fb42246cd513f0f36749ab59ee9b7ae986ae28c95f785c45.scope: Deactivated successfully.
Oct 11 04:50:49 compute-0 podman[265046]: 2025-10-11 04:50:49.480135836 +0000 UTC m=+0.118128808 container health_status 4f5a84ad32cb899a70b226fbd9c8f419e9440a5ac99b188805a8bf8e9253a2d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 11 04:50:49 compute-0 podman[265047]: 2025-10-11 04:50:49.480249569 +0000 UTC m=+0.113756809 container health_status 981a12d55b51d26e636fa4bc8b08d383168dc6afe664f12473554b2b0c589967 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true)
Oct 11 04:50:49 compute-0 podman[265045]: 2025-10-11 04:50:49.539510118 +0000 UTC m=+0.176527996 container health_status 086abfbe8dbe9e4742c5d0e8540a7653c1c0a5c00ecda3b6e948040a718938cb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, config_id=ovn_controller, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 11 04:50:49 compute-0 sudo[265114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:50:49 compute-0 sudo[265114]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:50:49 compute-0 sudo[265114]: pam_unix(sudo:session): session closed for user root
Oct 11 04:50:49 compute-0 sudo[265144]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:50:49 compute-0 sudo[265144]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:50:49 compute-0 sudo[265144]: pam_unix(sudo:session): session closed for user root
Oct 11 04:50:49 compute-0 sudo[265169]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:50:49 compute-0 sudo[265169]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:50:49 compute-0 sudo[265169]: pam_unix(sudo:session): session closed for user root
Oct 11 04:50:49 compute-0 sudo[265194]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -- lvm list --format json
Oct 11 04:50:49 compute-0 sudo[265194]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:50:50 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:50:50 compute-0 podman[265261]: 2025-10-11 04:50:50.183405553 +0000 UTC m=+0.046329734 container create c4d2f0fa4f9643d413c523aa051f25676fdf222f4e2f6b5ea5fb95d884f1b5cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_robinson, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:50:50 compute-0 systemd[1]: Started libpod-conmon-c4d2f0fa4f9643d413c523aa051f25676fdf222f4e2f6b5ea5fb95d884f1b5cb.scope.
Oct 11 04:50:50 compute-0 podman[265261]: 2025-10-11 04:50:50.157557234 +0000 UTC m=+0.020481395 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:50:50 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:50:50 compute-0 podman[265261]: 2025-10-11 04:50:50.272982133 +0000 UTC m=+0.135906314 container init c4d2f0fa4f9643d413c523aa051f25676fdf222f4e2f6b5ea5fb95d884f1b5cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_robinson, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:50:50 compute-0 podman[265261]: 2025-10-11 04:50:50.280453941 +0000 UTC m=+0.143378082 container start c4d2f0fa4f9643d413c523aa051f25676fdf222f4e2f6b5ea5fb95d884f1b5cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_robinson, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:50:50 compute-0 podman[265261]: 2025-10-11 04:50:50.284059292 +0000 UTC m=+0.146983523 container attach c4d2f0fa4f9643d413c523aa051f25676fdf222f4e2f6b5ea5fb95d884f1b5cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_robinson, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:50:50 compute-0 priceless_robinson[265277]: 167 167
Oct 11 04:50:50 compute-0 systemd[1]: libpod-c4d2f0fa4f9643d413c523aa051f25676fdf222f4e2f6b5ea5fb95d884f1b5cb.scope: Deactivated successfully.
Oct 11 04:50:50 compute-0 podman[265261]: 2025-10-11 04:50:50.286276377 +0000 UTC m=+0.149200528 container died c4d2f0fa4f9643d413c523aa051f25676fdf222f4e2f6b5ea5fb95d884f1b5cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_robinson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:50:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-a6e9abfb283c5d308bdb6bb5735fb276fd236f43ec9ff58be2c384558fd91bfe-merged.mount: Deactivated successfully.
Oct 11 04:50:50 compute-0 podman[265261]: 2025-10-11 04:50:50.322019775 +0000 UTC m=+0.184943906 container remove c4d2f0fa4f9643d413c523aa051f25676fdf222f4e2f6b5ea5fb95d884f1b5cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_robinson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:50:50 compute-0 systemd[1]: libpod-conmon-c4d2f0fa4f9643d413c523aa051f25676fdf222f4e2f6b5ea5fb95d884f1b5cb.scope: Deactivated successfully.
Oct 11 04:50:50 compute-0 podman[265301]: 2025-10-11 04:50:50.517805764 +0000 UTC m=+0.041980306 container create e76005fc17542e03ddd68fd9aa636d30cba2d1245f15ef40f43cd343102ad68a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_gauss, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 04:50:50 compute-0 systemd[1]: Started libpod-conmon-e76005fc17542e03ddd68fd9aa636d30cba2d1245f15ef40f43cd343102ad68a.scope.
Oct 11 04:50:50 compute-0 podman[265301]: 2025-10-11 04:50:50.499439742 +0000 UTC m=+0.023614324 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:50:50 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:50:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/666139e0ee84c92dd3ba7fd4714a8b96b6050f381cef0f0cee42c4c928c1c251/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:50:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/666139e0ee84c92dd3ba7fd4714a8b96b6050f381cef0f0cee42c4c928c1c251/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:50:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/666139e0ee84c92dd3ba7fd4714a8b96b6050f381cef0f0cee42c4c928c1c251/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:50:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/666139e0ee84c92dd3ba7fd4714a8b96b6050f381cef0f0cee42c4c928c1c251/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:50:50 compute-0 podman[265301]: 2025-10-11 04:50:50.624539825 +0000 UTC m=+0.148714397 container init e76005fc17542e03ddd68fd9aa636d30cba2d1245f15ef40f43cd343102ad68a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:50:50 compute-0 podman[265301]: 2025-10-11 04:50:50.640123447 +0000 UTC m=+0.164298009 container start e76005fc17542e03ddd68fd9aa636d30cba2d1245f15ef40f43cd343102ad68a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_gauss, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:50:50 compute-0 podman[265301]: 2025-10-11 04:50:50.643985784 +0000 UTC m=+0.168160336 container attach e76005fc17542e03ddd68fd9aa636d30cba2d1245f15ef40f43cd343102ad68a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:50:50 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v852: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:50:51 compute-0 inspiring_gauss[265318]: {
Oct 11 04:50:51 compute-0 inspiring_gauss[265318]:     "0": [
Oct 11 04:50:51 compute-0 inspiring_gauss[265318]:         {
Oct 11 04:50:51 compute-0 inspiring_gauss[265318]:             "devices": [
Oct 11 04:50:51 compute-0 inspiring_gauss[265318]:                 "/dev/loop3"
Oct 11 04:50:51 compute-0 inspiring_gauss[265318]:             ],
Oct 11 04:50:51 compute-0 inspiring_gauss[265318]:             "lv_name": "ceph_lv0",
Oct 11 04:50:51 compute-0 inspiring_gauss[265318]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:50:51 compute-0 inspiring_gauss[265318]:             "lv_size": "21470642176",
Oct 11 04:50:51 compute-0 inspiring_gauss[265318]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29ed28f5-c2da-4c6f-bb64-dc7391248f4a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:50:51 compute-0 inspiring_gauss[265318]:             "lv_uuid": "SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf",
Oct 11 04:50:51 compute-0 inspiring_gauss[265318]:             "name": "ceph_lv0",
Oct 11 04:50:51 compute-0 inspiring_gauss[265318]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:50:51 compute-0 inspiring_gauss[265318]:             "tags": {
Oct 11 04:50:51 compute-0 inspiring_gauss[265318]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:50:51 compute-0 inspiring_gauss[265318]:                 "ceph.block_uuid": "SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf",
Oct 11 04:50:51 compute-0 inspiring_gauss[265318]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:50:51 compute-0 inspiring_gauss[265318]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:50:51 compute-0 inspiring_gauss[265318]:                 "ceph.cluster_name": "ceph",
Oct 11 04:50:51 compute-0 inspiring_gauss[265318]:                 "ceph.crush_device_class": "",
Oct 11 04:50:51 compute-0 inspiring_gauss[265318]:                 "ceph.encrypted": "0",
Oct 11 04:50:51 compute-0 inspiring_gauss[265318]:                 "ceph.osd_fsid": "29ed28f5-c2da-4c6f-bb64-dc7391248f4a",
Oct 11 04:50:51 compute-0 inspiring_gauss[265318]:                 "ceph.osd_id": "0",
Oct 11 04:50:51 compute-0 inspiring_gauss[265318]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:50:51 compute-0 inspiring_gauss[265318]:                 "ceph.type": "block",
Oct 11 04:50:51 compute-0 inspiring_gauss[265318]:                 "ceph.vdo": "0"
Oct 11 04:50:51 compute-0 inspiring_gauss[265318]:             },
Oct 11 04:50:51 compute-0 inspiring_gauss[265318]:             "type": "block",
Oct 11 04:50:51 compute-0 inspiring_gauss[265318]:             "vg_name": "ceph_vg0"
Oct 11 04:50:51 compute-0 inspiring_gauss[265318]:         }
Oct 11 04:50:51 compute-0 inspiring_gauss[265318]:     ],
Oct 11 04:50:51 compute-0 inspiring_gauss[265318]:     "1": [
Oct 11 04:50:51 compute-0 inspiring_gauss[265318]:         {
Oct 11 04:50:51 compute-0 inspiring_gauss[265318]:             "devices": [
Oct 11 04:50:51 compute-0 inspiring_gauss[265318]:                 "/dev/loop4"
Oct 11 04:50:51 compute-0 inspiring_gauss[265318]:             ],
Oct 11 04:50:51 compute-0 inspiring_gauss[265318]:             "lv_name": "ceph_lv1",
Oct 11 04:50:51 compute-0 inspiring_gauss[265318]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:50:51 compute-0 inspiring_gauss[265318]:             "lv_size": "21470642176",
Oct 11 04:50:51 compute-0 inspiring_gauss[265318]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=235849fc-4683-43e5-9b6a-a0d6f8d1cee8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:50:51 compute-0 inspiring_gauss[265318]:             "lv_uuid": "N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol",
Oct 11 04:50:51 compute-0 inspiring_gauss[265318]:             "name": "ceph_lv1",
Oct 11 04:50:51 compute-0 inspiring_gauss[265318]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:50:51 compute-0 inspiring_gauss[265318]:             "tags": {
Oct 11 04:50:51 compute-0 inspiring_gauss[265318]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:50:51 compute-0 inspiring_gauss[265318]:                 "ceph.block_uuid": "N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol",
Oct 11 04:50:51 compute-0 inspiring_gauss[265318]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:50:51 compute-0 inspiring_gauss[265318]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:50:51 compute-0 inspiring_gauss[265318]:                 "ceph.cluster_name": "ceph",
Oct 11 04:50:51 compute-0 inspiring_gauss[265318]:                 "ceph.crush_device_class": "",
Oct 11 04:50:51 compute-0 inspiring_gauss[265318]:                 "ceph.encrypted": "0",
Oct 11 04:50:51 compute-0 inspiring_gauss[265318]:                 "ceph.osd_fsid": "235849fc-4683-43e5-9b6a-a0d6f8d1cee8",
Oct 11 04:50:51 compute-0 inspiring_gauss[265318]:                 "ceph.osd_id": "1",
Oct 11 04:50:51 compute-0 inspiring_gauss[265318]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:50:51 compute-0 inspiring_gauss[265318]:                 "ceph.type": "block",
Oct 11 04:50:51 compute-0 inspiring_gauss[265318]:                 "ceph.vdo": "0"
Oct 11 04:50:51 compute-0 inspiring_gauss[265318]:             },
Oct 11 04:50:51 compute-0 inspiring_gauss[265318]:             "type": "block",
Oct 11 04:50:51 compute-0 inspiring_gauss[265318]:             "vg_name": "ceph_vg1"
Oct 11 04:50:51 compute-0 inspiring_gauss[265318]:         }
Oct 11 04:50:51 compute-0 inspiring_gauss[265318]:     ],
Oct 11 04:50:51 compute-0 inspiring_gauss[265318]:     "2": [
Oct 11 04:50:51 compute-0 inspiring_gauss[265318]:         {
Oct 11 04:50:51 compute-0 inspiring_gauss[265318]:             "devices": [
Oct 11 04:50:51 compute-0 inspiring_gauss[265318]:                 "/dev/loop5"
Oct 11 04:50:51 compute-0 inspiring_gauss[265318]:             ],
Oct 11 04:50:51 compute-0 inspiring_gauss[265318]:             "lv_name": "ceph_lv2",
Oct 11 04:50:51 compute-0 inspiring_gauss[265318]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:50:51 compute-0 inspiring_gauss[265318]:             "lv_size": "21470642176",
Oct 11 04:50:51 compute-0 inspiring_gauss[265318]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=30486846-2cc7-4787-8e36-0fb42ef328c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:50:51 compute-0 inspiring_gauss[265318]:             "lv_uuid": "HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a",
Oct 11 04:50:51 compute-0 inspiring_gauss[265318]:             "name": "ceph_lv2",
Oct 11 04:50:51 compute-0 inspiring_gauss[265318]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:50:51 compute-0 inspiring_gauss[265318]:             "tags": {
Oct 11 04:50:51 compute-0 inspiring_gauss[265318]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:50:51 compute-0 inspiring_gauss[265318]:                 "ceph.block_uuid": "HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a",
Oct 11 04:50:51 compute-0 inspiring_gauss[265318]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:50:51 compute-0 inspiring_gauss[265318]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:50:51 compute-0 inspiring_gauss[265318]:                 "ceph.cluster_name": "ceph",
Oct 11 04:50:51 compute-0 inspiring_gauss[265318]:                 "ceph.crush_device_class": "",
Oct 11 04:50:51 compute-0 inspiring_gauss[265318]:                 "ceph.encrypted": "0",
Oct 11 04:50:51 compute-0 inspiring_gauss[265318]:                 "ceph.osd_fsid": "30486846-2cc7-4787-8e36-0fb42ef328c5",
Oct 11 04:50:51 compute-0 inspiring_gauss[265318]:                 "ceph.osd_id": "2",
Oct 11 04:50:51 compute-0 inspiring_gauss[265318]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:50:51 compute-0 inspiring_gauss[265318]:                 "ceph.type": "block",
Oct 11 04:50:51 compute-0 inspiring_gauss[265318]:                 "ceph.vdo": "0"
Oct 11 04:50:51 compute-0 inspiring_gauss[265318]:             },
Oct 11 04:50:51 compute-0 inspiring_gauss[265318]:             "type": "block",
Oct 11 04:50:51 compute-0 inspiring_gauss[265318]:             "vg_name": "ceph_vg2"
Oct 11 04:50:51 compute-0 inspiring_gauss[265318]:         }
Oct 11 04:50:51 compute-0 inspiring_gauss[265318]:     ]
Oct 11 04:50:51 compute-0 inspiring_gauss[265318]: }
Oct 11 04:50:51 compute-0 systemd[1]: libpod-e76005fc17542e03ddd68fd9aa636d30cba2d1245f15ef40f43cd343102ad68a.scope: Deactivated successfully.
Oct 11 04:50:51 compute-0 podman[265301]: 2025-10-11 04:50:51.494942361 +0000 UTC m=+1.019116913 container died e76005fc17542e03ddd68fd9aa636d30cba2d1245f15ef40f43cd343102ad68a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_gauss, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:50:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-666139e0ee84c92dd3ba7fd4714a8b96b6050f381cef0f0cee42c4c928c1c251-merged.mount: Deactivated successfully.
Oct 11 04:50:51 compute-0 podman[265301]: 2025-10-11 04:50:51.580364677 +0000 UTC m=+1.104539249 container remove e76005fc17542e03ddd68fd9aa636d30cba2d1245f15ef40f43cd343102ad68a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_gauss, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Oct 11 04:50:51 compute-0 systemd[1]: libpod-conmon-e76005fc17542e03ddd68fd9aa636d30cba2d1245f15ef40f43cd343102ad68a.scope: Deactivated successfully.
Oct 11 04:50:51 compute-0 sudo[265194]: pam_unix(sudo:session): session closed for user root
Oct 11 04:50:51 compute-0 sudo[265341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:50:51 compute-0 ceph-mon[74243]: pgmap v852: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:50:51 compute-0 sudo[265341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:50:51 compute-0 sudo[265341]: pam_unix(sudo:session): session closed for user root
Oct 11 04:50:51 compute-0 sudo[265366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:50:51 compute-0 sudo[265366]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:50:51 compute-0 sudo[265366]: pam_unix(sudo:session): session closed for user root
Oct 11 04:50:51 compute-0 sudo[265391]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:50:51 compute-0 sudo[265391]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:50:51 compute-0 sudo[265391]: pam_unix(sudo:session): session closed for user root
Oct 11 04:50:52 compute-0 sudo[265416]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -- raw list --format json
Oct 11 04:50:52 compute-0 sudo[265416]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:50:52 compute-0 podman[265481]: 2025-10-11 04:50:52.34115298 +0000 UTC m=+0.049254518 container create a69fb25077e0887a604b774d2b8d643c4df30957a9dc02bcf58928267fe36470 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 11 04:50:52 compute-0 systemd[1]: Started libpod-conmon-a69fb25077e0887a604b774d2b8d643c4df30957a9dc02bcf58928267fe36470.scope.
Oct 11 04:50:52 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:50:52 compute-0 podman[265481]: 2025-10-11 04:50:52.321079976 +0000 UTC m=+0.029181524 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:50:52 compute-0 podman[265481]: 2025-10-11 04:50:52.428942225 +0000 UTC m=+0.137043733 container init a69fb25077e0887a604b774d2b8d643c4df30957a9dc02bcf58928267fe36470 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_brown, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:50:52 compute-0 podman[265481]: 2025-10-11 04:50:52.442120847 +0000 UTC m=+0.150222355 container start a69fb25077e0887a604b774d2b8d643c4df30957a9dc02bcf58928267fe36470 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 11 04:50:52 compute-0 awesome_brown[265497]: 167 167
Oct 11 04:50:52 compute-0 systemd[1]: libpod-a69fb25077e0887a604b774d2b8d643c4df30957a9dc02bcf58928267fe36470.scope: Deactivated successfully.
Oct 11 04:50:52 compute-0 podman[265481]: 2025-10-11 04:50:52.44785126 +0000 UTC m=+0.155952788 container attach a69fb25077e0887a604b774d2b8d643c4df30957a9dc02bcf58928267fe36470 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_brown, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Oct 11 04:50:52 compute-0 conmon[265497]: conmon a69fb25077e0887a604b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a69fb25077e0887a604b774d2b8d643c4df30957a9dc02bcf58928267fe36470.scope/container/memory.events
Oct 11 04:50:52 compute-0 podman[265481]: 2025-10-11 04:50:52.44863317 +0000 UTC m=+0.156734698 container died a69fb25077e0887a604b774d2b8d643c4df30957a9dc02bcf58928267fe36470 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:50:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-f508db7e9b9d77d492b3b32c38029ad58e6f1e6244b8f9d885f7d1a9db7c52d6-merged.mount: Deactivated successfully.
Oct 11 04:50:52 compute-0 podman[265481]: 2025-10-11 04:50:52.497494198 +0000 UTC m=+0.205595736 container remove a69fb25077e0887a604b774d2b8d643c4df30957a9dc02bcf58928267fe36470 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_brown, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:50:52 compute-0 systemd[1]: libpod-conmon-a69fb25077e0887a604b774d2b8d643c4df30957a9dc02bcf58928267fe36470.scope: Deactivated successfully.
Oct 11 04:50:52 compute-0 podman[265520]: 2025-10-11 04:50:52.657122128 +0000 UTC m=+0.034610541 container create 35be96d21fe8398c75a5ebcc88db7b5c694b64f1a4c462efabedf1c8a0ed5e1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_shamir, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Oct 11 04:50:52 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v853: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:50:52 compute-0 systemd[1]: Started libpod-conmon-35be96d21fe8398c75a5ebcc88db7b5c694b64f1a4c462efabedf1c8a0ed5e1b.scope.
Oct 11 04:50:52 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:50:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ae186fce228440a654b26f921702830e2c44d5f5dda5fbb2ba918625a0d050d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:50:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ae186fce228440a654b26f921702830e2c44d5f5dda5fbb2ba918625a0d050d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:50:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ae186fce228440a654b26f921702830e2c44d5f5dda5fbb2ba918625a0d050d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:50:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ae186fce228440a654b26f921702830e2c44d5f5dda5fbb2ba918625a0d050d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:50:52 compute-0 podman[265520]: 2025-10-11 04:50:52.722402528 +0000 UTC m=+0.099890961 container init 35be96d21fe8398c75a5ebcc88db7b5c694b64f1a4c462efabedf1c8a0ed5e1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_shamir, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 11 04:50:52 compute-0 podman[265520]: 2025-10-11 04:50:52.728222834 +0000 UTC m=+0.105711237 container start 35be96d21fe8398c75a5ebcc88db7b5c694b64f1a4c462efabedf1c8a0ed5e1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_shamir, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:50:52 compute-0 podman[265520]: 2025-10-11 04:50:52.73124961 +0000 UTC m=+0.108738053 container attach 35be96d21fe8398c75a5ebcc88db7b5c694b64f1a4c462efabedf1c8a0ed5e1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:50:52 compute-0 ceph-mon[74243]: pgmap v853: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:50:52 compute-0 podman[265520]: 2025-10-11 04:50:52.641804313 +0000 UTC m=+0.019292736 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:50:53 compute-0 admiring_shamir[265537]: {
Oct 11 04:50:53 compute-0 admiring_shamir[265537]:     "235849fc-4683-43e5-9b6a-a0d6f8d1cee8": {
Oct 11 04:50:53 compute-0 admiring_shamir[265537]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:50:53 compute-0 admiring_shamir[265537]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:50:53 compute-0 admiring_shamir[265537]:         "osd_id": 1,
Oct 11 04:50:53 compute-0 admiring_shamir[265537]:         "osd_uuid": "235849fc-4683-43e5-9b6a-a0d6f8d1cee8",
Oct 11 04:50:53 compute-0 admiring_shamir[265537]:         "type": "bluestore"
Oct 11 04:50:53 compute-0 admiring_shamir[265537]:     },
Oct 11 04:50:53 compute-0 admiring_shamir[265537]:     "29ed28f5-c2da-4c6f-bb64-dc7391248f4a": {
Oct 11 04:50:53 compute-0 admiring_shamir[265537]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:50:53 compute-0 admiring_shamir[265537]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:50:53 compute-0 admiring_shamir[265537]:         "osd_id": 0,
Oct 11 04:50:53 compute-0 admiring_shamir[265537]:         "osd_uuid": "29ed28f5-c2da-4c6f-bb64-dc7391248f4a",
Oct 11 04:50:53 compute-0 admiring_shamir[265537]:         "type": "bluestore"
Oct 11 04:50:53 compute-0 admiring_shamir[265537]:     },
Oct 11 04:50:53 compute-0 admiring_shamir[265537]:     "30486846-2cc7-4787-8e36-0fb42ef328c5": {
Oct 11 04:50:53 compute-0 admiring_shamir[265537]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:50:53 compute-0 admiring_shamir[265537]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:50:53 compute-0 admiring_shamir[265537]:         "osd_id": 2,
Oct 11 04:50:53 compute-0 admiring_shamir[265537]:         "osd_uuid": "30486846-2cc7-4787-8e36-0fb42ef328c5",
Oct 11 04:50:53 compute-0 admiring_shamir[265537]:         "type": "bluestore"
Oct 11 04:50:53 compute-0 admiring_shamir[265537]:     }
Oct 11 04:50:53 compute-0 admiring_shamir[265537]: }
Oct 11 04:50:53 compute-0 systemd[1]: libpod-35be96d21fe8398c75a5ebcc88db7b5c694b64f1a4c462efabedf1c8a0ed5e1b.scope: Deactivated successfully.
Oct 11 04:50:53 compute-0 podman[265520]: 2025-10-11 04:50:53.723446906 +0000 UTC m=+1.100935359 container died 35be96d21fe8398c75a5ebcc88db7b5c694b64f1a4c462efabedf1c8a0ed5e1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_shamir, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 11 04:50:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-3ae186fce228440a654b26f921702830e2c44d5f5dda5fbb2ba918625a0d050d-merged.mount: Deactivated successfully.
Oct 11 04:50:53 compute-0 podman[265520]: 2025-10-11 04:50:53.775698048 +0000 UTC m=+1.153186451 container remove 35be96d21fe8398c75a5ebcc88db7b5c694b64f1a4c462efabedf1c8a0ed5e1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:50:53 compute-0 systemd[1]: libpod-conmon-35be96d21fe8398c75a5ebcc88db7b5c694b64f1a4c462efabedf1c8a0ed5e1b.scope: Deactivated successfully.
Oct 11 04:50:53 compute-0 sudo[265416]: pam_unix(sudo:session): session closed for user root
Oct 11 04:50:53 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:50:53 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:50:53 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:50:53 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:50:53 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev afb31385-fb7d-49f1-aca4-aa732c4b0696 does not exist
Oct 11 04:50:53 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev cc61091d-2c76-4946-a77d-473db99236be does not exist
Oct 11 04:50:53 compute-0 sudo[265582]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:50:53 compute-0 sudo[265582]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:50:53 compute-0 sudo[265582]: pam_unix(sudo:session): session closed for user root
Oct 11 04:50:53 compute-0 sudo[265607]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 04:50:53 compute-0 sudo[265607]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:50:53 compute-0 sudo[265607]: pam_unix(sudo:session): session closed for user root
Oct 11 04:50:54 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v854: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:50:54 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:50:54 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:50:54 compute-0 ceph-mon[74243]: pgmap v854: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:50:55 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:50:56 compute-0 ceph-mgr[74542]: [balancer INFO root] Optimize plan auto_2025-10-11_04:50:56
Oct 11 04:50:56 compute-0 ceph-mgr[74542]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:50:56 compute-0 ceph-mgr[74542]: [balancer INFO root] do_upmap
Oct 11 04:50:56 compute-0 ceph-mgr[74542]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'vms', 'backups', 'images', 'volumes', 'default.rgw.control', 'default.rgw.log', 'default.rgw.meta', '.rgw.root']
Oct 11 04:50:56 compute-0 ceph-mgr[74542]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:50:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:50:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:50:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:50:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:50:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:50:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:50:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:50:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:50:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:50:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:50:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:50:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:50:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:50:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:50:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:50:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:50:56 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v855: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:50:57 compute-0 ceph-mon[74243]: pgmap v855: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:50:58 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v856: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:50:58 compute-0 ceph-mon[74243]: pgmap v856: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:51:00 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:51:00 compute-0 podman[265632]: 2025-10-11 04:51:00.451427745 +0000 UTC m=+0.097344596 container health_status 7d738271425699243ade5f883e5c9759d51b96fd0ce562173990a66d2fbaffdf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 11 04:51:00 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v857: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:51:01 compute-0 ceph-mon[74243]: pgmap v857: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:51:02 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v858: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:51:02 compute-0 ceph-mon[74243]: pgmap v858: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:51:02 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:51:02 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2353149711' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:51:02 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:51:02 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2353149711' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:51:03 compute-0 ceph-mon[74243]: from='client.? 192.168.122.10:0/2353149711' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:51:03 compute-0 ceph-mon[74243]: from='client.? 192.168.122.10:0/2353149711' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:51:04 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v859: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:51:04 compute-0 ceph-mon[74243]: pgmap v859: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:51:05 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:51:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:51:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:51:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:51:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:51:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:51:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:51:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:51:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:51:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:51:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:51:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:51:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:51:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 11 04:51:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:51:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:51:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:51:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:51:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:51:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:51:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:51:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:51:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:51:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 04:51:06 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v860: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:51:07 compute-0 ceph-mon[74243]: pgmap v860: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:51:08 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v861: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:51:08 compute-0 ceph-mon[74243]: pgmap v861: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:51:10 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:51:10 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v862: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:51:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:51:11.015 161813 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:51:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:51:11.015 161813 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:51:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:51:11.016 161813 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:51:11 compute-0 nova_compute[259400]: 2025-10-11 04:51:11.198 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:51:11 compute-0 nova_compute[259400]: 2025-10-11 04:51:11.198 2 DEBUG nova.compute.manager [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 11 04:51:11 compute-0 nova_compute[259400]: 2025-10-11 04:51:11.226 2 DEBUG nova.compute.manager [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 11 04:51:11 compute-0 nova_compute[259400]: 2025-10-11 04:51:11.229 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:51:11 compute-0 nova_compute[259400]: 2025-10-11 04:51:11.229 2 DEBUG nova.compute.manager [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 11 04:51:11 compute-0 nova_compute[259400]: 2025-10-11 04:51:11.250 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:51:11 compute-0 ceph-mon[74243]: pgmap v862: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:51:12 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v863: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:51:12 compute-0 ceph-mon[74243]: pgmap v863: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:51:13 compute-0 nova_compute[259400]: 2025-10-11 04:51:13.265 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:51:13 compute-0 nova_compute[259400]: 2025-10-11 04:51:13.265 2 DEBUG nova.compute.manager [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 04:51:13 compute-0 nova_compute[259400]: 2025-10-11 04:51:13.265 2 DEBUG nova.compute.manager [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 11 04:51:13 compute-0 nova_compute[259400]: 2025-10-11 04:51:13.287 2 DEBUG nova.compute.manager [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 11 04:51:13 compute-0 nova_compute[259400]: 2025-10-11 04:51:13.288 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:51:13 compute-0 nova_compute[259400]: 2025-10-11 04:51:13.288 2 DEBUG nova.compute.manager [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 04:51:13 compute-0 nova_compute[259400]: 2025-10-11 04:51:13.289 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:51:13 compute-0 nova_compute[259400]: 2025-10-11 04:51:13.330 2 DEBUG oslo_concurrency.lockutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:51:13 compute-0 nova_compute[259400]: 2025-10-11 04:51:13.330 2 DEBUG oslo_concurrency.lockutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:51:13 compute-0 nova_compute[259400]: 2025-10-11 04:51:13.331 2 DEBUG oslo_concurrency.lockutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:51:13 compute-0 nova_compute[259400]: 2025-10-11 04:51:13.331 2 DEBUG nova.compute.resource_tracker [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 04:51:13 compute-0 nova_compute[259400]: 2025-10-11 04:51:13.331 2 DEBUG oslo_concurrency.processutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:51:13 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:51:13 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2979217250' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:51:13 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/2979217250' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:51:13 compute-0 nova_compute[259400]: 2025-10-11 04:51:13.752 2 DEBUG oslo_concurrency.processutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:51:13 compute-0 nova_compute[259400]: 2025-10-11 04:51:13.925 2 WARNING nova.virt.libvirt.driver [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 04:51:13 compute-0 nova_compute[259400]: 2025-10-11 04:51:13.927 2 DEBUG nova.compute.resource_tracker [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5176MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 04:51:13 compute-0 nova_compute[259400]: 2025-10-11 04:51:13.928 2 DEBUG oslo_concurrency.lockutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:51:13 compute-0 nova_compute[259400]: 2025-10-11 04:51:13.928 2 DEBUG oslo_concurrency.lockutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:51:14 compute-0 nova_compute[259400]: 2025-10-11 04:51:14.229 2 DEBUG nova.compute.resource_tracker [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 04:51:14 compute-0 nova_compute[259400]: 2025-10-11 04:51:14.230 2 DEBUG nova.compute.resource_tracker [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 04:51:14 compute-0 nova_compute[259400]: 2025-10-11 04:51:14.366 2 DEBUG oslo_concurrency.processutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:51:14 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v864: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:51:14 compute-0 ceph-mon[74243]: pgmap v864: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:51:14 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:51:14 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2745745091' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:51:14 compute-0 nova_compute[259400]: 2025-10-11 04:51:14.828 2 DEBUG oslo_concurrency.processutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:51:14 compute-0 nova_compute[259400]: 2025-10-11 04:51:14.834 2 DEBUG nova.compute.provider_tree [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Inventory has not changed in ProviderTree for provider: 1f05a244-23b6-4149-9b5a-a525e5860d18 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:51:14 compute-0 nova_compute[259400]: 2025-10-11 04:51:14.850 2 DEBUG nova.scheduler.client.report [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Inventory has not changed for provider 1f05a244-23b6-4149-9b5a-a525e5860d18 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:51:14 compute-0 nova_compute[259400]: 2025-10-11 04:51:14.852 2 DEBUG nova.compute.resource_tracker [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 04:51:14 compute-0 nova_compute[259400]: 2025-10-11 04:51:14.852 2 DEBUG oslo_concurrency.lockutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.924s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:51:15 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:51:15 compute-0 nova_compute[259400]: 2025-10-11 04:51:15.761 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:51:15 compute-0 nova_compute[259400]: 2025-10-11 04:51:15.762 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:51:15 compute-0 nova_compute[259400]: 2025-10-11 04:51:15.762 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:51:15 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/2745745091' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:51:16 compute-0 nova_compute[259400]: 2025-10-11 04:51:16.197 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:51:16 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v865: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:51:16 compute-0 ceph-mon[74243]: pgmap v865: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:51:17 compute-0 nova_compute[259400]: 2025-10-11 04:51:17.195 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:51:18 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v866: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:51:18 compute-0 ceph-mon[74243]: pgmap v866: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:51:19 compute-0 nova_compute[259400]: 2025-10-11 04:51:19.191 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:51:20 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:51:20 compute-0 podman[265696]: 2025-10-11 04:51:20.420768149 +0000 UTC m=+0.062871101 container health_status 4f5a84ad32cb899a70b226fbd9c8f419e9440a5ac99b188805a8bf8e9253a2d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, container_name=iscsid, io.buildah.version=1.41.3)
Oct 11 04:51:20 compute-0 podman[265697]: 2025-10-11 04:51:20.442172797 +0000 UTC m=+0.070593335 container health_status 981a12d55b51d26e636fa4bc8b08d383168dc6afe664f12473554b2b0c589967 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251009, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 11 04:51:20 compute-0 podman[265695]: 2025-10-11 04:51:20.477299609 +0000 UTC m=+0.123777911 container health_status 086abfbe8dbe9e4742c5d0e8540a7653c1c0a5c00ecda3b6e948040a718938cb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 11 04:51:20 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v867: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:51:20 compute-0 ceph-mon[74243]: pgmap v867: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:51:22 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v868: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:51:22 compute-0 ceph-mon[74243]: pgmap v868: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:51:24 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v869: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:51:24 compute-0 ceph-mon[74243]: pgmap v869: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:51:25 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:51:25 compute-0 ceph-mon[74243]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #39. Immutable memtables: 0.
Oct 11 04:51:25 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:51:25.058979) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 04:51:25 compute-0 ceph-mon[74243]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 39
Oct 11 04:51:25 compute-0 ceph-mon[74243]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760158285059037, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 2040, "num_deletes": 250, "total_data_size": 3432479, "memory_usage": 3491496, "flush_reason": "Manual Compaction"}
Oct 11 04:51:25 compute-0 ceph-mon[74243]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #40: started
Oct 11 04:51:25 compute-0 ceph-mon[74243]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760158285073181, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 40, "file_size": 1936557, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16287, "largest_seqno": 18326, "table_properties": {"data_size": 1930080, "index_size": 3359, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16500, "raw_average_key_size": 20, "raw_value_size": 1915595, "raw_average_value_size": 2350, "num_data_blocks": 156, "num_entries": 815, "num_filter_entries": 815, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760158055, "oldest_key_time": 1760158055, "file_creation_time": 1760158285, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7a520806-b9ee-4391-a2e1-17ca2b78e946", "db_session_id": "LQX577T3ABHDCRGGY4EM", "orig_file_number": 40, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:51:25 compute-0 ceph-mon[74243]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 14259 microseconds, and 9073 cpu microseconds.
Oct 11 04:51:25 compute-0 ceph-mon[74243]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 04:51:25 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:51:25.073239) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #40: 1936557 bytes OK
Oct 11 04:51:25 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:51:25.073263) [db/memtable_list.cc:519] [default] Level-0 commit table #40 started
Oct 11 04:51:25 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:51:25.075400) [db/memtable_list.cc:722] [default] Level-0 commit table #40: memtable #1 done
Oct 11 04:51:25 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:51:25.075419) EVENT_LOG_v1 {"time_micros": 1760158285075413, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 04:51:25 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:51:25.075442) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 04:51:25 compute-0 ceph-mon[74243]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 3423949, prev total WAL file size 3435260, number of live WAL files 2.
Oct 11 04:51:25 compute-0 ceph-mon[74243]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000036.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:51:25 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:51:25.077258) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353030' seq:72057594037927935, type:22 .. '6D67727374617400373531' seq:0, type:0; will stop at (end)
Oct 11 04:51:25 compute-0 ceph-mon[74243]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 04:51:25 compute-0 ceph-mon[74243]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [40(1891KB)], [38(7597KB)]
Oct 11 04:51:25 compute-0 ceph-mon[74243]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760158285077300, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [40], "files_L6": [38], "score": -1, "input_data_size": 9716581, "oldest_snapshot_seqno": -1}
Oct 11 04:51:25 compute-0 ceph-mon[74243]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #41: 4390 keys, 7846062 bytes, temperature: kUnknown
Oct 11 04:51:25 compute-0 ceph-mon[74243]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760158285126791, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 41, "file_size": 7846062, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7815932, "index_size": 18052, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11013, "raw_key_size": 105858, "raw_average_key_size": 24, "raw_value_size": 7735833, "raw_average_value_size": 1762, "num_data_blocks": 770, "num_entries": 4390, "num_filter_entries": 4390, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760156706, "oldest_key_time": 0, "file_creation_time": 1760158285, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7a520806-b9ee-4391-a2e1-17ca2b78e946", "db_session_id": "LQX577T3ABHDCRGGY4EM", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:51:25 compute-0 ceph-mon[74243]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 04:51:25 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:51:25.127126) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 7846062 bytes
Oct 11 04:51:25 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:51:25.128875) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 195.9 rd, 158.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 7.4 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(9.1) write-amplify(4.1) OK, records in: 4793, records dropped: 403 output_compression: NoCompression
Oct 11 04:51:25 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:51:25.128906) EVENT_LOG_v1 {"time_micros": 1760158285128891, "job": 18, "event": "compaction_finished", "compaction_time_micros": 49589, "compaction_time_cpu_micros": 32193, "output_level": 6, "num_output_files": 1, "total_output_size": 7846062, "num_input_records": 4793, "num_output_records": 4390, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 04:51:25 compute-0 ceph-mon[74243]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000040.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:51:25 compute-0 ceph-mon[74243]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760158285129701, "job": 18, "event": "table_file_deletion", "file_number": 40}
Oct 11 04:51:25 compute-0 ceph-mon[74243]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:51:25 compute-0 ceph-mon[74243]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760158285132558, "job": 18, "event": "table_file_deletion", "file_number": 38}
Oct 11 04:51:25 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:51:25.077159) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:51:25 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:51:25.132604) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:51:25 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:51:25.132610) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:51:25 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:51:25.132614) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:51:25 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:51:25.132617) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:51:25 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:51:25.132620) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:51:25 compute-0 ceph-mon[74243]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #42. Immutable memtables: 0.
Oct 11 04:51:25 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:51:25.133048) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 04:51:25 compute-0 ceph-mon[74243]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 42
Oct 11 04:51:25 compute-0 ceph-mon[74243]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760158285133117, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 256, "num_deletes": 251, "total_data_size": 13330, "memory_usage": 19624, "flush_reason": "Manual Compaction"}
Oct 11 04:51:25 compute-0 ceph-mon[74243]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #43: started
Oct 11 04:51:25 compute-0 ceph-mon[74243]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760158285135580, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 43, "file_size": 13302, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18327, "largest_seqno": 18582, "table_properties": {"data_size": 11549, "index_size": 50, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 645, "raw_key_size": 4640, "raw_average_key_size": 18, "raw_value_size": 8177, "raw_average_value_size": 31, "num_data_blocks": 2, "num_entries": 256, "num_filter_entries": 256, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760158285, "oldest_key_time": 1760158285, "file_creation_time": 1760158285, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7a520806-b9ee-4391-a2e1-17ca2b78e946", "db_session_id": "LQX577T3ABHDCRGGY4EM", "orig_file_number": 43, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:51:25 compute-0 ceph-mon[74243]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 2574 microseconds, and 1040 cpu microseconds.
Oct 11 04:51:25 compute-0 ceph-mon[74243]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 04:51:25 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:51:25.135630) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #43: 13302 bytes OK
Oct 11 04:51:25 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:51:25.135652) [db/memtable_list.cc:519] [default] Level-0 commit table #43 started
Oct 11 04:51:25 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:51:25.137426) [db/memtable_list.cc:722] [default] Level-0 commit table #43: memtable #1 done
Oct 11 04:51:25 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:51:25.137447) EVENT_LOG_v1 {"time_micros": 1760158285137440, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 04:51:25 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:51:25.137468) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 04:51:25 compute-0 ceph-mon[74243]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 11311, prev total WAL file size 11311, number of live WAL files 2.
Oct 11 04:51:25 compute-0 ceph-mon[74243]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000039.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:51:25 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:51:25.137925) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Oct 11 04:51:25 compute-0 ceph-mon[74243]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 04:51:25 compute-0 ceph-mon[74243]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [43(12KB)], [41(7662KB)]
Oct 11 04:51:25 compute-0 ceph-mon[74243]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760158285137963, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [43], "files_L6": [41], "score": -1, "input_data_size": 7859364, "oldest_snapshot_seqno": -1}
Oct 11 04:51:25 compute-0 ceph-mon[74243]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #44: 4140 keys, 6103670 bytes, temperature: kUnknown
Oct 11 04:51:25 compute-0 ceph-mon[74243]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760158285180938, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 44, "file_size": 6103670, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6076953, "index_size": 15280, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10373, "raw_key_size": 101358, "raw_average_key_size": 24, "raw_value_size": 6002904, "raw_average_value_size": 1449, "num_data_blocks": 644, "num_entries": 4140, "num_filter_entries": 4140, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760156706, "oldest_key_time": 0, "file_creation_time": 1760158285, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7a520806-b9ee-4391-a2e1-17ca2b78e946", "db_session_id": "LQX577T3ABHDCRGGY4EM", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:51:25 compute-0 ceph-mon[74243]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 04:51:25 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:51:25.181142) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 6103670 bytes
Oct 11 04:51:25 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:51:25.182274) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 182.5 rd, 141.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.0, 7.5 +0.0 blob) out(5.8 +0.0 blob), read-write-amplify(1049.7) write-amplify(458.9) OK, records in: 4646, records dropped: 506 output_compression: NoCompression
Oct 11 04:51:25 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:51:25.182292) EVENT_LOG_v1 {"time_micros": 1760158285182283, "job": 20, "event": "compaction_finished", "compaction_time_micros": 43056, "compaction_time_cpu_micros": 29990, "output_level": 6, "num_output_files": 1, "total_output_size": 6103670, "num_input_records": 4646, "num_output_records": 4140, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 04:51:25 compute-0 ceph-mon[74243]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000043.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:51:25 compute-0 ceph-mon[74243]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760158285182391, "job": 20, "event": "table_file_deletion", "file_number": 43}
Oct 11 04:51:25 compute-0 ceph-mon[74243]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:51:25 compute-0 ceph-mon[74243]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760158285183459, "job": 20, "event": "table_file_deletion", "file_number": 41}
Oct 11 04:51:25 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:51:25.137868) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:51:25 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:51:25.183545) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:51:25 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:51:25.183553) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:51:25 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:51:25.183556) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:51:25 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:51:25.183559) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:51:25 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:51:25.183562) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:51:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:51:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:51:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:51:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:51:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:51:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:51:26 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v870: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:51:26 compute-0 ceph-mon[74243]: pgmap v870: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:51:28 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v871: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:51:28 compute-0 ceph-mon[74243]: pgmap v871: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:51:30 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:51:30 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v872: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:51:30 compute-0 ceph-mon[74243]: pgmap v872: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:51:31 compute-0 podman[265762]: 2025-10-11 04:51:31.42957456 +0000 UTC m=+0.082977166 container health_status 7d738271425699243ade5f883e5c9759d51b96fd0ce562173990a66d2fbaffdf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:51:32 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v873: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:51:32 compute-0 ceph-mon[74243]: pgmap v873: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:51:34 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v874: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:51:34 compute-0 ceph-mon[74243]: pgmap v874: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:51:35 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:51:36 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v875: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:51:36 compute-0 ceph-mon[74243]: pgmap v875: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:51:38 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v876: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:51:38 compute-0 ceph-mon[74243]: pgmap v876: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:51:40 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:51:40 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v877: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:51:40 compute-0 ceph-mon[74243]: pgmap v877: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:51:42 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v878: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:51:42 compute-0 ceph-mon[74243]: pgmap v878: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:51:44 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v879: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:51:44 compute-0 ceph-mon[74243]: pgmap v879: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:51:45 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:51:46 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v880: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:51:46 compute-0 ceph-mon[74243]: pgmap v880: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:51:48 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v881: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:51:48 compute-0 ceph-mon[74243]: pgmap v881: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:51:50 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:51:50 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v882: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:51:50 compute-0 ceph-mon[74243]: pgmap v882: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:51:51 compute-0 podman[265783]: 2025-10-11 04:51:51.451705988 +0000 UTC m=+0.090720180 container health_status 4f5a84ad32cb899a70b226fbd9c8f419e9440a5ac99b188805a8bf8e9253a2d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 11 04:51:51 compute-0 podman[265784]: 2025-10-11 04:51:51.460387316 +0000 UTC m=+0.093791007 container health_status 981a12d55b51d26e636fa4bc8b08d383168dc6afe664f12473554b2b0c589967 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:51:51 compute-0 podman[265782]: 2025-10-11 04:51:51.494070172 +0000 UTC m=+0.138877630 container health_status 086abfbe8dbe9e4742c5d0e8540a7653c1c0a5c00ecda3b6e948040a718938cb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:51:52 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v883: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:51:52 compute-0 ceph-mon[74243]: pgmap v883: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:51:54 compute-0 sudo[265845]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:51:54 compute-0 sudo[265845]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:51:54 compute-0 sudo[265845]: pam_unix(sudo:session): session closed for user root
Oct 11 04:51:54 compute-0 sudo[265870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:51:54 compute-0 sudo[265870]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:51:54 compute-0 sudo[265870]: pam_unix(sudo:session): session closed for user root
Oct 11 04:51:54 compute-0 sudo[265895]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:51:54 compute-0 sudo[265895]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:51:54 compute-0 sudo[265895]: pam_unix(sudo:session): session closed for user root
Oct 11 04:51:54 compute-0 sudo[265920]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 04:51:54 compute-0 sudo[265920]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:51:54 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v884: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:51:54 compute-0 ceph-mon[74243]: pgmap v884: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:51:54 compute-0 sudo[265920]: pam_unix(sudo:session): session closed for user root
Oct 11 04:51:54 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:51:54 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:51:54 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:51:54 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:51:54 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:51:54 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:51:54 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev 94baa188-92ef-4674-b1bb-980ca1b6158c does not exist
Oct 11 04:51:54 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev 2d4759ad-404f-4acd-91ef-677ab61256e9 does not exist
Oct 11 04:51:54 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev a8c02cbf-6e2b-4584-ae08-926049892217 does not exist
Oct 11 04:51:54 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:51:54 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:51:54 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:51:54 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:51:54 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:51:54 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:51:55 compute-0 sudo[265977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:51:55 compute-0 sudo[265977]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:51:55 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:51:55 compute-0 sudo[265977]: pam_unix(sudo:session): session closed for user root
Oct 11 04:51:55 compute-0 sudo[266002]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:51:55 compute-0 sudo[266002]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:51:55 compute-0 sudo[266002]: pam_unix(sudo:session): session closed for user root
Oct 11 04:51:55 compute-0 sudo[266027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:51:55 compute-0 sudo[266027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:51:55 compute-0 sudo[266027]: pam_unix(sudo:session): session closed for user root
Oct 11 04:51:55 compute-0 sudo[266052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 04:51:55 compute-0 sudo[266052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:51:55 compute-0 podman[266118]: 2025-10-11 04:51:55.738790258 +0000 UTC m=+0.064530662 container create 6668c65ab728d56d707d3dfcaa8c5943aadf257cacc3ea905679219d9f7b9299 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_perlman, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 11 04:51:55 compute-0 systemd[1]: Started libpod-conmon-6668c65ab728d56d707d3dfcaa8c5943aadf257cacc3ea905679219d9f7b9299.scope.
Oct 11 04:51:55 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:51:55 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:51:55 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:51:55 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:51:55 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:51:55 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:51:55 compute-0 podman[266118]: 2025-10-11 04:51:55.712384704 +0000 UTC m=+0.038125168 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:51:55 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:51:55 compute-0 podman[266118]: 2025-10-11 04:51:55.841479168 +0000 UTC m=+0.167219652 container init 6668c65ab728d56d707d3dfcaa8c5943aadf257cacc3ea905679219d9f7b9299 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_perlman, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:51:55 compute-0 podman[266118]: 2025-10-11 04:51:55.852260198 +0000 UTC m=+0.178000592 container start 6668c65ab728d56d707d3dfcaa8c5943aadf257cacc3ea905679219d9f7b9299 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_perlman, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 11 04:51:55 compute-0 podman[266118]: 2025-10-11 04:51:55.855762936 +0000 UTC m=+0.181503370 container attach 6668c65ab728d56d707d3dfcaa8c5943aadf257cacc3ea905679219d9f7b9299 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_perlman, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:51:55 compute-0 reverent_perlman[266134]: 167 167
Oct 11 04:51:55 compute-0 systemd[1]: libpod-6668c65ab728d56d707d3dfcaa8c5943aadf257cacc3ea905679219d9f7b9299.scope: Deactivated successfully.
Oct 11 04:51:55 compute-0 podman[266118]: 2025-10-11 04:51:55.862111036 +0000 UTC m=+0.187851450 container died 6668c65ab728d56d707d3dfcaa8c5943aadf257cacc3ea905679219d9f7b9299 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_perlman, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:51:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-25deb04729d289d14d0e6cd7fd1b8e09194849e9b8d0e72aaa1533c98e8b5c87-merged.mount: Deactivated successfully.
Oct 11 04:51:55 compute-0 podman[266118]: 2025-10-11 04:51:55.922555724 +0000 UTC m=+0.248296158 container remove 6668c65ab728d56d707d3dfcaa8c5943aadf257cacc3ea905679219d9f7b9299 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_perlman, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:51:55 compute-0 systemd[1]: libpod-conmon-6668c65ab728d56d707d3dfcaa8c5943aadf257cacc3ea905679219d9f7b9299.scope: Deactivated successfully.
Oct 11 04:51:55 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Oct 11 04:51:55 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Oct 11 04:51:55 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Oct 11 04:51:56 compute-0 ceph-mgr[74542]: [balancer INFO root] Optimize plan auto_2025-10-11_04:51:56
Oct 11 04:51:56 compute-0 ceph-mgr[74542]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:51:56 compute-0 ceph-mgr[74542]: [balancer INFO root] do_upmap
Oct 11 04:51:56 compute-0 ceph-mgr[74542]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.meta', 'backups', 'volumes', '.mgr', 'default.rgw.log', 'vms', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.meta', 'images']
Oct 11 04:51:56 compute-0 ceph-mgr[74542]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:51:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:51:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:51:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:51:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:51:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:51:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:51:56 compute-0 podman[266157]: 2025-10-11 04:51:56.173831987 +0000 UTC m=+0.067607769 container create ad1941f950cb81b54116a8358147540a0faad799f309c872d4c35d8e288efef9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_thompson, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 11 04:51:56 compute-0 systemd[1]: Started libpod-conmon-ad1941f950cb81b54116a8358147540a0faad799f309c872d4c35d8e288efef9.scope.
Oct 11 04:51:56 compute-0 podman[266157]: 2025-10-11 04:51:56.149238329 +0000 UTC m=+0.043014191 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:51:56 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:51:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97c3c754d93196a533b69ddf023451bb75fc654dbe7c87592eb9fd57144de72d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:51:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97c3c754d93196a533b69ddf023451bb75fc654dbe7c87592eb9fd57144de72d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:51:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97c3c754d93196a533b69ddf023451bb75fc654dbe7c87592eb9fd57144de72d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:51:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97c3c754d93196a533b69ddf023451bb75fc654dbe7c87592eb9fd57144de72d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:51:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97c3c754d93196a533b69ddf023451bb75fc654dbe7c87592eb9fd57144de72d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:51:56 compute-0 podman[266157]: 2025-10-11 04:51:56.286052446 +0000 UTC m=+0.179828268 container init ad1941f950cb81b54116a8358147540a0faad799f309c872d4c35d8e288efef9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_thompson, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:51:56 compute-0 podman[266157]: 2025-10-11 04:51:56.303950786 +0000 UTC m=+0.197726558 container start ad1941f950cb81b54116a8358147540a0faad799f309c872d4c35d8e288efef9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:51:56 compute-0 podman[266157]: 2025-10-11 04:51:56.306872739 +0000 UTC m=+0.200648511 container attach ad1941f950cb81b54116a8358147540a0faad799f309c872d4c35d8e288efef9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Oct 11 04:51:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:51:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:51:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:51:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:51:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:51:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:51:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:51:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:51:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:51:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:51:56 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v886: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:51:56 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Oct 11 04:51:56 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Oct 11 04:51:56 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Oct 11 04:51:56 compute-0 ceph-mon[74243]: osdmap e115: 3 total, 3 up, 3 in
Oct 11 04:51:56 compute-0 ceph-mon[74243]: pgmap v886: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:51:57 compute-0 heuristic_thompson[266174]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:51:57 compute-0 heuristic_thompson[266174]: --> relative data size: 1.0
Oct 11 04:51:57 compute-0 heuristic_thompson[266174]: --> All data devices are unavailable
Oct 11 04:51:57 compute-0 systemd[1]: libpod-ad1941f950cb81b54116a8358147540a0faad799f309c872d4c35d8e288efef9.scope: Deactivated successfully.
Oct 11 04:51:57 compute-0 podman[266157]: 2025-10-11 04:51:57.35179719 +0000 UTC m=+1.245573002 container died ad1941f950cb81b54116a8358147540a0faad799f309c872d4c35d8e288efef9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_thompson, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:51:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-97c3c754d93196a533b69ddf023451bb75fc654dbe7c87592eb9fd57144de72d-merged.mount: Deactivated successfully.
Oct 11 04:51:57 compute-0 podman[266157]: 2025-10-11 04:51:57.420861385 +0000 UTC m=+1.314637167 container remove ad1941f950cb81b54116a8358147540a0faad799f309c872d4c35d8e288efef9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_thompson, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 11 04:51:57 compute-0 systemd[1]: libpod-conmon-ad1941f950cb81b54116a8358147540a0faad799f309c872d4c35d8e288efef9.scope: Deactivated successfully.
Oct 11 04:51:57 compute-0 sudo[266052]: pam_unix(sudo:session): session closed for user root
Oct 11 04:51:57 compute-0 sudo[266213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:51:57 compute-0 sudo[266213]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:51:57 compute-0 sudo[266213]: pam_unix(sudo:session): session closed for user root
Oct 11 04:51:57 compute-0 sudo[266238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:51:57 compute-0 sudo[266238]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:51:57 compute-0 sudo[266238]: pam_unix(sudo:session): session closed for user root
Oct 11 04:51:57 compute-0 sudo[266263]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:51:57 compute-0 sudo[266263]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:51:57 compute-0 sudo[266263]: pam_unix(sudo:session): session closed for user root
Oct 11 04:51:57 compute-0 sudo[266288]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -- lvm list --format json
Oct 11 04:51:57 compute-0 sudo[266288]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:51:57 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Oct 11 04:51:57 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e117 e117: 3 total, 3 up, 3 in
Oct 11 04:51:57 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e117: 3 total, 3 up, 3 in
Oct 11 04:51:58 compute-0 ceph-mon[74243]: osdmap e116: 3 total, 3 up, 3 in
Oct 11 04:51:58 compute-0 podman[266354]: 2025-10-11 04:51:58.129134778 +0000 UTC m=+0.054943411 container create c82921bfc18d92d43993e17a9810cd6c06b0124f7abe7cf2cc223a2e03cec4e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_haslett, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 11 04:51:58 compute-0 systemd[1]: Started libpod-conmon-c82921bfc18d92d43993e17a9810cd6c06b0124f7abe7cf2cc223a2e03cec4e5.scope.
Oct 11 04:51:58 compute-0 podman[266354]: 2025-10-11 04:51:58.098728644 +0000 UTC m=+0.024537327 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:51:58 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:51:58 compute-0 podman[266354]: 2025-10-11 04:51:58.208437509 +0000 UTC m=+0.134246162 container init c82921bfc18d92d43993e17a9810cd6c06b0124f7abe7cf2cc223a2e03cec4e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_haslett, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default)
Oct 11 04:51:58 compute-0 podman[266354]: 2025-10-11 04:51:58.214700617 +0000 UTC m=+0.140509240 container start c82921bfc18d92d43993e17a9810cd6c06b0124f7abe7cf2cc223a2e03cec4e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_haslett, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:51:58 compute-0 podman[266354]: 2025-10-11 04:51:58.218589324 +0000 UTC m=+0.144397947 container attach c82921bfc18d92d43993e17a9810cd6c06b0124f7abe7cf2cc223a2e03cec4e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_haslett, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:51:58 compute-0 trusting_haslett[266370]: 167 167
Oct 11 04:51:58 compute-0 systemd[1]: libpod-c82921bfc18d92d43993e17a9810cd6c06b0124f7abe7cf2cc223a2e03cec4e5.scope: Deactivated successfully.
Oct 11 04:51:58 compute-0 podman[266354]: 2025-10-11 04:51:58.223605851 +0000 UTC m=+0.149414444 container died c82921bfc18d92d43993e17a9810cd6c06b0124f7abe7cf2cc223a2e03cec4e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_haslett, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 11 04:51:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-b4cecdd1b3a2ec76978d841e49531980b3d78489cb8ec259d487a815fcf500fc-merged.mount: Deactivated successfully.
Oct 11 04:51:58 compute-0 podman[266354]: 2025-10-11 04:51:58.26499308 +0000 UTC m=+0.190801673 container remove c82921bfc18d92d43993e17a9810cd6c06b0124f7abe7cf2cc223a2e03cec4e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_haslett, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 11 04:51:58 compute-0 systemd[1]: libpod-conmon-c82921bfc18d92d43993e17a9810cd6c06b0124f7abe7cf2cc223a2e03cec4e5.scope: Deactivated successfully.
Oct 11 04:51:58 compute-0 podman[266392]: 2025-10-11 04:51:58.449665249 +0000 UTC m=+0.042581220 container create 45361b211fdb2949f7f44c920ae09d372bcc84196c956e1244284396203a7c0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_dewdney, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:51:58 compute-0 systemd[1]: Started libpod-conmon-45361b211fdb2949f7f44c920ae09d372bcc84196c956e1244284396203a7c0c.scope.
Oct 11 04:51:58 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:51:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55fa84c1ca9243836b553d87b92277a14b5e8a08807c93b6e3dcb6da327e0784/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:51:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55fa84c1ca9243836b553d87b92277a14b5e8a08807c93b6e3dcb6da327e0784/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:51:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55fa84c1ca9243836b553d87b92277a14b5e8a08807c93b6e3dcb6da327e0784/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:51:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55fa84c1ca9243836b553d87b92277a14b5e8a08807c93b6e3dcb6da327e0784/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:51:58 compute-0 podman[266392]: 2025-10-11 04:51:58.52809495 +0000 UTC m=+0.121010951 container init 45361b211fdb2949f7f44c920ae09d372bcc84196c956e1244284396203a7c0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_dewdney, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:51:58 compute-0 podman[266392]: 2025-10-11 04:51:58.433638157 +0000 UTC m=+0.026554148 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:51:58 compute-0 podman[266392]: 2025-10-11 04:51:58.539124287 +0000 UTC m=+0.132040268 container start 45361b211fdb2949f7f44c920ae09d372bcc84196c956e1244284396203a7c0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_dewdney, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 11 04:51:58 compute-0 podman[266392]: 2025-10-11 04:51:58.542944993 +0000 UTC m=+0.135861004 container attach 45361b211fdb2949f7f44c920ae09d372bcc84196c956e1244284396203a7c0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_dewdney, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:51:58 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v889: 305 pgs: 305 active+clean; 13 MiB data, 161 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 2.1 MiB/s wr, 1 op/s
Oct 11 04:51:59 compute-0 ceph-mon[74243]: osdmap e117: 3 total, 3 up, 3 in
Oct 11 04:51:59 compute-0 ceph-mon[74243]: pgmap v889: 305 pgs: 305 active+clean; 13 MiB data, 161 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 2.1 MiB/s wr, 1 op/s
Oct 11 04:51:59 compute-0 naughty_dewdney[266409]: {
Oct 11 04:51:59 compute-0 naughty_dewdney[266409]:     "0": [
Oct 11 04:51:59 compute-0 naughty_dewdney[266409]:         {
Oct 11 04:51:59 compute-0 naughty_dewdney[266409]:             "devices": [
Oct 11 04:51:59 compute-0 naughty_dewdney[266409]:                 "/dev/loop3"
Oct 11 04:51:59 compute-0 naughty_dewdney[266409]:             ],
Oct 11 04:51:59 compute-0 naughty_dewdney[266409]:             "lv_name": "ceph_lv0",
Oct 11 04:51:59 compute-0 naughty_dewdney[266409]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:51:59 compute-0 naughty_dewdney[266409]:             "lv_size": "21470642176",
Oct 11 04:51:59 compute-0 naughty_dewdney[266409]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29ed28f5-c2da-4c6f-bb64-dc7391248f4a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:51:59 compute-0 naughty_dewdney[266409]:             "lv_uuid": "SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf",
Oct 11 04:51:59 compute-0 naughty_dewdney[266409]:             "name": "ceph_lv0",
Oct 11 04:51:59 compute-0 naughty_dewdney[266409]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:51:59 compute-0 naughty_dewdney[266409]:             "tags": {
Oct 11 04:51:59 compute-0 naughty_dewdney[266409]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:51:59 compute-0 naughty_dewdney[266409]:                 "ceph.block_uuid": "SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf",
Oct 11 04:51:59 compute-0 naughty_dewdney[266409]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:51:59 compute-0 naughty_dewdney[266409]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:51:59 compute-0 naughty_dewdney[266409]:                 "ceph.cluster_name": "ceph",
Oct 11 04:51:59 compute-0 naughty_dewdney[266409]:                 "ceph.crush_device_class": "",
Oct 11 04:51:59 compute-0 naughty_dewdney[266409]:                 "ceph.encrypted": "0",
Oct 11 04:51:59 compute-0 naughty_dewdney[266409]:                 "ceph.osd_fsid": "29ed28f5-c2da-4c6f-bb64-dc7391248f4a",
Oct 11 04:51:59 compute-0 naughty_dewdney[266409]:                 "ceph.osd_id": "0",
Oct 11 04:51:59 compute-0 naughty_dewdney[266409]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:51:59 compute-0 naughty_dewdney[266409]:                 "ceph.type": "block",
Oct 11 04:51:59 compute-0 naughty_dewdney[266409]:                 "ceph.vdo": "0"
Oct 11 04:51:59 compute-0 naughty_dewdney[266409]:             },
Oct 11 04:51:59 compute-0 naughty_dewdney[266409]:             "type": "block",
Oct 11 04:51:59 compute-0 naughty_dewdney[266409]:             "vg_name": "ceph_vg0"
Oct 11 04:51:59 compute-0 naughty_dewdney[266409]:         }
Oct 11 04:51:59 compute-0 naughty_dewdney[266409]:     ],
Oct 11 04:51:59 compute-0 naughty_dewdney[266409]:     "1": [
Oct 11 04:51:59 compute-0 naughty_dewdney[266409]:         {
Oct 11 04:51:59 compute-0 naughty_dewdney[266409]:             "devices": [
Oct 11 04:51:59 compute-0 naughty_dewdney[266409]:                 "/dev/loop4"
Oct 11 04:51:59 compute-0 naughty_dewdney[266409]:             ],
Oct 11 04:51:59 compute-0 naughty_dewdney[266409]:             "lv_name": "ceph_lv1",
Oct 11 04:51:59 compute-0 naughty_dewdney[266409]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:51:59 compute-0 naughty_dewdney[266409]:             "lv_size": "21470642176",
Oct 11 04:51:59 compute-0 naughty_dewdney[266409]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=235849fc-4683-43e5-9b6a-a0d6f8d1cee8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:51:59 compute-0 naughty_dewdney[266409]:             "lv_uuid": "N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol",
Oct 11 04:51:59 compute-0 naughty_dewdney[266409]:             "name": "ceph_lv1",
Oct 11 04:51:59 compute-0 naughty_dewdney[266409]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:51:59 compute-0 naughty_dewdney[266409]:             "tags": {
Oct 11 04:51:59 compute-0 naughty_dewdney[266409]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:51:59 compute-0 naughty_dewdney[266409]:                 "ceph.block_uuid": "N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol",
Oct 11 04:51:59 compute-0 naughty_dewdney[266409]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:51:59 compute-0 naughty_dewdney[266409]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:51:59 compute-0 naughty_dewdney[266409]:                 "ceph.cluster_name": "ceph",
Oct 11 04:51:59 compute-0 naughty_dewdney[266409]:                 "ceph.crush_device_class": "",
Oct 11 04:51:59 compute-0 naughty_dewdney[266409]:                 "ceph.encrypted": "0",
Oct 11 04:51:59 compute-0 naughty_dewdney[266409]:                 "ceph.osd_fsid": "235849fc-4683-43e5-9b6a-a0d6f8d1cee8",
Oct 11 04:51:59 compute-0 naughty_dewdney[266409]:                 "ceph.osd_id": "1",
Oct 11 04:51:59 compute-0 naughty_dewdney[266409]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:51:59 compute-0 naughty_dewdney[266409]:                 "ceph.type": "block",
Oct 11 04:51:59 compute-0 naughty_dewdney[266409]:                 "ceph.vdo": "0"
Oct 11 04:51:59 compute-0 naughty_dewdney[266409]:             },
Oct 11 04:51:59 compute-0 naughty_dewdney[266409]:             "type": "block",
Oct 11 04:51:59 compute-0 naughty_dewdney[266409]:             "vg_name": "ceph_vg1"
Oct 11 04:51:59 compute-0 naughty_dewdney[266409]:         }
Oct 11 04:51:59 compute-0 naughty_dewdney[266409]:     ],
Oct 11 04:51:59 compute-0 naughty_dewdney[266409]:     "2": [
Oct 11 04:51:59 compute-0 naughty_dewdney[266409]:         {
Oct 11 04:51:59 compute-0 naughty_dewdney[266409]:             "devices": [
Oct 11 04:51:59 compute-0 naughty_dewdney[266409]:                 "/dev/loop5"
Oct 11 04:51:59 compute-0 naughty_dewdney[266409]:             ],
Oct 11 04:51:59 compute-0 naughty_dewdney[266409]:             "lv_name": "ceph_lv2",
Oct 11 04:51:59 compute-0 naughty_dewdney[266409]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:51:59 compute-0 naughty_dewdney[266409]:             "lv_size": "21470642176",
Oct 11 04:51:59 compute-0 naughty_dewdney[266409]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=30486846-2cc7-4787-8e36-0fb42ef328c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:51:59 compute-0 naughty_dewdney[266409]:             "lv_uuid": "HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a",
Oct 11 04:51:59 compute-0 naughty_dewdney[266409]:             "name": "ceph_lv2",
Oct 11 04:51:59 compute-0 naughty_dewdney[266409]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:51:59 compute-0 naughty_dewdney[266409]:             "tags": {
Oct 11 04:51:59 compute-0 naughty_dewdney[266409]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:51:59 compute-0 naughty_dewdney[266409]:                 "ceph.block_uuid": "HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a",
Oct 11 04:51:59 compute-0 naughty_dewdney[266409]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:51:59 compute-0 naughty_dewdney[266409]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:51:59 compute-0 naughty_dewdney[266409]:                 "ceph.cluster_name": "ceph",
Oct 11 04:51:59 compute-0 naughty_dewdney[266409]:                 "ceph.crush_device_class": "",
Oct 11 04:51:59 compute-0 naughty_dewdney[266409]:                 "ceph.encrypted": "0",
Oct 11 04:51:59 compute-0 naughty_dewdney[266409]:                 "ceph.osd_fsid": "30486846-2cc7-4787-8e36-0fb42ef328c5",
Oct 11 04:51:59 compute-0 naughty_dewdney[266409]:                 "ceph.osd_id": "2",
Oct 11 04:51:59 compute-0 naughty_dewdney[266409]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:51:59 compute-0 naughty_dewdney[266409]:                 "ceph.type": "block",
Oct 11 04:51:59 compute-0 naughty_dewdney[266409]:                 "ceph.vdo": "0"
Oct 11 04:51:59 compute-0 naughty_dewdney[266409]:             },
Oct 11 04:51:59 compute-0 naughty_dewdney[266409]:             "type": "block",
Oct 11 04:51:59 compute-0 naughty_dewdney[266409]:             "vg_name": "ceph_vg2"
Oct 11 04:51:59 compute-0 naughty_dewdney[266409]:         }
Oct 11 04:51:59 compute-0 naughty_dewdney[266409]:     ]
Oct 11 04:51:59 compute-0 naughty_dewdney[266409]: }
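[editor's note] The naughty_dewdney container above is the containerized "ceph-volume ... lvm list --format json" run that cephadm requested via sudo a few lines earlier; its stdout is the JSON block just printed. A minimal sketch of consuming that output, assuming it has been captured to a hypothetical file lvm_list.json (the file name and stand-alone script are illustrative, not part of cephadm):

    # Hedged sketch: summarize the `ceph-volume lvm list --format json` output above.
    import json

    with open("lvm_list.json") as f:          # hypothetical capture of the JSON block above
        lvm = json.load(f)

    # Top-level keys are OSD ids ("0", "1", "2"); each maps to a list of LV entries.
    for osd_id, entries in sorted(lvm.items(), key=lambda kv: int(kv[0])):
        for lv in entries:
            tags = lv["tags"]
            print(f"osd.{osd_id}: lv={lv['lv_path']} "
                  f"dev={','.join(lv['devices'])} "
                  f"osd_fsid={tags['ceph.osd_fsid']} "
                  f"cluster={tags['ceph.cluster_fsid']}")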
Oct 11 04:51:59 compute-0 systemd[1]: libpod-45361b211fdb2949f7f44c920ae09d372bcc84196c956e1244284396203a7c0c.scope: Deactivated successfully.
Oct 11 04:51:59 compute-0 podman[266392]: 2025-10-11 04:51:59.394320511 +0000 UTC m=+0.987236492 container died 45361b211fdb2949f7f44c920ae09d372bcc84196c956e1244284396203a7c0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_dewdney, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:51:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-55fa84c1ca9243836b553d87b92277a14b5e8a08807c93b6e3dcb6da327e0784-merged.mount: Deactivated successfully.
Oct 11 04:51:59 compute-0 podman[266392]: 2025-10-11 04:51:59.458669387 +0000 UTC m=+1.051585398 container remove 45361b211fdb2949f7f44c920ae09d372bcc84196c956e1244284396203a7c0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_dewdney, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:51:59 compute-0 systemd[1]: libpod-conmon-45361b211fdb2949f7f44c920ae09d372bcc84196c956e1244284396203a7c0c.scope: Deactivated successfully.
Oct 11 04:51:59 compute-0 sudo[266288]: pam_unix(sudo:session): session closed for user root
Oct 11 04:51:59 compute-0 sudo[266429]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:51:59 compute-0 sudo[266429]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:51:59 compute-0 sudo[266429]: pam_unix(sudo:session): session closed for user root
Oct 11 04:51:59 compute-0 sudo[266454]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:51:59 compute-0 sudo[266454]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:51:59 compute-0 sudo[266454]: pam_unix(sudo:session): session closed for user root
Oct 11 04:51:59 compute-0 sudo[266479]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:51:59 compute-0 sudo[266479]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:51:59 compute-0 sudo[266479]: pam_unix(sudo:session): session closed for user root
Oct 11 04:51:59 compute-0 sudo[266504]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -- raw list --format json
Oct 11 04:51:59 compute-0 sudo[266504]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:52:00 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Oct 11 04:52:00 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e118 e118: 3 total, 3 up, 3 in
Oct 11 04:52:00 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Oct 11 04:52:00 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:52:00 compute-0 podman[266569]: 2025-10-11 04:52:00.127068208 +0000 UTC m=+0.046985551 container create 506bf48dbc10480e634357a0d40c7d0e1dc113f79bca12424d9cfd71c5cf3101 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_swanson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 11 04:52:00 compute-0 systemd[1]: Started libpod-conmon-506bf48dbc10480e634357a0d40c7d0e1dc113f79bca12424d9cfd71c5cf3101.scope.
Oct 11 04:52:00 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:52:00 compute-0 podman[266569]: 2025-10-11 04:52:00.106232605 +0000 UTC m=+0.026149948 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:52:00 compute-0 podman[266569]: 2025-10-11 04:52:00.218010753 +0000 UTC m=+0.137928106 container init 506bf48dbc10480e634357a0d40c7d0e1dc113f79bca12424d9cfd71c5cf3101 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_swanson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 11 04:52:00 compute-0 podman[266569]: 2025-10-11 04:52:00.224485246 +0000 UTC m=+0.144402569 container start 506bf48dbc10480e634357a0d40c7d0e1dc113f79bca12424d9cfd71c5cf3101 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 11 04:52:00 compute-0 podman[266569]: 2025-10-11 04:52:00.227715947 +0000 UTC m=+0.147633290 container attach 506bf48dbc10480e634357a0d40c7d0e1dc113f79bca12424d9cfd71c5cf3101 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_swanson, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:52:00 compute-0 naughty_swanson[266585]: 167 167
Oct 11 04:52:00 compute-0 systemd[1]: libpod-506bf48dbc10480e634357a0d40c7d0e1dc113f79bca12424d9cfd71c5cf3101.scope: Deactivated successfully.
Oct 11 04:52:00 compute-0 podman[266569]: 2025-10-11 04:52:00.230390124 +0000 UTC m=+0.150307447 container died 506bf48dbc10480e634357a0d40c7d0e1dc113f79bca12424d9cfd71c5cf3101 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_swanson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 11 04:52:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-2cc98bf447a33ca3d0cfe82bd5f9acfec1850c98ac6407e908840c4226f729d7-merged.mount: Deactivated successfully.
Oct 11 04:52:00 compute-0 podman[266569]: 2025-10-11 04:52:00.278105113 +0000 UTC m=+0.198022416 container remove 506bf48dbc10480e634357a0d40c7d0e1dc113f79bca12424d9cfd71c5cf3101 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_swanson, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:52:00 compute-0 systemd[1]: libpod-conmon-506bf48dbc10480e634357a0d40c7d0e1dc113f79bca12424d9cfd71c5cf3101.scope: Deactivated successfully.
Oct 11 04:52:00 compute-0 podman[266610]: 2025-10-11 04:52:00.480931858 +0000 UTC m=+0.047510684 container create 36f4b4d752116e3de1d74c4eba2eca15a0c872aca8ded1634723794a4076c53c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_lalande, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 11 04:52:00 compute-0 systemd[1]: Started libpod-conmon-36f4b4d752116e3de1d74c4eba2eca15a0c872aca8ded1634723794a4076c53c.scope.
Oct 11 04:52:00 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:52:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adccd51ea0ef8608376d1cedeb04e73707d36c55b588dbe5da260960ca465e4b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:52:00 compute-0 podman[266610]: 2025-10-11 04:52:00.461215223 +0000 UTC m=+0.027794089 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:52:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adccd51ea0ef8608376d1cedeb04e73707d36c55b588dbe5da260960ca465e4b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:52:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adccd51ea0ef8608376d1cedeb04e73707d36c55b588dbe5da260960ca465e4b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:52:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adccd51ea0ef8608376d1cedeb04e73707d36c55b588dbe5da260960ca465e4b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:52:00 compute-0 podman[266610]: 2025-10-11 04:52:00.571068463 +0000 UTC m=+0.137647279 container init 36f4b4d752116e3de1d74c4eba2eca15a0c872aca8ded1634723794a4076c53c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_lalande, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:52:00 compute-0 podman[266610]: 2025-10-11 04:52:00.581456404 +0000 UTC m=+0.148035260 container start 36f4b4d752116e3de1d74c4eba2eca15a0c872aca8ded1634723794a4076c53c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:52:00 compute-0 podman[266610]: 2025-10-11 04:52:00.58610745 +0000 UTC m=+0.152686276 container attach 36f4b4d752116e3de1d74c4eba2eca15a0c872aca8ded1634723794a4076c53c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_lalande, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 11 04:52:00 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v891: 305 pgs: 305 active+clean; 13 MiB data, 161 MiB used, 60 GiB / 60 GiB avail; 433 B/s rd, 2.6 MiB/s wr, 1 op/s
Oct 11 04:52:01 compute-0 ceph-mon[74243]: osdmap e118: 3 total, 3 up, 3 in
Oct 11 04:52:01 compute-0 ceph-mon[74243]: pgmap v891: 305 pgs: 305 active+clean; 13 MiB data, 161 MiB used, 60 GiB / 60 GiB avail; 433 B/s rd, 2.6 MiB/s wr, 1 op/s
Oct 11 04:52:01 compute-0 beautiful_lalande[266627]: {
Oct 11 04:52:01 compute-0 beautiful_lalande[266627]:     "235849fc-4683-43e5-9b6a-a0d6f8d1cee8": {
Oct 11 04:52:01 compute-0 beautiful_lalande[266627]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:52:01 compute-0 beautiful_lalande[266627]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:52:01 compute-0 beautiful_lalande[266627]:         "osd_id": 1,
Oct 11 04:52:01 compute-0 beautiful_lalande[266627]:         "osd_uuid": "235849fc-4683-43e5-9b6a-a0d6f8d1cee8",
Oct 11 04:52:01 compute-0 beautiful_lalande[266627]:         "type": "bluestore"
Oct 11 04:52:01 compute-0 beautiful_lalande[266627]:     },
Oct 11 04:52:01 compute-0 beautiful_lalande[266627]:     "29ed28f5-c2da-4c6f-bb64-dc7391248f4a": {
Oct 11 04:52:01 compute-0 beautiful_lalande[266627]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:52:01 compute-0 beautiful_lalande[266627]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:52:01 compute-0 beautiful_lalande[266627]:         "osd_id": 0,
Oct 11 04:52:01 compute-0 beautiful_lalande[266627]:         "osd_uuid": "29ed28f5-c2da-4c6f-bb64-dc7391248f4a",
Oct 11 04:52:01 compute-0 beautiful_lalande[266627]:         "type": "bluestore"
Oct 11 04:52:01 compute-0 beautiful_lalande[266627]:     },
Oct 11 04:52:01 compute-0 beautiful_lalande[266627]:     "30486846-2cc7-4787-8e36-0fb42ef328c5": {
Oct 11 04:52:01 compute-0 beautiful_lalande[266627]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:52:01 compute-0 beautiful_lalande[266627]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:52:01 compute-0 beautiful_lalande[266627]:         "osd_id": 2,
Oct 11 04:52:01 compute-0 beautiful_lalande[266627]:         "osd_uuid": "30486846-2cc7-4787-8e36-0fb42ef328c5",
Oct 11 04:52:01 compute-0 beautiful_lalande[266627]:         "type": "bluestore"
Oct 11 04:52:01 compute-0 beautiful_lalande[266627]:     }
Oct 11 04:52:01 compute-0 beautiful_lalande[266627]: }
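[editor's note] The beautiful_lalande container is the follow-up "ceph-volume ... raw list --format json" run; its JSON keys each raw bluestore device by osd_uuid. A minimal sketch that cross-checks this output against the earlier lvm list by matching osd_uuid to the LVM tag ceph.osd_fsid; both capture files are hypothetical, used only to illustrate the relationship between the two listings.

    # Hedged sketch: reconcile `raw list` (above) with `lvm list` (earlier block).
    import json

    raw = json.load(open("raw_list.json"))    # JSON printed by beautiful_lalande above
    lvm = json.load(open("lvm_list.json"))    # JSON printed by naughty_dewdney earlier

    lvm_by_fsid = {lv["tags"]["ceph.osd_fsid"]: lv
                   for entries in lvm.values() for lv in entries}

    for osd_uuid, info in raw.items():
        lv = lvm_by_fsid.get(osd_uuid)
        status = ("matches " + lv["lv_path"]) if lv else "no LVM entry"
        print(f"osd.{info['osd_id']} ({info['type']}) on {info['device']}: {status}")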
Oct 11 04:52:01 compute-0 systemd[1]: libpod-36f4b4d752116e3de1d74c4eba2eca15a0c872aca8ded1634723794a4076c53c.scope: Deactivated successfully.
Oct 11 04:52:01 compute-0 podman[266610]: 2025-10-11 04:52:01.733209438 +0000 UTC m=+1.299788264 container died 36f4b4d752116e3de1d74c4eba2eca15a0c872aca8ded1634723794a4076c53c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_lalande, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:52:01 compute-0 systemd[1]: libpod-36f4b4d752116e3de1d74c4eba2eca15a0c872aca8ded1634723794a4076c53c.scope: Consumed 1.157s CPU time.
Oct 11 04:52:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-adccd51ea0ef8608376d1cedeb04e73707d36c55b588dbe5da260960ca465e4b-merged.mount: Deactivated successfully.
Oct 11 04:52:01 compute-0 podman[266610]: 2025-10-11 04:52:01.818955491 +0000 UTC m=+1.385534347 container remove 36f4b4d752116e3de1d74c4eba2eca15a0c872aca8ded1634723794a4076c53c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_lalande, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:52:01 compute-0 systemd[1]: libpod-conmon-36f4b4d752116e3de1d74c4eba2eca15a0c872aca8ded1634723794a4076c53c.scope: Deactivated successfully.
Oct 11 04:52:01 compute-0 sudo[266504]: pam_unix(sudo:session): session closed for user root
Oct 11 04:52:01 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:52:01 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:52:01 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:52:01 compute-0 podman[266662]: 2025-10-11 04:52:01.895620577 +0000 UTC m=+0.117928733 container health_status 7d738271425699243ade5f883e5c9759d51b96fd0ce562173990a66d2fbaffdf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 11 04:52:01 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:52:01 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev 778b5bb5-3aa8-4ab3-b027-57286e6949f5 does not exist
Oct 11 04:52:01 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev c70adcdd-0917-41f4-9d04-e4724812b7bc does not exist
Oct 11 04:52:01 compute-0 sudo[266693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:52:01 compute-0 sudo[266693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:52:01 compute-0 sudo[266693]: pam_unix(sudo:session): session closed for user root
Oct 11 04:52:02 compute-0 sudo[266718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 04:52:02 compute-0 sudo[266718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:52:02 compute-0 sudo[266718]: pam_unix(sudo:session): session closed for user root
Oct 11 04:52:02 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v892: 305 pgs: 305 active+clean; 33 MiB data, 181 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 5.5 MiB/s wr, 42 op/s
Oct 11 04:52:02 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:52:02 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:52:02 compute-0 ceph-mon[74243]: pgmap v892: 305 pgs: 305 active+clean; 33 MiB data, 181 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 5.5 MiB/s wr, 42 op/s
Oct 11 04:52:02 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:52:02 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3255719964' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:52:02 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:52:02 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3255719964' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:52:03 compute-0 ceph-mon[74243]: from='client.? 192.168.122.10:0/3255719964' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:52:03 compute-0 ceph-mon[74243]: from='client.? 192.168.122.10:0/3255719964' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
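The audit lines above show a client (client.openstack at 192.168.122.10) sending structured mon commands as JSON maps with a "prefix" key. A hedged sketch of issuing the same df command through the python-rados binding, assuming the client.openstack keyring is readable and the conffile path matches the log; field names follow the ceph df --format=json output:

    import json
    import rados

    # Issue the same {"prefix": "df", "format": "json"} mon command that the
    # audit channel records for client.openstack.
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.openstack")
    cluster.connect()
    ret, outbuf, outs = cluster.mon_command(
        json.dumps({"prefix": "df", "format": "json"}), b""
    )
    df = json.loads(outbuf)
    print(ret, sorted(df))   # top-level keys of the df report
    cluster.shutdown()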
Oct 11 04:52:04 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v893: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 5.3 MiB/s wr, 49 op/s
Oct 11 04:52:04 compute-0 ceph-mon[74243]: pgmap v893: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 5.3 MiB/s wr, 49 op/s
Oct 11 04:52:05 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:52:05 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Oct 11 04:52:05 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Oct 11 04:52:05 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Oct 11 04:52:06 compute-0 ceph-mon[74243]: osdmap e119: 3 total, 3 up, 3 in
Oct 11 04:52:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:52:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:52:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:52:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:52:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:52:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:52:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:52:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:52:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:52:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:52:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 11 04:52:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:52:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 11 04:52:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:52:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:52:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:52:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:52:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:52:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:52:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:52:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:52:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:52:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
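The pg_autoscaler figures above are self-consistent: with the 3 OSDs reported in the osdmap and the default mon_target_pg_per_osd of 100 (an assumption, not printed in the log), the raw PG target appears to be capacity_ratio x bias x (OSDs x target PGs per OSD), which is then quantized to a power of two and left at the current pg_num when the change is too small to act on. A worked check against the 'images' and 'cephfs.cephfs.meta' lines:

    # Reproduce the pg_autoscaler arithmetic from the lines above
    # (assumes the default mon_target_pg_per_osd=100; osdmap reports 3 OSDs).
    osds, target_pg_per_osd = 3, 100

    def raw_pg_target(capacity_ratio, bias):
        return capacity_ratio * bias * osds * target_pg_per_osd

    print(raw_pg_target(0.000665858301588852, 1.0))   # ~0.1998, 'images' line
    print(raw_pg_target(5.087256625643029e-07, 4.0))  # ~0.00061, cephfs meta line
    # Both land far below the pools' current pg_num, so the quantized targets
    # stay at 32 and 16 respectively instead of shrinking.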
Oct 11 04:52:06 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v895: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 3.6 MiB/s wr, 46 op/s
Oct 11 04:52:07 compute-0 ceph-mon[74243]: pgmap v895: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 3.6 MiB/s wr, 46 op/s
Oct 11 04:52:08 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v896: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 3.3 MiB/s wr, 43 op/s
Oct 11 04:52:08 compute-0 ceph-mon[74243]: pgmap v896: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 3.3 MiB/s wr, 43 op/s
Oct 11 04:52:10 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:52:10 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v897: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 2.8 MiB/s wr, 37 op/s
Oct 11 04:52:10 compute-0 ceph-mon[74243]: pgmap v897: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 2.8 MiB/s wr, 37 op/s
Oct 11 04:52:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:52:11.016 161813 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:52:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:52:11.017 161813 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:52:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:52:11.017 161813 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
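The three lockutils lines above are the standard oslo.concurrency pattern: a named in-process lock is taken around ProcessMonitor._check_child_processes and released almost immediately (held 0.000s). A minimal sketch of the same pattern, assuming only that oslo.concurrency is installed; the function body is a placeholder, not the agent's real check:

    from oslo_concurrency import lockutils

    # Named, in-process lock guarding a short critical section; oslo.concurrency
    # emits the acquire/hold/release DEBUG lines seen in the agent log.
    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        pass  # placeholder for the per-process liveness checks

    check_child_processes()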
Oct 11 04:52:12 compute-0 nova_compute[259400]: 2025-10-11 04:52:12.191 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:52:12 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v898: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 9.8 KiB/s rd, 820 KiB/s wr, 12 op/s
Oct 11 04:52:12 compute-0 ceph-mon[74243]: pgmap v898: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 9.8 KiB/s rd, 820 KiB/s wr, 12 op/s
Oct 11 04:52:13 compute-0 nova_compute[259400]: 2025-10-11 04:52:13.196 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:52:13 compute-0 nova_compute[259400]: 2025-10-11 04:52:13.197 2 DEBUG nova.compute.manager [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 04:52:13 compute-0 nova_compute[259400]: 2025-10-11 04:52:13.197 2 DEBUG nova.compute.manager [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 11 04:52:13 compute-0 nova_compute[259400]: 2025-10-11 04:52:13.219 2 DEBUG nova.compute.manager [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 11 04:52:14 compute-0 nova_compute[259400]: 2025-10-11 04:52:14.195 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:52:14 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v899: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:52:14 compute-0 ceph-mon[74243]: pgmap v899: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:52:15 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:52:15 compute-0 nova_compute[259400]: 2025-10-11 04:52:15.196 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:52:15 compute-0 nova_compute[259400]: 2025-10-11 04:52:15.197 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:52:15 compute-0 nova_compute[259400]: 2025-10-11 04:52:15.197 2 DEBUG nova.compute.manager [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 04:52:15 compute-0 nova_compute[259400]: 2025-10-11 04:52:15.198 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:52:15 compute-0 nova_compute[259400]: 2025-10-11 04:52:15.240 2 DEBUG oslo_concurrency.lockutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:52:15 compute-0 nova_compute[259400]: 2025-10-11 04:52:15.240 2 DEBUG oslo_concurrency.lockutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:52:15 compute-0 nova_compute[259400]: 2025-10-11 04:52:15.241 2 DEBUG oslo_concurrency.lockutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:52:15 compute-0 nova_compute[259400]: 2025-10-11 04:52:15.241 2 DEBUG nova.compute.resource_tracker [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 04:52:15 compute-0 nova_compute[259400]: 2025-10-11 04:52:15.242 2 DEBUG oslo_concurrency.processutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:52:15 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:52:15 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2846818257' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:52:15 compute-0 nova_compute[259400]: 2025-10-11 04:52:15.683 2 DEBUG oslo_concurrency.processutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:52:15 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/2846818257' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
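The resource audit shells out to the ceph CLI through oslo.concurrency's processutils wrapper, which is also what measures the 0.441s runtime in the "CMD ... returned: 0" line. A hedged sketch of the same call, assuming a readable client.openstack keyring on the host:

    from oslo_concurrency import processutils

    # Sketch of the subprocess call the resource tracker logs above; execute()
    # returns (stdout, stderr) and raises ProcessExecutionError on non-zero exit.
    out, err = processutils.execute(
        "ceph", "df", "--format=json", "--id", "openstack",
        "--conf", "/etc/ceph/ceph.conf",
    )
    print(out[:120])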
Oct 11 04:52:15 compute-0 nova_compute[259400]: 2025-10-11 04:52:15.913 2 WARNING nova.virt.libvirt.driver [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 04:52:15 compute-0 nova_compute[259400]: 2025-10-11 04:52:15.914 2 DEBUG nova.compute.resource_tracker [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5172MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 04:52:15 compute-0 nova_compute[259400]: 2025-10-11 04:52:15.915 2 DEBUG oslo_concurrency.lockutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:52:15 compute-0 nova_compute[259400]: 2025-10-11 04:52:15.915 2 DEBUG oslo_concurrency.lockutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:52:16 compute-0 nova_compute[259400]: 2025-10-11 04:52:16.091 2 DEBUG nova.compute.resource_tracker [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 04:52:16 compute-0 nova_compute[259400]: 2025-10-11 04:52:16.091 2 DEBUG nova.compute.resource_tracker [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 04:52:16 compute-0 nova_compute[259400]: 2025-10-11 04:52:16.126 2 DEBUG nova.scheduler.client.report [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Refreshing inventories for resource provider 1f05a244-23b6-4149-9b5a-a525e5860d18 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 11 04:52:16 compute-0 nova_compute[259400]: 2025-10-11 04:52:16.167 2 DEBUG nova.scheduler.client.report [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Updating ProviderTree inventory for provider 1f05a244-23b6-4149-9b5a-a525e5860d18 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 11 04:52:16 compute-0 nova_compute[259400]: 2025-10-11 04:52:16.168 2 DEBUG nova.compute.provider_tree [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Updating inventory in ProviderTree for provider 1f05a244-23b6-4149-9b5a-a525e5860d18 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
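The inventory dict nova reports to placement encodes usable capacity per resource class; placement treats it roughly as (total - reserved) x allocation_ratio, which is why 8 physical vCPUs with a 4.0 ratio can back up to 32 VCPU allocations on this node. A worked check of the figures in the inventory above:

    # Worked check of the inventory reported above; usable capacity per
    # resource class is roughly (total - reserved) * allocation_ratio.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 0,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        usable = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, usable)   # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 53.1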
Oct 11 04:52:16 compute-0 nova_compute[259400]: 2025-10-11 04:52:16.185 2 DEBUG nova.scheduler.client.report [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Refreshing aggregate associations for resource provider 1f05a244-23b6-4149-9b5a-a525e5860d18, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 11 04:52:16 compute-0 nova_compute[259400]: 2025-10-11 04:52:16.217 2 DEBUG nova.scheduler.client.report [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Refreshing trait associations for resource provider 1f05a244-23b6-4149-9b5a-a525e5860d18, traits: COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSSE3,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,COMPUTE_DEVICE_TAGGING,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_ACCELERATORS,HW_CPU_X86_SSE,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_ABM,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_FMA3,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SSE2,HW_CPU_X86_F16C,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_RESCUE_BFV,COMPUTE_NODE,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_BMI2,HW_CPU_X86_AMD_SVM,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE42,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SHA,HW_CPU_X86_AVX,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_BMI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE4A _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 11 04:52:16 compute-0 nova_compute[259400]: 2025-10-11 04:52:16.254 2 DEBUG oslo_concurrency.processutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:52:16 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v900: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:52:16 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:52:16 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2473143871' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:52:16 compute-0 nova_compute[259400]: 2025-10-11 04:52:16.744 2 DEBUG oslo_concurrency.processutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:52:16 compute-0 nova_compute[259400]: 2025-10-11 04:52:16.751 2 DEBUG nova.compute.provider_tree [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Inventory has not changed in ProviderTree for provider: 1f05a244-23b6-4149-9b5a-a525e5860d18 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:52:16 compute-0 nova_compute[259400]: 2025-10-11 04:52:16.773 2 DEBUG nova.scheduler.client.report [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Inventory has not changed for provider 1f05a244-23b6-4149-9b5a-a525e5860d18 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:52:16 compute-0 nova_compute[259400]: 2025-10-11 04:52:16.776 2 DEBUG nova.compute.resource_tracker [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 04:52:16 compute-0 nova_compute[259400]: 2025-10-11 04:52:16.777 2 DEBUG oslo_concurrency.lockutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.862s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:52:16 compute-0 ceph-mon[74243]: pgmap v900: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:52:16 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/2473143871' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:52:17 compute-0 nova_compute[259400]: 2025-10-11 04:52:17.777 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:52:17 compute-0 nova_compute[259400]: 2025-10-11 04:52:17.778 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:52:18 compute-0 nova_compute[259400]: 2025-10-11 04:52:18.196 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:52:18 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v901: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:52:18 compute-0 ceph-mon[74243]: pgmap v901: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:52:19 compute-0 nova_compute[259400]: 2025-10-11 04:52:19.191 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:52:20 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:52:20 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:52:20.472 161813 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:88:88', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '96:43:b2:79:d5:95'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:52:20 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:52:20.474 161813 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 11 04:52:20 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v902: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:52:20 compute-0 ceph-mon[74243]: pgmap v902: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:52:21 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:52:21.475 161813 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=2ff6420e-86e1-487c-bef9-adac80b75ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:52:22 compute-0 podman[266789]: 2025-10-11 04:52:22.437868684 +0000 UTC m=+0.073722533 container health_status 981a12d55b51d26e636fa4bc8b08d383168dc6afe664f12473554b2b0c589967 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 11 04:52:22 compute-0 podman[266788]: 2025-10-11 04:52:22.457970739 +0000 UTC m=+0.097310366 container health_status 4f5a84ad32cb899a70b226fbd9c8f419e9440a5ac99b188805a8bf8e9253a2d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 11 04:52:22 compute-0 podman[266787]: 2025-10-11 04:52:22.505018501 +0000 UTC m=+0.144649315 container health_status 086abfbe8dbe9e4742c5d0e8540a7653c1c0a5c00ecda3b6e948040a718938cb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller)
Oct 11 04:52:22 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v903: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:52:22 compute-0 ceph-mon[74243]: pgmap v903: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:52:24 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v904: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:52:24 compute-0 ceph-mon[74243]: pgmap v904: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:52:25 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:52:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:52:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:52:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:52:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:52:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:52:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:52:26 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v905: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:52:26 compute-0 ceph-mon[74243]: pgmap v905: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:52:28 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v906: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:52:28 compute-0 ceph-mon[74243]: pgmap v906: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:52:30 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:52:30 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v907: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:52:30 compute-0 ceph-mon[74243]: pgmap v907: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:52:31 compute-0 sshd-session[266850]: Connection reset by 198.235.24.167 port 62414 [preauth]
Oct 11 04:52:32 compute-0 podman[266852]: 2025-10-11 04:52:32.43624879 +0000 UTC m=+0.077795325 container health_status 7d738271425699243ade5f883e5c9759d51b96fd0ce562173990a66d2fbaffdf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009)
Oct 11 04:52:32 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v908: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:52:32 compute-0 ceph-mon[74243]: pgmap v908: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:52:34 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Oct 11 04:52:34 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Oct 11 04:52:34 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Oct 11 04:52:34 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v910: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:52:35 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:52:35 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Oct 11 04:52:35 compute-0 ceph-mon[74243]: osdmap e120: 3 total, 3 up, 3 in
Oct 11 04:52:35 compute-0 ceph-mon[74243]: pgmap v910: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:52:35 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Oct 11 04:52:35 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Oct 11 04:52:36 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Oct 11 04:52:36 compute-0 ceph-mon[74243]: osdmap e121: 3 total, 3 up, 3 in
Oct 11 04:52:36 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
Oct 11 04:52:36 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Oct 11 04:52:36 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v913: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 1.3 KiB/s wr, 17 op/s
Oct 11 04:52:37 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e122 do_prune osdmap full prune enabled
Oct 11 04:52:37 compute-0 ceph-mon[74243]: osdmap e122: 3 total, 3 up, 3 in
Oct 11 04:52:37 compute-0 ceph-mon[74243]: pgmap v913: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 1.3 KiB/s wr, 17 op/s
Oct 11 04:52:37 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e123 e123: 3 total, 3 up, 3 in
Oct 11 04:52:37 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e123: 3 total, 3 up, 3 in
Oct 11 04:52:38 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e123 do_prune osdmap full prune enabled
Oct 11 04:52:38 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e124 e124: 3 total, 3 up, 3 in
Oct 11 04:52:38 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e124: 3 total, 3 up, 3 in
Oct 11 04:52:38 compute-0 ceph-mon[74243]: osdmap e123: 3 total, 3 up, 3 in
Oct 11 04:52:38 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v916: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 79 KiB/s rd, 13 KiB/s wr, 114 op/s
Oct 11 04:52:39 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e124 do_prune osdmap full prune enabled
Oct 11 04:52:39 compute-0 ceph-mon[74243]: osdmap e124: 3 total, 3 up, 3 in
Oct 11 04:52:39 compute-0 ceph-mon[74243]: pgmap v916: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 79 KiB/s rd, 13 KiB/s wr, 114 op/s
Oct 11 04:52:39 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e125 e125: 3 total, 3 up, 3 in
Oct 11 04:52:39 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e125: 3 total, 3 up, 3 in
Oct 11 04:52:40 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:52:40 compute-0 ceph-mon[74243]: osdmap e125: 3 total, 3 up, 3 in
Oct 11 04:52:40 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v918: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 74 KiB/s rd, 13 KiB/s wr, 107 op/s
Oct 11 04:52:41 compute-0 ceph-mon[74243]: pgmap v918: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 74 KiB/s rd, 13 KiB/s wr, 107 op/s
Oct 11 04:52:42 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v919: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 90 KiB/s rd, 15 KiB/s wr, 130 op/s
Oct 11 04:52:42 compute-0 ceph-mon[74243]: pgmap v919: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 90 KiB/s rd, 15 KiB/s wr, 130 op/s
Oct 11 04:52:43 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e125 do_prune osdmap full prune enabled
Oct 11 04:52:43 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e126 e126: 3 total, 3 up, 3 in
Oct 11 04:52:43 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e126: 3 total, 3 up, 3 in
Oct 11 04:52:44 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v921: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 58 KiB/s rd, 8.6 KiB/s wr, 82 op/s
Oct 11 04:52:44 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e126 do_prune osdmap full prune enabled
Oct 11 04:52:44 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e127 e127: 3 total, 3 up, 3 in
Oct 11 04:52:44 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e127: 3 total, 3 up, 3 in
Oct 11 04:52:44 compute-0 ceph-mon[74243]: osdmap e126: 3 total, 3 up, 3 in
Oct 11 04:52:44 compute-0 ceph-mon[74243]: pgmap v921: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 58 KiB/s rd, 8.6 KiB/s wr, 82 op/s
Oct 11 04:52:45 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:52:45 compute-0 ceph-mon[74243]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #45. Immutable memtables: 0.
Oct 11 04:52:45 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:52:45.097396) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 04:52:45 compute-0 ceph-mon[74243]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 45
Oct 11 04:52:45 compute-0 ceph-mon[74243]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760158365097492, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 1034, "num_deletes": 259, "total_data_size": 1386853, "memory_usage": 1419296, "flush_reason": "Manual Compaction"}
Oct 11 04:52:45 compute-0 ceph-mon[74243]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #46: started
Oct 11 04:52:45 compute-0 ceph-mon[74243]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760158365109453, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 46, "file_size": 1372137, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18583, "largest_seqno": 19616, "table_properties": {"data_size": 1367005, "index_size": 2656, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 10592, "raw_average_key_size": 18, "raw_value_size": 1356602, "raw_average_value_size": 2422, "num_data_blocks": 119, "num_entries": 560, "num_filter_entries": 560, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760158286, "oldest_key_time": 1760158286, "file_creation_time": 1760158365, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7a520806-b9ee-4391-a2e1-17ca2b78e946", "db_session_id": "LQX577T3ABHDCRGGY4EM", "orig_file_number": 46, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:52:45 compute-0 ceph-mon[74243]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 12103 microseconds, and 7754 cpu microseconds.
Oct 11 04:52:45 compute-0 ceph-mon[74243]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 04:52:45 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:52:45.109509) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #46: 1372137 bytes OK
Oct 11 04:52:45 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:52:45.109534) [db/memtable_list.cc:519] [default] Level-0 commit table #46 started
Oct 11 04:52:45 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:52:45.111439) [db/memtable_list.cc:722] [default] Level-0 commit table #46: memtable #1 done
Oct 11 04:52:45 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:52:45.111463) EVENT_LOG_v1 {"time_micros": 1760158365111455, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 04:52:45 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:52:45.111483) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 04:52:45 compute-0 ceph-mon[74243]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 1381912, prev total WAL file size 1381912, number of live WAL files 2.
Oct 11 04:52:45 compute-0 ceph-mon[74243]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000042.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:52:45 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:52:45.112553) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323531' seq:72057594037927935, type:22 .. '6C6F676D00353035' seq:0, type:0; will stop at (end)
Oct 11 04:52:45 compute-0 ceph-mon[74243]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 04:52:45 compute-0 ceph-mon[74243]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [46(1339KB)], [44(5960KB)]
Oct 11 04:52:45 compute-0 ceph-mon[74243]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760158365112642, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [46], "files_L6": [44], "score": -1, "input_data_size": 7475807, "oldest_snapshot_seqno": -1}
Oct 11 04:52:45 compute-0 ceph-mon[74243]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #47: 4167 keys, 7353908 bytes, temperature: kUnknown
Oct 11 04:52:45 compute-0 ceph-mon[74243]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760158365161416, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 47, "file_size": 7353908, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7324918, "index_size": 17488, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10437, "raw_key_size": 103099, "raw_average_key_size": 24, "raw_value_size": 7248251, "raw_average_value_size": 1739, "num_data_blocks": 735, "num_entries": 4167, "num_filter_entries": 4167, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760156706, "oldest_key_time": 0, "file_creation_time": 1760158365, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7a520806-b9ee-4391-a2e1-17ca2b78e946", "db_session_id": "LQX577T3ABHDCRGGY4EM", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:52:45 compute-0 ceph-mon[74243]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 04:52:45 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:52:45.161770) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 7353908 bytes
Oct 11 04:52:45 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:52:45.163178) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 152.8 rd, 150.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 5.8 +0.0 blob) out(7.0 +0.0 blob), read-write-amplify(10.8) write-amplify(5.4) OK, records in: 4700, records dropped: 533 output_compression: NoCompression
Oct 11 04:52:45 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:52:45.163211) EVENT_LOG_v1 {"time_micros": 1760158365163194, "job": 22, "event": "compaction_finished", "compaction_time_micros": 48929, "compaction_time_cpu_micros": 31638, "output_level": 6, "num_output_files": 1, "total_output_size": 7353908, "num_input_records": 4700, "num_output_records": 4167, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 04:52:45 compute-0 ceph-mon[74243]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000046.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:52:45 compute-0 ceph-mon[74243]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760158365164006, "job": 22, "event": "table_file_deletion", "file_number": 46}
Oct 11 04:52:45 compute-0 ceph-mon[74243]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:52:45 compute-0 ceph-mon[74243]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760158365166484, "job": 22, "event": "table_file_deletion", "file_number": 44}
Oct 11 04:52:45 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:52:45.112374) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:52:45 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:52:45.166613) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:52:45 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:52:45.166619) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:52:45 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:52:45.166623) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:52:45 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:52:45.166626) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:52:45 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:52:45.166629) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
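The JOB 22 compaction summary above is internally consistent: one ~1.3 MB L0 flush file was merged with one ~5.8 MB L6 file into a ~7.0 MB L6 file, and the reported amplification factors follow directly from those byte counts. A small check of that arithmetic using the sizes from the EVENT_LOG entries:

    # Check the amplification figures in the JOB 22 compaction summary above.
    l0_in = 1372137                  # table #46, the flushed L0 input
    l6_in = 7475807 - l0_in          # input_data_size minus the L0 file
    out = 7353908                    # table #47, the compacted L6 output
    write_amplify = out / l0_in                        # ~5.4 in the summary
    read_write_amplify = (l0_in + l6_in + out) / l0_in # ~10.8 in the summary
    print(round(write_amplify, 1), round(read_write_amplify, 1))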
Oct 11 04:52:45 compute-0 ceph-mon[74243]: osdmap e127: 3 total, 3 up, 3 in
Oct 11 04:52:46 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e127 do_prune osdmap full prune enabled
Oct 11 04:52:46 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e128 e128: 3 total, 3 up, 3 in
Oct 11 04:52:46 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e128: 3 total, 3 up, 3 in
Oct 11 04:52:46 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v924: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 69 KiB/s rd, 13 KiB/s wr, 102 op/s
Oct 11 04:52:47 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e128 do_prune osdmap full prune enabled
Oct 11 04:52:47 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e129 e129: 3 total, 3 up, 3 in
Oct 11 04:52:47 compute-0 ceph-mon[74243]: osdmap e128: 3 total, 3 up, 3 in
Oct 11 04:52:47 compute-0 ceph-mon[74243]: pgmap v924: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 69 KiB/s rd, 13 KiB/s wr, 102 op/s
Oct 11 04:52:47 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e129: 3 total, 3 up, 3 in
Oct 11 04:52:48 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e129 do_prune osdmap full prune enabled
Oct 11 04:52:48 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e130 e130: 3 total, 3 up, 3 in
Oct 11 04:52:48 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e130: 3 total, 3 up, 3 in
Oct 11 04:52:48 compute-0 ceph-mon[74243]: osdmap e129: 3 total, 3 up, 3 in
Oct 11 04:52:48 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v927: 305 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 299 active+clean; 65 MiB data, 206 MiB used, 60 GiB / 60 GiB avail; 230 KiB/s rd, 6.0 MiB/s wr, 326 op/s
Oct 11 04:52:49 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Oct 11 04:52:49 compute-0 ceph-mon[74243]: osdmap e130: 3 total, 3 up, 3 in
Oct 11 04:52:49 compute-0 ceph-mon[74243]: pgmap v927: 305 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 299 active+clean; 65 MiB data, 206 MiB used, 60 GiB / 60 GiB avail; 230 KiB/s rd, 6.0 MiB/s wr, 326 op/s
Oct 11 04:52:49 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Oct 11 04:52:49 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Oct 11 04:52:50 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:52:50 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Oct 11 04:52:50 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Oct 11 04:52:50 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Oct 11 04:52:50 compute-0 ceph-mon[74243]: osdmap e131: 3 total, 3 up, 3 in
Oct 11 04:52:50 compute-0 ceph-mon[74243]: osdmap e132: 3 total, 3 up, 3 in
Oct 11 04:52:50 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v930: 305 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 299 active+clean; 65 MiB data, 206 MiB used, 60 GiB / 60 GiB avail; 218 KiB/s rd, 6.0 MiB/s wr, 300 op/s
Oct 11 04:52:51 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Oct 11 04:52:51 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Oct 11 04:52:51 compute-0 ceph-mon[74243]: pgmap v930: 305 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 299 active+clean; 65 MiB data, 206 MiB used, 60 GiB / 60 GiB avail; 218 KiB/s rd, 6.0 MiB/s wr, 300 op/s
Oct 11 04:52:51 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Oct 11 04:52:52 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Oct 11 04:52:52 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Oct 11 04:52:52 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Oct 11 04:52:52 compute-0 ceph-mon[74243]: osdmap e133: 3 total, 3 up, 3 in
Oct 11 04:52:52 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v933: 305 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 299 active+clean; 65 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 194 KiB/s rd, 18 MiB/s wr, 267 op/s
Oct 11 04:52:53 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Oct 11 04:52:53 compute-0 ceph-mon[74243]: osdmap e134: 3 total, 3 up, 3 in
Oct 11 04:52:53 compute-0 ceph-mon[74243]: pgmap v933: 305 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 299 active+clean; 65 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 194 KiB/s rd, 18 MiB/s wr, 267 op/s
Oct 11 04:52:53 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Oct 11 04:52:53 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Oct 11 04:52:53 compute-0 podman[266875]: 2025-10-11 04:52:53.45395312 +0000 UTC m=+0.087050634 container health_status 4f5a84ad32cb899a70b226fbd9c8f419e9440a5ac99b188805a8bf8e9253a2d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, managed_by=edpm_ansible, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.vendor=CentOS)
Oct 11 04:52:53 compute-0 podman[266876]: 2025-10-11 04:52:53.455638862 +0000 UTC m=+0.094379317 container health_status 981a12d55b51d26e636fa4bc8b08d383168dc6afe664f12473554b2b0c589967 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 11 04:52:53 compute-0 podman[266874]: 2025-10-11 04:52:53.486181938 +0000 UTC m=+0.134501053 container health_status 086abfbe8dbe9e4742c5d0e8540a7653c1c0a5c00ecda3b6e948040a718938cb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251009, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct 11 04:52:54 compute-0 ceph-mon[74243]: osdmap e135: 3 total, 3 up, 3 in
Oct 11 04:52:54 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v935: 305 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 294 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 279 KiB/s rd, 19 MiB/s wr, 381 op/s
Oct 11 04:52:55 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:52:55 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Oct 11 04:52:55 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Oct 11 04:52:55 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Oct 11 04:52:55 compute-0 ceph-mon[74243]: pgmap v935: 305 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 294 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 279 KiB/s rd, 19 MiB/s wr, 381 op/s
Oct 11 04:52:55 compute-0 ceph-mon[74243]: osdmap e136: 3 total, 3 up, 3 in
Oct 11 04:52:56 compute-0 ceph-mgr[74542]: [balancer INFO root] Optimize plan auto_2025-10-11_04:52:56
Oct 11 04:52:56 compute-0 ceph-mgr[74542]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:52:56 compute-0 ceph-mgr[74542]: [balancer INFO root] do_upmap
Oct 11 04:52:56 compute-0 ceph-mgr[74542]: [balancer INFO root] pools ['default.rgw.control', 'vms', '.mgr', 'default.rgw.meta', '.rgw.root', 'images', 'volumes', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'backups', 'default.rgw.log']
Oct 11 04:52:56 compute-0 ceph-mgr[74542]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:52:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:52:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:52:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:52:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:52:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:52:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:52:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:52:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:52:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:52:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:52:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:52:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:52:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:52:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:52:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:52:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:52:56 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Oct 11 04:52:56 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Oct 11 04:52:56 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Oct 11 04:52:56 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v938: 305 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 294 active+clean; 41 MiB data, 226 MiB used, 60 GiB / 60 GiB avail; 126 KiB/s rd, 3.5 MiB/s wr, 180 op/s
Oct 11 04:52:57 compute-0 ceph-mon[74243]: osdmap e137: 3 total, 3 up, 3 in
Oct 11 04:52:57 compute-0 ceph-mon[74243]: pgmap v938: 305 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 294 active+clean; 41 MiB data, 226 MiB used, 60 GiB / 60 GiB avail; 126 KiB/s rd, 3.5 MiB/s wr, 180 op/s
Oct 11 04:52:58 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v939: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 138 KiB/s rd, 2.7 MiB/s wr, 193 op/s
Oct 11 04:52:58 compute-0 ceph-mon[74243]: pgmap v939: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 138 KiB/s rd, 2.7 MiB/s wr, 193 op/s
Oct 11 04:52:59 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Oct 11 04:52:59 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Oct 11 04:52:59 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Oct 11 04:53:00 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:53:00 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Oct 11 04:53:00 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Oct 11 04:53:00 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Oct 11 04:53:00 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v942: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 6.8 KiB/s wr, 83 op/s
Oct 11 04:53:00 compute-0 ceph-mon[74243]: osdmap e138: 3 total, 3 up, 3 in
Oct 11 04:53:00 compute-0 ceph-mon[74243]: osdmap e139: 3 total, 3 up, 3 in
Oct 11 04:53:00 compute-0 ceph-mon[74243]: pgmap v942: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 6.8 KiB/s wr, 83 op/s
Oct 11 04:53:01 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e139 do_prune osdmap full prune enabled
Oct 11 04:53:01 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e140 e140: 3 total, 3 up, 3 in
Oct 11 04:53:01 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e140: 3 total, 3 up, 3 in
Oct 11 04:53:02 compute-0 ceph-mon[74243]: osdmap e140: 3 total, 3 up, 3 in
Oct 11 04:53:02 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e140 do_prune osdmap full prune enabled
Oct 11 04:53:02 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e141 e141: 3 total, 3 up, 3 in
Oct 11 04:53:02 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e141: 3 total, 3 up, 3 in
Oct 11 04:53:02 compute-0 sudo[266937]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:53:02 compute-0 sudo[266937]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:53:02 compute-0 sudo[266937]: pam_unix(sudo:session): session closed for user root
Oct 11 04:53:02 compute-0 sudo[266962]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:53:02 compute-0 sudo[266962]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:53:02 compute-0 sudo[266962]: pam_unix(sudo:session): session closed for user root
Oct 11 04:53:02 compute-0 sudo[266987]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:53:02 compute-0 sudo[266987]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:53:02 compute-0 sudo[266987]: pam_unix(sudo:session): session closed for user root
Oct 11 04:53:02 compute-0 sudo[267012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 04:53:02 compute-0 sudo[267012]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:53:02 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v945: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 4.2 KiB/s wr, 26 op/s
Oct 11 04:53:02 compute-0 sudo[267012]: pam_unix(sudo:session): session closed for user root
Oct 11 04:53:02 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:53:02 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:53:02 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:53:02 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:53:02 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:53:02 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:53:02 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev dce70cc7-f7be-4b83-81d8-8817e982ed6b does not exist
Oct 11 04:53:02 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev ae741075-69fd-485a-bc74-dc7a03be1897 does not exist
Oct 11 04:53:02 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev cdee7158-e241-4cca-b747-a644be6c1a4a does not exist
Oct 11 04:53:02 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:53:02 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:53:02 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:53:02 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:53:02 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:53:02 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:53:02 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:53:02 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/610833813' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:53:02 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:53:02 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/610833813' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:53:03 compute-0 sudo[267070]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:53:03 compute-0 sudo[267070]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:53:03 compute-0 sudo[267070]: pam_unix(sudo:session): session closed for user root
Oct 11 04:53:03 compute-0 podman[267094]: 2025-10-11 04:53:03.121607679 +0000 UTC m=+0.089979577 container health_status 7d738271425699243ade5f883e5c9759d51b96fd0ce562173990a66d2fbaffdf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 11 04:53:03 compute-0 sudo[267101]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:53:03 compute-0 sudo[267101]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:53:03 compute-0 sudo[267101]: pam_unix(sudo:session): session closed for user root
Oct 11 04:53:03 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e141 do_prune osdmap full prune enabled
Oct 11 04:53:03 compute-0 ceph-mon[74243]: osdmap e141: 3 total, 3 up, 3 in
Oct 11 04:53:03 compute-0 ceph-mon[74243]: pgmap v945: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 4.2 KiB/s wr, 26 op/s
Oct 11 04:53:03 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:53:03 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:53:03 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:53:03 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:53:03 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:53:03 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:53:03 compute-0 ceph-mon[74243]: from='client.? 192.168.122.10:0/610833813' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:53:03 compute-0 ceph-mon[74243]: from='client.? 192.168.122.10:0/610833813' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:53:03 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e142 e142: 3 total, 3 up, 3 in
Oct 11 04:53:03 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e142: 3 total, 3 up, 3 in
Oct 11 04:53:03 compute-0 sudo[267137]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:53:03 compute-0 sudo[267137]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:53:03 compute-0 sudo[267137]: pam_unix(sudo:session): session closed for user root
Oct 11 04:53:03 compute-0 sudo[267163]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 04:53:03 compute-0 sudo[267163]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:53:03 compute-0 podman[267228]: 2025-10-11 04:53:03.685565508 +0000 UTC m=+0.056439756 container create 9331a2582f45e61897c9bd28b4c89f7e84335d4676503a810c3d862ecce634b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_dhawan, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:53:03 compute-0 systemd[1]: Started libpod-conmon-9331a2582f45e61897c9bd28b4c89f7e84335d4676503a810c3d862ecce634b8.scope.
Oct 11 04:53:03 compute-0 podman[267228]: 2025-10-11 04:53:03.655175687 +0000 UTC m=+0.026049965 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:53:03 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:53:03 compute-0 podman[267228]: 2025-10-11 04:53:03.777493383 +0000 UTC m=+0.148367631 container init 9331a2582f45e61897c9bd28b4c89f7e84335d4676503a810c3d862ecce634b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_dhawan, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:53:03 compute-0 podman[267228]: 2025-10-11 04:53:03.789415502 +0000 UTC m=+0.160289730 container start 9331a2582f45e61897c9bd28b4c89f7e84335d4676503a810c3d862ecce634b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_dhawan, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 11 04:53:03 compute-0 podman[267228]: 2025-10-11 04:53:03.792899689 +0000 UTC m=+0.163773937 container attach 9331a2582f45e61897c9bd28b4c89f7e84335d4676503a810c3d862ecce634b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_dhawan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 11 04:53:03 compute-0 systemd[1]: libpod-9331a2582f45e61897c9bd28b4c89f7e84335d4676503a810c3d862ecce634b8.scope: Deactivated successfully.
Oct 11 04:53:03 compute-0 eager_dhawan[267245]: 167 167
Oct 11 04:53:03 compute-0 conmon[267245]: conmon 9331a2582f45e61897c9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9331a2582f45e61897c9bd28b4c89f7e84335d4676503a810c3d862ecce634b8.scope/container/memory.events
Oct 11 04:53:03 compute-0 podman[267250]: 2025-10-11 04:53:03.854955635 +0000 UTC m=+0.037662085 container died 9331a2582f45e61897c9bd28b4c89f7e84335d4676503a810c3d862ecce634b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_dhawan, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 11 04:53:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-67d318f9d9d5e3477d9765e6b1751a8606653f891a12c8e9a2be258aac3ed426-merged.mount: Deactivated successfully.
Oct 11 04:53:03 compute-0 podman[267250]: 2025-10-11 04:53:03.899876851 +0000 UTC m=+0.082583261 container remove 9331a2582f45e61897c9bd28b4c89f7e84335d4676503a810c3d862ecce634b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_dhawan, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 11 04:53:03 compute-0 systemd[1]: libpod-conmon-9331a2582f45e61897c9bd28b4c89f7e84335d4676503a810c3d862ecce634b8.scope: Deactivated successfully.
Oct 11 04:53:04 compute-0 podman[267272]: 2025-10-11 04:53:04.06894723 +0000 UTC m=+0.036812324 container create 9c1d3c1c61a68c5ede23e63f5fbc091a40fb766d79955b742ec392a29610c73b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_saha, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Oct 11 04:53:04 compute-0 systemd[1]: Started libpod-conmon-9c1d3c1c61a68c5ede23e63f5fbc091a40fb766d79955b742ec392a29610c73b.scope.
Oct 11 04:53:04 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:53:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7149f9be5d41db13da5dc80493a6922fbad880f06a9d0669f34a52ab5a1cb478/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:53:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7149f9be5d41db13da5dc80493a6922fbad880f06a9d0669f34a52ab5a1cb478/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:53:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7149f9be5d41db13da5dc80493a6922fbad880f06a9d0669f34a52ab5a1cb478/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:53:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7149f9be5d41db13da5dc80493a6922fbad880f06a9d0669f34a52ab5a1cb478/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:53:04 compute-0 ceph-mon[74243]: osdmap e142: 3 total, 3 up, 3 in
Oct 11 04:53:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7149f9be5d41db13da5dc80493a6922fbad880f06a9d0669f34a52ab5a1cb478/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:53:04 compute-0 podman[267272]: 2025-10-11 04:53:04.051903273 +0000 UTC m=+0.019768387 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:53:04 compute-0 podman[267272]: 2025-10-11 04:53:04.166500616 +0000 UTC m=+0.134365720 container init 9c1d3c1c61a68c5ede23e63f5fbc091a40fb766d79955b742ec392a29610c73b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_saha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:53:04 compute-0 podman[267272]: 2025-10-11 04:53:04.1738236 +0000 UTC m=+0.141688694 container start 9c1d3c1c61a68c5ede23e63f5fbc091a40fb766d79955b742ec392a29610c73b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_saha, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True)
Oct 11 04:53:04 compute-0 podman[267272]: 2025-10-11 04:53:04.177740618 +0000 UTC m=+0.145605732 container attach 9c1d3c1c61a68c5ede23e63f5fbc091a40fb766d79955b742ec392a29610c73b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_saha, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:53:04 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v947: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 88 KiB/s rd, 8.5 KiB/s wr, 119 op/s
Oct 11 04:53:05 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:53:05 compute-0 ceph-mon[74243]: pgmap v947: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 88 KiB/s rd, 8.5 KiB/s wr, 119 op/s
Oct 11 04:53:05 compute-0 wonderful_saha[267289]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:53:05 compute-0 wonderful_saha[267289]: --> relative data size: 1.0
Oct 11 04:53:05 compute-0 wonderful_saha[267289]: --> All data devices are unavailable
Oct 11 04:53:05 compute-0 systemd[1]: libpod-9c1d3c1c61a68c5ede23e63f5fbc091a40fb766d79955b742ec392a29610c73b.scope: Deactivated successfully.
Oct 11 04:53:05 compute-0 podman[267272]: 2025-10-11 04:53:05.305198545 +0000 UTC m=+1.273063659 container died 9c1d3c1c61a68c5ede23e63f5fbc091a40fb766d79955b742ec392a29610c73b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_saha, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:53:05 compute-0 systemd[1]: libpod-9c1d3c1c61a68c5ede23e63f5fbc091a40fb766d79955b742ec392a29610c73b.scope: Consumed 1.073s CPU time.
Oct 11 04:53:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-7149f9be5d41db13da5dc80493a6922fbad880f06a9d0669f34a52ab5a1cb478-merged.mount: Deactivated successfully.
Oct 11 04:53:05 compute-0 podman[267272]: 2025-10-11 04:53:05.38157119 +0000 UTC m=+1.349436324 container remove 9c1d3c1c61a68c5ede23e63f5fbc091a40fb766d79955b742ec392a29610c73b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_saha, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:53:05 compute-0 systemd[1]: libpod-conmon-9c1d3c1c61a68c5ede23e63f5fbc091a40fb766d79955b742ec392a29610c73b.scope: Deactivated successfully.
Oct 11 04:53:05 compute-0 sudo[267163]: pam_unix(sudo:session): session closed for user root
Oct 11 04:53:05 compute-0 sudo[267332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:53:05 compute-0 sudo[267332]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:53:05 compute-0 sudo[267332]: pam_unix(sudo:session): session closed for user root
Oct 11 04:53:05 compute-0 sudo[267357]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:53:05 compute-0 sudo[267357]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:53:05 compute-0 sudo[267357]: pam_unix(sudo:session): session closed for user root
Oct 11 04:53:05 compute-0 sudo[267382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:53:05 compute-0 sudo[267382]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:53:05 compute-0 sudo[267382]: pam_unix(sudo:session): session closed for user root
Oct 11 04:53:05 compute-0 sudo[267407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -- lvm list --format json
Oct 11 04:53:05 compute-0 sudo[267407]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:53:06 compute-0 podman[267472]: 2025-10-11 04:53:06.225777254 +0000 UTC m=+0.037706226 container create 1f2b0672c0ccc6980e95617455f404c44364735dcf0dd3394b2cdf9e01a5f34b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_nash, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 11 04:53:06 compute-0 systemd[1]: Started libpod-conmon-1f2b0672c0ccc6980e95617455f404c44364735dcf0dd3394b2cdf9e01a5f34b.scope.
Oct 11 04:53:06 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:53:06 compute-0 podman[267472]: 2025-10-11 04:53:06.208558792 +0000 UTC m=+0.020487734 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:53:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:53:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:53:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:53:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:53:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:53:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:53:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:53:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:53:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:53:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:53:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 11 04:53:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:53:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 11 04:53:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:53:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:53:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:53:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:53:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:53:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:53:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:53:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:53:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:53:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 04:53:06 compute-0 podman[267472]: 2025-10-11 04:53:06.321291129 +0000 UTC m=+0.133220111 container init 1f2b0672c0ccc6980e95617455f404c44364735dcf0dd3394b2cdf9e01a5f34b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_nash, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 11 04:53:06 compute-0 podman[267472]: 2025-10-11 04:53:06.332455158 +0000 UTC m=+0.144384130 container start 1f2b0672c0ccc6980e95617455f404c44364735dcf0dd3394b2cdf9e01a5f34b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_nash, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 11 04:53:06 compute-0 podman[267472]: 2025-10-11 04:53:06.336255814 +0000 UTC m=+0.148184786 container attach 1f2b0672c0ccc6980e95617455f404c44364735dcf0dd3394b2cdf9e01a5f34b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_nash, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:53:06 compute-0 nostalgic_nash[267488]: 167 167
Oct 11 04:53:06 compute-0 systemd[1]: libpod-1f2b0672c0ccc6980e95617455f404c44364735dcf0dd3394b2cdf9e01a5f34b.scope: Deactivated successfully.
Oct 11 04:53:06 compute-0 podman[267472]: 2025-10-11 04:53:06.340044229 +0000 UTC m=+0.151973201 container died 1f2b0672c0ccc6980e95617455f404c44364735dcf0dd3394b2cdf9e01a5f34b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_nash, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:53:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-7141d7ace256df73ed279961453f98a3fc3603af0dc2fff043c85becfda47ec9-merged.mount: Deactivated successfully.
Oct 11 04:53:06 compute-0 podman[267472]: 2025-10-11 04:53:06.388094163 +0000 UTC m=+0.200023135 container remove 1f2b0672c0ccc6980e95617455f404c44364735dcf0dd3394b2cdf9e01a5f34b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 11 04:53:06 compute-0 systemd[1]: libpod-conmon-1f2b0672c0ccc6980e95617455f404c44364735dcf0dd3394b2cdf9e01a5f34b.scope: Deactivated successfully.
Oct 11 04:53:06 compute-0 podman[267513]: 2025-10-11 04:53:06.626951352 +0000 UTC m=+0.067191196 container create e6880dcedc6abefdd26c7f635de1dbbb507f3543d355fc1af4b90898a7b7a5eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_sammet, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:53:06 compute-0 systemd[1]: Started libpod-conmon-e6880dcedc6abefdd26c7f635de1dbbb507f3543d355fc1af4b90898a7b7a5eb.scope.
Oct 11 04:53:06 compute-0 podman[267513]: 2025-10-11 04:53:06.597628147 +0000 UTC m=+0.037868041 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:53:06 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:53:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9ec49cf668d720f623d911be8219338ca75101c040c623d9b1a1433d4df1e3a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:53:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9ec49cf668d720f623d911be8219338ca75101c040c623d9b1a1433d4df1e3a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:53:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9ec49cf668d720f623d911be8219338ca75101c040c623d9b1a1433d4df1e3a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:53:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9ec49cf668d720f623d911be8219338ca75101c040c623d9b1a1433d4df1e3a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:53:06 compute-0 podman[267513]: 2025-10-11 04:53:06.717234045 +0000 UTC m=+0.157473899 container init e6880dcedc6abefdd26c7f635de1dbbb507f3543d355fc1af4b90898a7b7a5eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_sammet, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 11 04:53:06 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v948: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 69 KiB/s rd, 6.8 KiB/s wr, 94 op/s
Oct 11 04:53:06 compute-0 podman[267513]: 2025-10-11 04:53:06.730999601 +0000 UTC m=+0.171239445 container start e6880dcedc6abefdd26c7f635de1dbbb507f3543d355fc1af4b90898a7b7a5eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_sammet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 11 04:53:06 compute-0 podman[267513]: 2025-10-11 04:53:06.735611146 +0000 UTC m=+0.175850990 container attach e6880dcedc6abefdd26c7f635de1dbbb507f3543d355fc1af4b90898a7b7a5eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_sammet, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:53:06 compute-0 ceph-mon[74243]: pgmap v948: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 69 KiB/s rd, 6.8 KiB/s wr, 94 op/s
Oct 11 04:53:07 compute-0 priceless_sammet[267530]: {
Oct 11 04:53:07 compute-0 priceless_sammet[267530]:     "0": [
Oct 11 04:53:07 compute-0 priceless_sammet[267530]:         {
Oct 11 04:53:07 compute-0 priceless_sammet[267530]:             "devices": [
Oct 11 04:53:07 compute-0 priceless_sammet[267530]:                 "/dev/loop3"
Oct 11 04:53:07 compute-0 priceless_sammet[267530]:             ],
Oct 11 04:53:07 compute-0 priceless_sammet[267530]:             "lv_name": "ceph_lv0",
Oct 11 04:53:07 compute-0 priceless_sammet[267530]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:53:07 compute-0 priceless_sammet[267530]:             "lv_size": "21470642176",
Oct 11 04:53:07 compute-0 priceless_sammet[267530]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29ed28f5-c2da-4c6f-bb64-dc7391248f4a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:53:07 compute-0 priceless_sammet[267530]:             "lv_uuid": "SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf",
Oct 11 04:53:07 compute-0 priceless_sammet[267530]:             "name": "ceph_lv0",
Oct 11 04:53:07 compute-0 priceless_sammet[267530]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:53:07 compute-0 priceless_sammet[267530]:             "tags": {
Oct 11 04:53:07 compute-0 priceless_sammet[267530]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:53:07 compute-0 priceless_sammet[267530]:                 "ceph.block_uuid": "SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf",
Oct 11 04:53:07 compute-0 priceless_sammet[267530]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:53:07 compute-0 priceless_sammet[267530]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:53:07 compute-0 priceless_sammet[267530]:                 "ceph.cluster_name": "ceph",
Oct 11 04:53:07 compute-0 priceless_sammet[267530]:                 "ceph.crush_device_class": "",
Oct 11 04:53:07 compute-0 priceless_sammet[267530]:                 "ceph.encrypted": "0",
Oct 11 04:53:07 compute-0 priceless_sammet[267530]:                 "ceph.osd_fsid": "29ed28f5-c2da-4c6f-bb64-dc7391248f4a",
Oct 11 04:53:07 compute-0 priceless_sammet[267530]:                 "ceph.osd_id": "0",
Oct 11 04:53:07 compute-0 priceless_sammet[267530]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:53:07 compute-0 priceless_sammet[267530]:                 "ceph.type": "block",
Oct 11 04:53:07 compute-0 priceless_sammet[267530]:                 "ceph.vdo": "0"
Oct 11 04:53:07 compute-0 priceless_sammet[267530]:             },
Oct 11 04:53:07 compute-0 priceless_sammet[267530]:             "type": "block",
Oct 11 04:53:07 compute-0 priceless_sammet[267530]:             "vg_name": "ceph_vg0"
Oct 11 04:53:07 compute-0 priceless_sammet[267530]:         }
Oct 11 04:53:07 compute-0 priceless_sammet[267530]:     ],
Oct 11 04:53:07 compute-0 priceless_sammet[267530]:     "1": [
Oct 11 04:53:07 compute-0 priceless_sammet[267530]:         {
Oct 11 04:53:07 compute-0 priceless_sammet[267530]:             "devices": [
Oct 11 04:53:07 compute-0 priceless_sammet[267530]:                 "/dev/loop4"
Oct 11 04:53:07 compute-0 priceless_sammet[267530]:             ],
Oct 11 04:53:07 compute-0 priceless_sammet[267530]:             "lv_name": "ceph_lv1",
Oct 11 04:53:07 compute-0 priceless_sammet[267530]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:53:07 compute-0 priceless_sammet[267530]:             "lv_size": "21470642176",
Oct 11 04:53:07 compute-0 priceless_sammet[267530]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=235849fc-4683-43e5-9b6a-a0d6f8d1cee8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:53:07 compute-0 priceless_sammet[267530]:             "lv_uuid": "N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol",
Oct 11 04:53:07 compute-0 priceless_sammet[267530]:             "name": "ceph_lv1",
Oct 11 04:53:07 compute-0 priceless_sammet[267530]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:53:07 compute-0 priceless_sammet[267530]:             "tags": {
Oct 11 04:53:07 compute-0 priceless_sammet[267530]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:53:07 compute-0 priceless_sammet[267530]:                 "ceph.block_uuid": "N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol",
Oct 11 04:53:07 compute-0 priceless_sammet[267530]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:53:07 compute-0 priceless_sammet[267530]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:53:07 compute-0 priceless_sammet[267530]:                 "ceph.cluster_name": "ceph",
Oct 11 04:53:07 compute-0 priceless_sammet[267530]:                 "ceph.crush_device_class": "",
Oct 11 04:53:07 compute-0 priceless_sammet[267530]:                 "ceph.encrypted": "0",
Oct 11 04:53:07 compute-0 priceless_sammet[267530]:                 "ceph.osd_fsid": "235849fc-4683-43e5-9b6a-a0d6f8d1cee8",
Oct 11 04:53:07 compute-0 priceless_sammet[267530]:                 "ceph.osd_id": "1",
Oct 11 04:53:07 compute-0 priceless_sammet[267530]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:53:07 compute-0 priceless_sammet[267530]:                 "ceph.type": "block",
Oct 11 04:53:07 compute-0 priceless_sammet[267530]:                 "ceph.vdo": "0"
Oct 11 04:53:07 compute-0 priceless_sammet[267530]:             },
Oct 11 04:53:07 compute-0 priceless_sammet[267530]:             "type": "block",
Oct 11 04:53:07 compute-0 priceless_sammet[267530]:             "vg_name": "ceph_vg1"
Oct 11 04:53:07 compute-0 priceless_sammet[267530]:         }
Oct 11 04:53:07 compute-0 priceless_sammet[267530]:     ],
Oct 11 04:53:07 compute-0 priceless_sammet[267530]:     "2": [
Oct 11 04:53:07 compute-0 priceless_sammet[267530]:         {
Oct 11 04:53:07 compute-0 priceless_sammet[267530]:             "devices": [
Oct 11 04:53:07 compute-0 priceless_sammet[267530]:                 "/dev/loop5"
Oct 11 04:53:07 compute-0 priceless_sammet[267530]:             ],
Oct 11 04:53:07 compute-0 priceless_sammet[267530]:             "lv_name": "ceph_lv2",
Oct 11 04:53:07 compute-0 priceless_sammet[267530]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:53:07 compute-0 priceless_sammet[267530]:             "lv_size": "21470642176",
Oct 11 04:53:07 compute-0 priceless_sammet[267530]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=30486846-2cc7-4787-8e36-0fb42ef328c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:53:07 compute-0 priceless_sammet[267530]:             "lv_uuid": "HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a",
Oct 11 04:53:07 compute-0 priceless_sammet[267530]:             "name": "ceph_lv2",
Oct 11 04:53:07 compute-0 priceless_sammet[267530]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:53:07 compute-0 priceless_sammet[267530]:             "tags": {
Oct 11 04:53:07 compute-0 priceless_sammet[267530]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:53:07 compute-0 priceless_sammet[267530]:                 "ceph.block_uuid": "HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a",
Oct 11 04:53:07 compute-0 priceless_sammet[267530]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:53:07 compute-0 priceless_sammet[267530]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:53:07 compute-0 priceless_sammet[267530]:                 "ceph.cluster_name": "ceph",
Oct 11 04:53:07 compute-0 priceless_sammet[267530]:                 "ceph.crush_device_class": "",
Oct 11 04:53:07 compute-0 priceless_sammet[267530]:                 "ceph.encrypted": "0",
Oct 11 04:53:07 compute-0 priceless_sammet[267530]:                 "ceph.osd_fsid": "30486846-2cc7-4787-8e36-0fb42ef328c5",
Oct 11 04:53:07 compute-0 priceless_sammet[267530]:                 "ceph.osd_id": "2",
Oct 11 04:53:07 compute-0 priceless_sammet[267530]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:53:07 compute-0 priceless_sammet[267530]:                 "ceph.type": "block",
Oct 11 04:53:07 compute-0 priceless_sammet[267530]:                 "ceph.vdo": "0"
Oct 11 04:53:07 compute-0 priceless_sammet[267530]:             },
Oct 11 04:53:07 compute-0 priceless_sammet[267530]:             "type": "block",
Oct 11 04:53:07 compute-0 priceless_sammet[267530]:             "vg_name": "ceph_vg2"
Oct 11 04:53:07 compute-0 priceless_sammet[267530]:         }
Oct 11 04:53:07 compute-0 priceless_sammet[267530]:     ]
Oct 11 04:53:07 compute-0 priceless_sammet[267530]: }
Oct 11 04:53:07 compute-0 systemd[1]: libpod-e6880dcedc6abefdd26c7f635de1dbbb507f3543d355fc1af4b90898a7b7a5eb.scope: Deactivated successfully.
Oct 11 04:53:07 compute-0 podman[267513]: 2025-10-11 04:53:07.530145366 +0000 UTC m=+0.970385210 container died e6880dcedc6abefdd26c7f635de1dbbb507f3543d355fc1af4b90898a7b7a5eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_sammet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:53:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-b9ec49cf668d720f623d911be8219338ca75101c040c623d9b1a1433d4df1e3a-merged.mount: Deactivated successfully.
Oct 11 04:53:07 compute-0 podman[267513]: 2025-10-11 04:53:07.816243879 +0000 UTC m=+1.256483723 container remove e6880dcedc6abefdd26c7f635de1dbbb507f3543d355fc1af4b90898a7b7a5eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_sammet, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True)
Oct 11 04:53:07 compute-0 systemd[1]: libpod-conmon-e6880dcedc6abefdd26c7f635de1dbbb507f3543d355fc1af4b90898a7b7a5eb.scope: Deactivated successfully.
Oct 11 04:53:07 compute-0 sudo[267407]: pam_unix(sudo:session): session closed for user root
Oct 11 04:53:07 compute-0 sudo[267553]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:53:07 compute-0 sudo[267553]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:53:07 compute-0 sudo[267553]: pam_unix(sudo:session): session closed for user root
Oct 11 04:53:08 compute-0 nova_compute[259400]: 2025-10-11 04:53:08.013 2 DEBUG oslo_concurrency.lockutils [None req-5d55a53c-3610-4abf-a77d-c996a69cf074 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Acquiring lock "cdb2b2db-db93-4afd-8188-bc2b971be88e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:53:08 compute-0 nova_compute[259400]: 2025-10-11 04:53:08.016 2 DEBUG oslo_concurrency.lockutils [None req-5d55a53c-3610-4abf-a77d-c996a69cf074 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Lock "cdb2b2db-db93-4afd-8188-bc2b971be88e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:53:08 compute-0 sudo[267578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:53:08 compute-0 sudo[267578]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:53:08 compute-0 nova_compute[259400]: 2025-10-11 04:53:08.066 2 DEBUG nova.compute.manager [None req-5d55a53c-3610-4abf-a77d-c996a69cf074 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] [instance: cdb2b2db-db93-4afd-8188-bc2b971be88e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 04:53:08 compute-0 sudo[267578]: pam_unix(sudo:session): session closed for user root
Oct 11 04:53:08 compute-0 sudo[267603]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:53:08 compute-0 sudo[267603]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:53:08 compute-0 sudo[267603]: pam_unix(sudo:session): session closed for user root
Oct 11 04:53:08 compute-0 nova_compute[259400]: 2025-10-11 04:53:08.177 2 DEBUG oslo_concurrency.lockutils [None req-5d55a53c-3610-4abf-a77d-c996a69cf074 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:53:08 compute-0 nova_compute[259400]: 2025-10-11 04:53:08.178 2 DEBUG oslo_concurrency.lockutils [None req-5d55a53c-3610-4abf-a77d-c996a69cf074 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:53:08 compute-0 nova_compute[259400]: 2025-10-11 04:53:08.189 2 DEBUG nova.virt.hardware [None req-5d55a53c-3610-4abf-a77d-c996a69cf074 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 04:53:08 compute-0 nova_compute[259400]: 2025-10-11 04:53:08.189 2 INFO nova.compute.claims [None req-5d55a53c-3610-4abf-a77d-c996a69cf074 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] [instance: cdb2b2db-db93-4afd-8188-bc2b971be88e] Claim successful on node compute-0.ctlplane.example.com
Oct 11 04:53:08 compute-0 sudo[267628]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -- raw list --format json
Oct 11 04:53:08 compute-0 sudo[267628]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:53:08 compute-0 nova_compute[259400]: 2025-10-11 04:53:08.368 2 DEBUG oslo_concurrency.processutils [None req-5d55a53c-3610-4abf-a77d-c996a69cf074 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:53:08 compute-0 podman[267709]: 2025-10-11 04:53:08.601296992 +0000 UTC m=+0.053386040 container create f50d6ede6d797b18ef04dc345bf9411655d37f52c502fa1de2b525f8498c34bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_wescoff, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:53:08 compute-0 systemd[1]: Started libpod-conmon-f50d6ede6d797b18ef04dc345bf9411655d37f52c502fa1de2b525f8498c34bc.scope.
Oct 11 04:53:08 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:53:08 compute-0 podman[267709]: 2025-10-11 04:53:08.57651274 +0000 UTC m=+0.028601858 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:53:08 compute-0 podman[267709]: 2025-10-11 04:53:08.679971374 +0000 UTC m=+0.132060452 container init f50d6ede6d797b18ef04dc345bf9411655d37f52c502fa1de2b525f8498c34bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_wescoff, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:53:08 compute-0 podman[267709]: 2025-10-11 04:53:08.686347254 +0000 UTC m=+0.138436302 container start f50d6ede6d797b18ef04dc345bf9411655d37f52c502fa1de2b525f8498c34bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 11 04:53:08 compute-0 podman[267709]: 2025-10-11 04:53:08.689901273 +0000 UTC m=+0.141990371 container attach f50d6ede6d797b18ef04dc345bf9411655d37f52c502fa1de2b525f8498c34bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_wescoff, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:53:08 compute-0 heuristic_wescoff[267725]: 167 167
Oct 11 04:53:08 compute-0 systemd[1]: libpod-f50d6ede6d797b18ef04dc345bf9411655d37f52c502fa1de2b525f8498c34bc.scope: Deactivated successfully.
Oct 11 04:53:08 compute-0 podman[267709]: 2025-10-11 04:53:08.691953294 +0000 UTC m=+0.144042372 container died f50d6ede6d797b18ef04dc345bf9411655d37f52c502fa1de2b525f8498c34bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_wescoff, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 11 04:53:08 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v949: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 62 KiB/s rd, 6.0 KiB/s wr, 85 op/s
Oct 11 04:53:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-c165ec0c4396987c840ef33c995e10f9477a074e0739c48448f239627a91ea63-merged.mount: Deactivated successfully.
Oct 11 04:53:08 compute-0 podman[267709]: 2025-10-11 04:53:08.743030835 +0000 UTC m=+0.195119873 container remove f50d6ede6d797b18ef04dc345bf9411655d37f52c502fa1de2b525f8498c34bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_wescoff, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 11 04:53:08 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:53:08 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3704566311' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:53:08 compute-0 systemd[1]: libpod-conmon-f50d6ede6d797b18ef04dc345bf9411655d37f52c502fa1de2b525f8498c34bc.scope: Deactivated successfully.
Oct 11 04:53:08 compute-0 nova_compute[259400]: 2025-10-11 04:53:08.779 2 DEBUG oslo_concurrency.processutils [None req-5d55a53c-3610-4abf-a77d-c996a69cf074 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:53:08 compute-0 nova_compute[259400]: 2025-10-11 04:53:08.787 2 DEBUG nova.compute.provider_tree [None req-5d55a53c-3610-4abf-a77d-c996a69cf074 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Inventory has not changed in ProviderTree for provider: 1f05a244-23b6-4149-9b5a-a525e5860d18 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:53:08 compute-0 ceph-mon[74243]: pgmap v949: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 62 KiB/s rd, 6.0 KiB/s wr, 85 op/s
Oct 11 04:53:08 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/3704566311' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:53:08 compute-0 nova_compute[259400]: 2025-10-11 04:53:08.812 2 DEBUG nova.scheduler.client.report [None req-5d55a53c-3610-4abf-a77d-c996a69cf074 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Inventory has not changed for provider 1f05a244-23b6-4149-9b5a-a525e5860d18 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:53:08 compute-0 nova_compute[259400]: 2025-10-11 04:53:08.852 2 DEBUG oslo_concurrency.lockutils [None req-5d55a53c-3610-4abf-a77d-c996a69cf074 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.674s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:53:08 compute-0 nova_compute[259400]: 2025-10-11 04:53:08.852 2 DEBUG nova.compute.manager [None req-5d55a53c-3610-4abf-a77d-c996a69cf074 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] [instance: cdb2b2db-db93-4afd-8188-bc2b971be88e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 04:53:08 compute-0 nova_compute[259400]: 2025-10-11 04:53:08.943 2 DEBUG nova.compute.manager [None req-5d55a53c-3610-4abf-a77d-c996a69cf074 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] [instance: cdb2b2db-db93-4afd-8188-bc2b971be88e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 04:53:08 compute-0 nova_compute[259400]: 2025-10-11 04:53:08.944 2 DEBUG nova.network.neutron [None req-5d55a53c-3610-4abf-a77d-c996a69cf074 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] [instance: cdb2b2db-db93-4afd-8188-bc2b971be88e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 04:53:08 compute-0 podman[267750]: 2025-10-11 04:53:08.982357145 +0000 UTC m=+0.059692727 container create f1eda5d2138bd50693a4de6f6ce2d433280096f89796ed072800a84ed1eefc91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_villani, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:53:08 compute-0 nova_compute[259400]: 2025-10-11 04:53:08.986 2 INFO nova.virt.libvirt.driver [None req-5d55a53c-3610-4abf-a77d-c996a69cf074 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] [instance: cdb2b2db-db93-4afd-8188-bc2b971be88e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 04:53:09 compute-0 nova_compute[259400]: 2025-10-11 04:53:09.011 2 DEBUG nova.compute.manager [None req-5d55a53c-3610-4abf-a77d-c996a69cf074 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] [instance: cdb2b2db-db93-4afd-8188-bc2b971be88e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 04:53:09 compute-0 systemd[1]: Started libpod-conmon-f1eda5d2138bd50693a4de6f6ce2d433280096f89796ed072800a84ed1eefc91.scope.
Oct 11 04:53:09 compute-0 podman[267750]: 2025-10-11 04:53:08.959510072 +0000 UTC m=+0.036845654 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:53:09 compute-0 nova_compute[259400]: 2025-10-11 04:53:09.059 2 INFO nova.virt.block_device [None req-5d55a53c-3610-4abf-a77d-c996a69cf074 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] [instance: cdb2b2db-db93-4afd-8188-bc2b971be88e] Booting with volume 20e340cd-bd3f-4374-a8aa-31d1836505e7 at /dev/vda
Oct 11 04:53:09 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:53:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc1cd8f0b6c2e7ff729f98cc09c02883843653dfd7566d7c726b6126c23f0cdc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:53:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc1cd8f0b6c2e7ff729f98cc09c02883843653dfd7566d7c726b6126c23f0cdc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:53:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc1cd8f0b6c2e7ff729f98cc09c02883843653dfd7566d7c726b6126c23f0cdc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:53:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc1cd8f0b6c2e7ff729f98cc09c02883843653dfd7566d7c726b6126c23f0cdc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:53:09 compute-0 podman[267750]: 2025-10-11 04:53:09.095979904 +0000 UTC m=+0.173315526 container init f1eda5d2138bd50693a4de6f6ce2d433280096f89796ed072800a84ed1eefc91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_villani, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 11 04:53:09 compute-0 podman[267750]: 2025-10-11 04:53:09.108948769 +0000 UTC m=+0.186284341 container start f1eda5d2138bd50693a4de6f6ce2d433280096f89796ed072800a84ed1eefc91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_villani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Oct 11 04:53:09 compute-0 podman[267750]: 2025-10-11 04:53:09.122355305 +0000 UTC m=+0.199690997 container attach f1eda5d2138bd50693a4de6f6ce2d433280096f89796ed072800a84ed1eefc91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_villani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 11 04:53:09 compute-0 nova_compute[259400]: 2025-10-11 04:53:09.523 2 DEBUG os_brick.utils [None req-5d55a53c-3610-4abf-a77d-c996a69cf074 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 11 04:53:09 compute-0 nova_compute[259400]: 2025-10-11 04:53:09.526 2 INFO oslo.privsep.daemon [None req-5d55a53c-3610-4abf-a77d-c996a69cf074 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'os_brick.privileged.default', '--privsep_sock_path', '/tmp/tmp17b610ba/privsep.sock']
Oct 11 04:53:10 compute-0 nova_compute[259400]: 2025-10-11 04:53:10.025 2 DEBUG nova.network.neutron [None req-5d55a53c-3610-4abf-a77d-c996a69cf074 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] [instance: cdb2b2db-db93-4afd-8188-bc2b971be88e] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Oct 11 04:53:10 compute-0 nova_compute[259400]: 2025-10-11 04:53:10.027 2 DEBUG nova.compute.manager [None req-5d55a53c-3610-4abf-a77d-c996a69cf074 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] [instance: cdb2b2db-db93-4afd-8188-bc2b971be88e] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 04:53:10 compute-0 brave_villani[267766]: {
Oct 11 04:53:10 compute-0 brave_villani[267766]:     "235849fc-4683-43e5-9b6a-a0d6f8d1cee8": {
Oct 11 04:53:10 compute-0 brave_villani[267766]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:53:10 compute-0 brave_villani[267766]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:53:10 compute-0 brave_villani[267766]:         "osd_id": 1,
Oct 11 04:53:10 compute-0 brave_villani[267766]:         "osd_uuid": "235849fc-4683-43e5-9b6a-a0d6f8d1cee8",
Oct 11 04:53:10 compute-0 brave_villani[267766]:         "type": "bluestore"
Oct 11 04:53:10 compute-0 brave_villani[267766]:     },
Oct 11 04:53:10 compute-0 brave_villani[267766]:     "29ed28f5-c2da-4c6f-bb64-dc7391248f4a": {
Oct 11 04:53:10 compute-0 brave_villani[267766]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:53:10 compute-0 brave_villani[267766]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:53:10 compute-0 brave_villani[267766]:         "osd_id": 0,
Oct 11 04:53:10 compute-0 brave_villani[267766]:         "osd_uuid": "29ed28f5-c2da-4c6f-bb64-dc7391248f4a",
Oct 11 04:53:10 compute-0 brave_villani[267766]:         "type": "bluestore"
Oct 11 04:53:10 compute-0 brave_villani[267766]:     },
Oct 11 04:53:10 compute-0 brave_villani[267766]:     "30486846-2cc7-4787-8e36-0fb42ef328c5": {
Oct 11 04:53:10 compute-0 brave_villani[267766]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:53:10 compute-0 brave_villani[267766]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:53:10 compute-0 brave_villani[267766]:         "osd_id": 2,
Oct 11 04:53:10 compute-0 brave_villani[267766]:         "osd_uuid": "30486846-2cc7-4787-8e36-0fb42ef328c5",
Oct 11 04:53:10 compute-0 brave_villani[267766]:         "type": "bluestore"
Oct 11 04:53:10 compute-0 brave_villani[267766]:     }
Oct 11 04:53:10 compute-0 brave_villani[267766]: }
Oct 11 04:53:10 compute-0 systemd[1]: libpod-f1eda5d2138bd50693a4de6f6ce2d433280096f89796ed072800a84ed1eefc91.scope: Deactivated successfully.
Oct 11 04:53:10 compute-0 podman[267750]: 2025-10-11 04:53:10.064391212 +0000 UTC m=+1.141726754 container died f1eda5d2138bd50693a4de6f6ce2d433280096f89796ed072800a84ed1eefc91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_villani, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:53:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-fc1cd8f0b6c2e7ff729f98cc09c02883843653dfd7566d7c726b6126c23f0cdc-merged.mount: Deactivated successfully.
Oct 11 04:53:10 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:53:10 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e142 do_prune osdmap full prune enabled
Oct 11 04:53:10 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e143 e143: 3 total, 3 up, 3 in
Oct 11 04:53:10 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e143: 3 total, 3 up, 3 in
Oct 11 04:53:10 compute-0 podman[267750]: 2025-10-11 04:53:10.127157546 +0000 UTC m=+1.204493108 container remove f1eda5d2138bd50693a4de6f6ce2d433280096f89796ed072800a84ed1eefc91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_villani, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:53:10 compute-0 systemd[1]: libpod-conmon-f1eda5d2138bd50693a4de6f6ce2d433280096f89796ed072800a84ed1eefc91.scope: Deactivated successfully.
Oct 11 04:53:10 compute-0 sudo[267628]: pam_unix(sudo:session): session closed for user root
Oct 11 04:53:10 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:53:10 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:53:10 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:53:10 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:53:10 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev 588811f3-24f3-48c4-afda-0d892358d21a does not exist
Oct 11 04:53:10 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev ea0a7a26-fa8b-4f91-bd3b-7333685062fc does not exist
Oct 11 04:53:10 compute-0 sudo[267816]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:53:10 compute-0 sudo[267816]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:53:10 compute-0 sudo[267816]: pam_unix(sudo:session): session closed for user root
Oct 11 04:53:10 compute-0 nova_compute[259400]: 2025-10-11 04:53:10.275 2 INFO oslo.privsep.daemon [None req-5d55a53c-3610-4abf-a77d-c996a69cf074 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Spawned new privsep daemon via rootwrap
Oct 11 04:53:10 compute-0 nova_compute[259400]: 2025-10-11 04:53:10.142 370 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 11 04:53:10 compute-0 nova_compute[259400]: 2025-10-11 04:53:10.145 370 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 11 04:53:10 compute-0 nova_compute[259400]: 2025-10-11 04:53:10.147 370 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Oct 11 04:53:10 compute-0 nova_compute[259400]: 2025-10-11 04:53:10.147 370 INFO oslo.privsep.daemon [-] privsep daemon running as pid 370
Oct 11 04:53:10 compute-0 nova_compute[259400]: 2025-10-11 04:53:10.280 370 DEBUG oslo.privsep.daemon [-] privsep: reply[f0ca4634-cd9b-426c-94d2-15e35fb58925]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:53:10 compute-0 sudo[267841]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 04:53:10 compute-0 sudo[267841]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:53:10 compute-0 sudo[267841]: pam_unix(sudo:session): session closed for user root
Oct 11 04:53:10 compute-0 nova_compute[259400]: 2025-10-11 04:53:10.384 370 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:53:10 compute-0 nova_compute[259400]: 2025-10-11 04:53:10.403 370 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.019s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:53:10 compute-0 nova_compute[259400]: 2025-10-11 04:53:10.403 370 DEBUG oslo.privsep.daemon [-] privsep: reply[1acf33f8-c5c7-420f-b7b9-940f4ffd8266]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:53:10 compute-0 nova_compute[259400]: 2025-10-11 04:53:10.405 370 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:53:10 compute-0 nova_compute[259400]: 2025-10-11 04:53:10.416 370 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:53:10 compute-0 nova_compute[259400]: 2025-10-11 04:53:10.416 370 DEBUG oslo.privsep.daemon [-] privsep: reply[6506ddca-2864-4a1a-8c26-fa4627ba7080]: (4, ('InitiatorName=iqn.1994-05.com.redhat:e0e1bf01d7e', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:53:10 compute-0 nova_compute[259400]: 2025-10-11 04:53:10.421 370 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:53:10 compute-0 nova_compute[259400]: 2025-10-11 04:53:10.435 370 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:53:10 compute-0 nova_compute[259400]: 2025-10-11 04:53:10.436 370 DEBUG oslo.privsep.daemon [-] privsep: reply[d883dbf5-e1db-45c3-9a2e-336b784f561e]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:53:10 compute-0 nova_compute[259400]: 2025-10-11 04:53:10.438 370 DEBUG oslo.privsep.daemon [-] privsep: reply[c7fe48c6-72c6-422b-b5f6-2bf428b0ba1f]: (4, '53cb9e9d-2668-4473-9499-ec86a0f02be2') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:53:10 compute-0 nova_compute[259400]: 2025-10-11 04:53:10.438 2 DEBUG oslo_concurrency.processutils [None req-5d55a53c-3610-4abf-a77d-c996a69cf074 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:53:10 compute-0 nova_compute[259400]: 2025-10-11 04:53:10.462 2 DEBUG oslo_concurrency.processutils [None req-5d55a53c-3610-4abf-a77d-c996a69cf074 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] CMD "nvme version" returned: 0 in 0.024s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:53:10 compute-0 nova_compute[259400]: 2025-10-11 04:53:10.466 2 DEBUG os_brick.initiator.connectors.lightos [None req-5d55a53c-3610-4abf-a77d-c996a69cf074 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 11 04:53:10 compute-0 nova_compute[259400]: 2025-10-11 04:53:10.467 2 DEBUG os_brick.initiator.connectors.lightos [None req-5d55a53c-3610-4abf-a77d-c996a69cf074 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 11 04:53:10 compute-0 nova_compute[259400]: 2025-10-11 04:53:10.467 2 DEBUG os_brick.initiator.connectors.lightos [None req-5d55a53c-3610-4abf-a77d-c996a69cf074 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 11 04:53:10 compute-0 nova_compute[259400]: 2025-10-11 04:53:10.468 2 DEBUG os_brick.utils [None req-5d55a53c-3610-4abf-a77d-c996a69cf074 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] <== get_connector_properties: return (943ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:e0e1bf01d7e', 'do_local_attach': False, 'nvme_hostid': '83042a20-0f72-4c47-8453-e72ead378624', 'system uuid': '53cb9e9d-2668-4473-9499-ec86a0f02be2', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 11 04:53:10 compute-0 nova_compute[259400]: 2025-10-11 04:53:10.468 2 DEBUG nova.virt.block_device [None req-5d55a53c-3610-4abf-a77d-c996a69cf074 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] [instance: cdb2b2db-db93-4afd-8188-bc2b971be88e] Updating existing volume attachment record: 26022d48-9c8d-4172-ad15-9723fa5c4309 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 11 04:53:10 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v951: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 3.6 KiB/s wr, 67 op/s
Oct 11 04:53:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:53:11.017 161813 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:53:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:53:11.018 161813 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:53:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:53:11.018 161813 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:53:11 compute-0 ceph-mon[74243]: osdmap e143: 3 total, 3 up, 3 in
Oct 11 04:53:11 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:53:11 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:53:11 compute-0 ceph-mon[74243]: pgmap v951: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 3.6 KiB/s wr, 67 op/s
Oct 11 04:53:11 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:53:11 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2365451432' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:53:11 compute-0 nova_compute[259400]: 2025-10-11 04:53:11.881 2 DEBUG nova.compute.manager [None req-5d55a53c-3610-4abf-a77d-c996a69cf074 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] [instance: cdb2b2db-db93-4afd-8188-bc2b971be88e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 04:53:11 compute-0 nova_compute[259400]: 2025-10-11 04:53:11.884 2 DEBUG nova.virt.libvirt.driver [None req-5d55a53c-3610-4abf-a77d-c996a69cf074 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] [instance: cdb2b2db-db93-4afd-8188-bc2b971be88e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 04:53:11 compute-0 nova_compute[259400]: 2025-10-11 04:53:11.885 2 INFO nova.virt.libvirt.driver [None req-5d55a53c-3610-4abf-a77d-c996a69cf074 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] [instance: cdb2b2db-db93-4afd-8188-bc2b971be88e] Creating image(s)
Oct 11 04:53:11 compute-0 nova_compute[259400]: 2025-10-11 04:53:11.886 2 DEBUG nova.virt.libvirt.driver [None req-5d55a53c-3610-4abf-a77d-c996a69cf074 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] [instance: cdb2b2db-db93-4afd-8188-bc2b971be88e] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Oct 11 04:53:11 compute-0 nova_compute[259400]: 2025-10-11 04:53:11.886 2 DEBUG nova.virt.libvirt.driver [None req-5d55a53c-3610-4abf-a77d-c996a69cf074 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] [instance: cdb2b2db-db93-4afd-8188-bc2b971be88e] Ensure instance console log exists: /var/lib/nova/instances/cdb2b2db-db93-4afd-8188-bc2b971be88e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 04:53:11 compute-0 nova_compute[259400]: 2025-10-11 04:53:11.887 2 DEBUG oslo_concurrency.lockutils [None req-5d55a53c-3610-4abf-a77d-c996a69cf074 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:53:11 compute-0 nova_compute[259400]: 2025-10-11 04:53:11.888 2 DEBUG oslo_concurrency.lockutils [None req-5d55a53c-3610-4abf-a77d-c996a69cf074 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:53:11 compute-0 nova_compute[259400]: 2025-10-11 04:53:11.888 2 DEBUG oslo_concurrency.lockutils [None req-5d55a53c-3610-4abf-a77d-c996a69cf074 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:53:11 compute-0 nova_compute[259400]: 2025-10-11 04:53:11.892 2 DEBUG nova.virt.libvirt.driver [None req-5d55a53c-3610-4abf-a77d-c996a69cf074 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] [instance: cdb2b2db-db93-4afd-8188-bc2b971be88e] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'guest_format': None, 'boot_index': 0, 'delete_on_termination': True, 'attachment_id': '26022d48-9c8d-4172-ad15-9723fa5c4309', 'device_type': 'disk', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-20e340cd-bd3f-4374-a8aa-31d1836505e7', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '20e340cd-bd3f-4374-a8aa-31d1836505e7', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'cdb2b2db-db93-4afd-8188-bc2b971be88e', 'attached_at': '', 'detached_at': '', 'volume_id': '20e340cd-bd3f-4374-a8aa-31d1836505e7', 'serial': '20e340cd-bd3f-4374-a8aa-31d1836505e7'}, 'disk_bus': 'virtio', 'mount_device': '/dev/vda', 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 04:53:11 compute-0 nova_compute[259400]: 2025-10-11 04:53:11.901 2 WARNING nova.virt.libvirt.driver [None req-5d55a53c-3610-4abf-a77d-c996a69cf074 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 04:53:11 compute-0 nova_compute[259400]: 2025-10-11 04:53:11.911 2 DEBUG nova.virt.libvirt.host [None req-5d55a53c-3610-4abf-a77d-c996a69cf074 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 04:53:11 compute-0 nova_compute[259400]: 2025-10-11 04:53:11.912 2 DEBUG nova.virt.libvirt.host [None req-5d55a53c-3610-4abf-a77d-c996a69cf074 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 04:53:11 compute-0 nova_compute[259400]: 2025-10-11 04:53:11.917 2 DEBUG nova.virt.libvirt.host [None req-5d55a53c-3610-4abf-a77d-c996a69cf074 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 04:53:11 compute-0 nova_compute[259400]: 2025-10-11 04:53:11.918 2 DEBUG nova.virt.libvirt.host [None req-5d55a53c-3610-4abf-a77d-c996a69cf074 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 04:53:11 compute-0 nova_compute[259400]: 2025-10-11 04:53:11.919 2 DEBUG nova.virt.libvirt.driver [None req-5d55a53c-3610-4abf-a77d-c996a69cf074 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 04:53:11 compute-0 nova_compute[259400]: 2025-10-11 04:53:11.920 2 DEBUG nova.virt.hardware [None req-5d55a53c-3610-4abf-a77d-c996a69cf074 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T04:51:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='18296d19-1d30-423e-a079-2f3be2925b06',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 04:53:11 compute-0 nova_compute[259400]: 2025-10-11 04:53:11.921 2 DEBUG nova.virt.hardware [None req-5d55a53c-3610-4abf-a77d-c996a69cf074 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 04:53:11 compute-0 nova_compute[259400]: 2025-10-11 04:53:11.921 2 DEBUG nova.virt.hardware [None req-5d55a53c-3610-4abf-a77d-c996a69cf074 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 04:53:11 compute-0 nova_compute[259400]: 2025-10-11 04:53:11.922 2 DEBUG nova.virt.hardware [None req-5d55a53c-3610-4abf-a77d-c996a69cf074 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 04:53:11 compute-0 nova_compute[259400]: 2025-10-11 04:53:11.922 2 DEBUG nova.virt.hardware [None req-5d55a53c-3610-4abf-a77d-c996a69cf074 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 04:53:11 compute-0 nova_compute[259400]: 2025-10-11 04:53:11.922 2 DEBUG nova.virt.hardware [None req-5d55a53c-3610-4abf-a77d-c996a69cf074 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 04:53:11 compute-0 nova_compute[259400]: 2025-10-11 04:53:11.923 2 DEBUG nova.virt.hardware [None req-5d55a53c-3610-4abf-a77d-c996a69cf074 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 04:53:11 compute-0 nova_compute[259400]: 2025-10-11 04:53:11.923 2 DEBUG nova.virt.hardware [None req-5d55a53c-3610-4abf-a77d-c996a69cf074 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 04:53:11 compute-0 nova_compute[259400]: 2025-10-11 04:53:11.924 2 DEBUG nova.virt.hardware [None req-5d55a53c-3610-4abf-a77d-c996a69cf074 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 04:53:11 compute-0 nova_compute[259400]: 2025-10-11 04:53:11.924 2 DEBUG nova.virt.hardware [None req-5d55a53c-3610-4abf-a77d-c996a69cf074 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 04:53:11 compute-0 nova_compute[259400]: 2025-10-11 04:53:11.925 2 DEBUG nova.virt.hardware [None req-5d55a53c-3610-4abf-a77d-c996a69cf074 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
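[editor's note] The 1:1:1 result above follows from the flavor: with no topology constraints, the only (sockets, cores, threads) split whose product equals 1 vCPU is 1:1:1. The sketch below is an illustration of that enumeration, not nova.virt.hardware verbatim; the function name possible_topologies and the 65536 ceiling (taken from the limits logged above) are my own.

def possible_topologies(vcpus, limit=65536):
    # Enumerate (sockets, cores, threads) whose product equals the vCPU count,
    # with each dimension capped at the per-dimension limit from the log.
    found = []
    for sockets in range(1, min(vcpus, limit) + 1):
        for cores in range(1, min(vcpus, limit) + 1):
            for threads in range(1, min(vcpus, limit) + 1):
                if sockets * cores * threads == vcpus:
                    found.append((sockets, cores, threads))
    return found

print(possible_topologies(1))  # -> [(1, 1, 1)], matching "Possible topologies" above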
Oct 11 04:53:11 compute-0 nova_compute[259400]: 2025-10-11 04:53:11.966 2 DEBUG nova.storage.rbd_utils [None req-5d55a53c-3610-4abf-a77d-c996a69cf074 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] rbd image cdb2b2db-db93-4afd-8188-bc2b971be88e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:53:11 compute-0 nova_compute[259400]: 2025-10-11 04:53:11.972 2 DEBUG nova.privsep.utils [None req-5d55a53c-3610-4abf-a77d-c996a69cf074 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
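[editor's note] The direct I/O probe logged above can be approximated as follows. This is a hedged sketch in the spirit of the check, not nova.privsep.utils.supports_direct_io itself; it assumes a Linux host (os.O_DIRECT available) and uses an anonymous mmap purely to get a page-aligned buffer. The name supports_direct_io and the probe filename are placeholders.

import mmap
import os

def supports_direct_io(dirpath):
    # Try to write one page with O_DIRECT inside dirpath; an OSError (e.g. EINVAL)
    # indicates the filesystem does not support direct I/O.
    testfile = os.path.join(dirpath, ".directio.test")
    fd = None
    try:
        fd = os.open(testfile, os.O_CREAT | os.O_WRONLY | os.O_DIRECT)
        buf = mmap.mmap(-1, 4096)  # anonymous mmap gives a page-aligned buffer
        os.write(fd, buf)
        return True
    except OSError:
        return False
    finally:
        if fd is not None:
            os.close(fd)
        try:
            os.unlink(testfile)
        except OSError:
            pass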
Oct 11 04:53:11 compute-0 nova_compute[259400]: 2025-10-11 04:53:11.973 2 DEBUG oslo_concurrency.processutils [None req-5d55a53c-3610-4abf-a77d-c996a69cf074 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:53:12 compute-0 ceph-mon[74243]: from='client.? 192.168.122.10:0/2365451432' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:53:12 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:53:12 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2829764136' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:53:12 compute-0 nova_compute[259400]: 2025-10-11 04:53:12.463 2 DEBUG oslo_concurrency.processutils [None req-5d55a53c-3610-4abf-a77d-c996a69cf074 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
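[editor's note] The "ceph mon dump --format=json" call above is how the driver learns the monitor endpoints that later appear as the <host> entries of the RBD disks in the guest XML. A minimal sketch of fetching and parsing that output, assuming the same CLI, ceph.conf and 'openstack' keyring user as in the log; the function name ceph_mon_addrs and the exact 'addr' field layout are assumptions on my part.

import json
import subprocess

def ceph_mon_addrs(conf="/etc/ceph/ceph.conf", user="openstack"):
    out = subprocess.check_output(
        ["ceph", "mon", "dump", "--format=json", "--id", user, "--conf", conf]
    )
    mon_map = json.loads(out)
    # Each entry in 'mons' typically carries an address such as '192.168.122.100:6789/0'.
    return [m["addr"].split("/")[0] for m in mon_map.get("mons", [])]

print(ceph_mon_addrs())  # e.g. ['192.168.122.100:6789']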
Oct 11 04:53:12 compute-0 nova_compute[259400]: 2025-10-11 04:53:12.465 2 DEBUG oslo_concurrency.lockutils [None req-5d55a53c-3610-4abf-a77d-c996a69cf074 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Acquiring lock "cache_volume_driver" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:53:12 compute-0 nova_compute[259400]: 2025-10-11 04:53:12.466 2 DEBUG oslo_concurrency.lockutils [None req-5d55a53c-3610-4abf-a77d-c996a69cf074 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Lock "cache_volume_driver" acquired by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:53:12 compute-0 nova_compute[259400]: 2025-10-11 04:53:12.468 2 DEBUG oslo_concurrency.lockutils [None req-5d55a53c-3610-4abf-a77d-c996a69cf074 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Lock "cache_volume_driver" "released" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:53:12 compute-0 systemd[1]: Starting libvirt secret daemon...
Oct 11 04:53:12 compute-0 systemd[1]: Started libvirt secret daemon.
Oct 11 04:53:12 compute-0 nova_compute[259400]: 2025-10-11 04:53:12.573 2 DEBUG nova.objects.instance [None req-5d55a53c-3610-4abf-a77d-c996a69cf074 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Lazy-loading 'pci_devices' on Instance uuid cdb2b2db-db93-4afd-8188-bc2b971be88e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:53:12 compute-0 nova_compute[259400]: 2025-10-11 04:53:12.598 2 DEBUG nova.virt.libvirt.driver [None req-5d55a53c-3610-4abf-a77d-c996a69cf074 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] [instance: cdb2b2db-db93-4afd-8188-bc2b971be88e] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:53:12 compute-0 nova_compute[259400]:   <uuid>cdb2b2db-db93-4afd-8188-bc2b971be88e</uuid>
Oct 11 04:53:12 compute-0 nova_compute[259400]:   <name>instance-00000001</name>
Oct 11 04:53:12 compute-0 nova_compute[259400]:   <memory>131072</memory>
Oct 11 04:53:12 compute-0 nova_compute[259400]:   <vcpu>1</vcpu>
Oct 11 04:53:12 compute-0 nova_compute[259400]:   <metadata>
Oct 11 04:53:12 compute-0 nova_compute[259400]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:53:12 compute-0 nova_compute[259400]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:53:12 compute-0 nova_compute[259400]:       <nova:name>instance-depend-image</nova:name>
Oct 11 04:53:12 compute-0 nova_compute[259400]:       <nova:creationTime>2025-10-11 04:53:11</nova:creationTime>
Oct 11 04:53:12 compute-0 nova_compute[259400]:       <nova:flavor name="m1.nano">
Oct 11 04:53:12 compute-0 nova_compute[259400]:         <nova:memory>128</nova:memory>
Oct 11 04:53:12 compute-0 nova_compute[259400]:         <nova:disk>1</nova:disk>
Oct 11 04:53:12 compute-0 nova_compute[259400]:         <nova:swap>0</nova:swap>
Oct 11 04:53:12 compute-0 nova_compute[259400]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:53:12 compute-0 nova_compute[259400]:         <nova:vcpus>1</nova:vcpus>
Oct 11 04:53:12 compute-0 nova_compute[259400]:       </nova:flavor>
Oct 11 04:53:12 compute-0 nova_compute[259400]:       <nova:owner>
Oct 11 04:53:12 compute-0 nova_compute[259400]:         <nova:user uuid="2b58bbc4ab1b4516a6e6c539429b798d">tempest-ImageDependencyTests-427344473-project-member</nova:user>
Oct 11 04:53:12 compute-0 nova_compute[259400]:         <nova:project uuid="c3b70031321444a586c18d0460667752">tempest-ImageDependencyTests-427344473</nova:project>
Oct 11 04:53:12 compute-0 nova_compute[259400]:       </nova:owner>
Oct 11 04:53:12 compute-0 nova_compute[259400]:       <nova:ports/>
Oct 11 04:53:12 compute-0 nova_compute[259400]:     </nova:instance>
Oct 11 04:53:12 compute-0 nova_compute[259400]:   </metadata>
Oct 11 04:53:12 compute-0 nova_compute[259400]:   <sysinfo type="smbios">
Oct 11 04:53:12 compute-0 nova_compute[259400]:     <system>
Oct 11 04:53:12 compute-0 nova_compute[259400]:       <entry name="manufacturer">RDO</entry>
Oct 11 04:53:12 compute-0 nova_compute[259400]:       <entry name="product">OpenStack Compute</entry>
Oct 11 04:53:12 compute-0 nova_compute[259400]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:53:12 compute-0 nova_compute[259400]:       <entry name="serial">cdb2b2db-db93-4afd-8188-bc2b971be88e</entry>
Oct 11 04:53:12 compute-0 nova_compute[259400]:       <entry name="uuid">cdb2b2db-db93-4afd-8188-bc2b971be88e</entry>
Oct 11 04:53:12 compute-0 nova_compute[259400]:       <entry name="family">Virtual Machine</entry>
Oct 11 04:53:12 compute-0 nova_compute[259400]:     </system>
Oct 11 04:53:12 compute-0 nova_compute[259400]:   </sysinfo>
Oct 11 04:53:12 compute-0 nova_compute[259400]:   <os>
Oct 11 04:53:12 compute-0 nova_compute[259400]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:53:12 compute-0 nova_compute[259400]:     <boot dev="hd"/>
Oct 11 04:53:12 compute-0 nova_compute[259400]:     <smbios mode="sysinfo"/>
Oct 11 04:53:12 compute-0 nova_compute[259400]:   </os>
Oct 11 04:53:12 compute-0 nova_compute[259400]:   <features>
Oct 11 04:53:12 compute-0 nova_compute[259400]:     <acpi/>
Oct 11 04:53:12 compute-0 nova_compute[259400]:     <apic/>
Oct 11 04:53:12 compute-0 nova_compute[259400]:     <vmcoreinfo/>
Oct 11 04:53:12 compute-0 nova_compute[259400]:   </features>
Oct 11 04:53:12 compute-0 nova_compute[259400]:   <clock offset="utc">
Oct 11 04:53:12 compute-0 nova_compute[259400]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 04:53:12 compute-0 nova_compute[259400]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:53:12 compute-0 nova_compute[259400]:     <timer name="hpet" present="no"/>
Oct 11 04:53:12 compute-0 nova_compute[259400]:   </clock>
Oct 11 04:53:12 compute-0 nova_compute[259400]:   <cpu mode="host-model" match="exact">
Oct 11 04:53:12 compute-0 nova_compute[259400]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:53:12 compute-0 nova_compute[259400]:   </cpu>
Oct 11 04:53:12 compute-0 nova_compute[259400]:   <devices>
Oct 11 04:53:12 compute-0 nova_compute[259400]:     <disk type="network" device="cdrom">
Oct 11 04:53:12 compute-0 nova_compute[259400]:       <driver type="raw" cache="none"/>
Oct 11 04:53:12 compute-0 nova_compute[259400]:       <source protocol="rbd" name="vms/cdb2b2db-db93-4afd-8188-bc2b971be88e_disk.config">
Oct 11 04:53:12 compute-0 nova_compute[259400]:         <host name="192.168.122.100" port="6789"/>
Oct 11 04:53:12 compute-0 nova_compute[259400]:       </source>
Oct 11 04:53:12 compute-0 nova_compute[259400]:       <auth username="openstack">
Oct 11 04:53:12 compute-0 nova_compute[259400]:         <secret type="ceph" uuid="166d0489-2ae7-59eb-961c-c1b5cda4b45a"/>
Oct 11 04:53:12 compute-0 nova_compute[259400]:       </auth>
Oct 11 04:53:12 compute-0 nova_compute[259400]:       <target dev="sda" bus="sata"/>
Oct 11 04:53:12 compute-0 nova_compute[259400]:     </disk>
Oct 11 04:53:12 compute-0 nova_compute[259400]:     <disk type="network" device="disk">
Oct 11 04:53:12 compute-0 nova_compute[259400]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 11 04:53:12 compute-0 nova_compute[259400]:       <source protocol="rbd" name="volumes/volume-20e340cd-bd3f-4374-a8aa-31d1836505e7">
Oct 11 04:53:12 compute-0 nova_compute[259400]:         <host name="192.168.122.100" port="6789"/>
Oct 11 04:53:12 compute-0 nova_compute[259400]:       </source>
Oct 11 04:53:12 compute-0 nova_compute[259400]:       <auth username="openstack">
Oct 11 04:53:12 compute-0 nova_compute[259400]:         <secret type="ceph" uuid="166d0489-2ae7-59eb-961c-c1b5cda4b45a"/>
Oct 11 04:53:12 compute-0 nova_compute[259400]:       </auth>
Oct 11 04:53:12 compute-0 nova_compute[259400]:       <target dev="vda" bus="virtio"/>
Oct 11 04:53:12 compute-0 nova_compute[259400]:       <serial>20e340cd-bd3f-4374-a8aa-31d1836505e7</serial>
Oct 11 04:53:12 compute-0 nova_compute[259400]:     </disk>
Oct 11 04:53:12 compute-0 nova_compute[259400]:     <serial type="pty">
Oct 11 04:53:12 compute-0 nova_compute[259400]:       <log file="/var/lib/nova/instances/cdb2b2db-db93-4afd-8188-bc2b971be88e/console.log" append="off"/>
Oct 11 04:53:12 compute-0 nova_compute[259400]:     </serial>
Oct 11 04:53:12 compute-0 nova_compute[259400]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:53:12 compute-0 nova_compute[259400]:     <video>
Oct 11 04:53:12 compute-0 nova_compute[259400]:       <model type="virtio"/>
Oct 11 04:53:12 compute-0 nova_compute[259400]:     </video>
Oct 11 04:53:12 compute-0 nova_compute[259400]:     <input type="tablet" bus="usb"/>
Oct 11 04:53:12 compute-0 nova_compute[259400]:     <rng model="virtio">
Oct 11 04:53:12 compute-0 nova_compute[259400]:       <backend model="random">/dev/urandom</backend>
Oct 11 04:53:12 compute-0 nova_compute[259400]:     </rng>
Oct 11 04:53:12 compute-0 nova_compute[259400]:     <controller type="pci" model="pcie-root"/>
Oct 11 04:53:12 compute-0 nova_compute[259400]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:12 compute-0 nova_compute[259400]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:12 compute-0 nova_compute[259400]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:12 compute-0 nova_compute[259400]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:12 compute-0 nova_compute[259400]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:12 compute-0 nova_compute[259400]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:12 compute-0 nova_compute[259400]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:12 compute-0 nova_compute[259400]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:12 compute-0 nova_compute[259400]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:12 compute-0 nova_compute[259400]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:12 compute-0 nova_compute[259400]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:12 compute-0 nova_compute[259400]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:12 compute-0 nova_compute[259400]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:12 compute-0 nova_compute[259400]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:12 compute-0 nova_compute[259400]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:12 compute-0 nova_compute[259400]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:12 compute-0 nova_compute[259400]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:12 compute-0 nova_compute[259400]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:12 compute-0 nova_compute[259400]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:12 compute-0 nova_compute[259400]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:12 compute-0 nova_compute[259400]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:12 compute-0 nova_compute[259400]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:12 compute-0 nova_compute[259400]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:12 compute-0 nova_compute[259400]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:12 compute-0 nova_compute[259400]:     <controller type="usb" index="0"/>
Oct 11 04:53:12 compute-0 nova_compute[259400]:     <memballoon model="virtio">
Oct 11 04:53:12 compute-0 nova_compute[259400]:       <stats period="10"/>
Oct 11 04:53:12 compute-0 nova_compute[259400]:     </memballoon>
Oct 11 04:53:12 compute-0 nova_compute[259400]:   </devices>
Oct 11 04:53:12 compute-0 nova_compute[259400]: </domain>
Oct 11 04:53:12 compute-0 nova_compute[259400]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
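[editor's note] The domain XML above is the complete guest definition handed to libvirt: q35 machine type, host-model CPU with a 1:1:1 topology, two RBD-backed disks (the Cinder volume on virtio/vda and the config drive on sata/sda), a pty serial console logged to console.log, VNC graphics, a USB tablet and a virtio RNG. As an illustration only (not part of Nova), the disk layout can be pulled back out of such XML with the standard library; summarize_disks is a name I introduce here.

import xml.etree.ElementTree as ET

def summarize_disks(domain_xml):
    # Return (device, target dev, bus, RBD image name) for every <disk> element.
    root = ET.fromstring(domain_xml)
    rows = []
    for disk in root.findall("./devices/disk"):
        target = disk.find("target")
        source = disk.find("source")
        rows.append((
            disk.get("device"),
            target.get("dev") if target is not None else None,
            target.get("bus") if target is not None else None,
            source.get("name") if source is not None else None,
        ))
    return rows

# Feeding the <domain> block logged above would yield:
#   [('cdrom', 'sda', 'sata', 'vms/cdb2b2db-db93-4afd-8188-bc2b971be88e_disk.config'),
#    ('disk',  'vda', 'virtio', 'volumes/volume-20e340cd-bd3f-4374-a8aa-31d1836505e7')]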
Oct 11 04:53:12 compute-0 nova_compute[259400]: 2025-10-11 04:53:12.673 2 DEBUG nova.virt.libvirt.driver [None req-5d55a53c-3610-4abf-a77d-c996a69cf074 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:53:12 compute-0 nova_compute[259400]: 2025-10-11 04:53:12.673 2 DEBUG nova.virt.libvirt.driver [None req-5d55a53c-3610-4abf-a77d-c996a69cf074 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:53:12 compute-0 nova_compute[259400]: 2025-10-11 04:53:12.674 2 INFO nova.virt.libvirt.driver [None req-5d55a53c-3610-4abf-a77d-c996a69cf074 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] [instance: cdb2b2db-db93-4afd-8188-bc2b971be88e] Using config drive
Oct 11 04:53:12 compute-0 nova_compute[259400]: 2025-10-11 04:53:12.696 2 DEBUG nova.storage.rbd_utils [None req-5d55a53c-3610-4abf-a77d-c996a69cf074 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] rbd image cdb2b2db-db93-4afd-8188-bc2b971be88e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:53:12 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v952: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 3.0 KiB/s wr, 56 op/s
Oct 11 04:53:13 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/2829764136' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:53:13 compute-0 ceph-mon[74243]: pgmap v952: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 3.0 KiB/s wr, 56 op/s
Oct 11 04:53:13 compute-0 nova_compute[259400]: 2025-10-11 04:53:13.257 2 INFO nova.virt.libvirt.driver [None req-5d55a53c-3610-4abf-a77d-c996a69cf074 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] [instance: cdb2b2db-db93-4afd-8188-bc2b971be88e] Creating config drive at /var/lib/nova/instances/cdb2b2db-db93-4afd-8188-bc2b971be88e/disk.config
Oct 11 04:53:13 compute-0 nova_compute[259400]: 2025-10-11 04:53:13.266 2 DEBUG oslo_concurrency.processutils [None req-5d55a53c-3610-4abf-a77d-c996a69cf074 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cdb2b2db-db93-4afd-8188-bc2b971be88e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6x0ef0oz execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:53:13 compute-0 nova_compute[259400]: 2025-10-11 04:53:13.422 2 DEBUG oslo_concurrency.processutils [None req-5d55a53c-3610-4abf-a77d-c996a69cf074 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cdb2b2db-db93-4afd-8188-bc2b971be88e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6x0ef0oz" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
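[editor's note] The config drive is simply an ISO 9660 volume labelled "config-2" built from a temporary staging directory of metadata files. A hedged sketch of the same mkisofs invocation shown in the two log lines above; build_config_drive and both paths are placeholders, and the publisher string is passed as a single argument even though the log prints it unquoted.

import subprocess

def build_config_drive(iso_path, staging_dir):
    # Same flags as the logged command; 'config-2' is the volume label that
    # cloud-init and similar agents look for.
    subprocess.check_call([
        "/usr/bin/mkisofs", "-o", iso_path,
        "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
        "-publisher", "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
        "-quiet", "-J", "-r", "-V", "config-2",
        staging_dir,
    ])

# Example (placeholder paths):
# build_config_drive("/var/lib/nova/instances/<uuid>/disk.config", "/tmp/tmp6x0ef0oz")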
Oct 11 04:53:13 compute-0 nova_compute[259400]: 2025-10-11 04:53:13.464 2 DEBUG nova.storage.rbd_utils [None req-5d55a53c-3610-4abf-a77d-c996a69cf074 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] rbd image cdb2b2db-db93-4afd-8188-bc2b971be88e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:53:13 compute-0 nova_compute[259400]: 2025-10-11 04:53:13.471 2 DEBUG oslo_concurrency.processutils [None req-5d55a53c-3610-4abf-a77d-c996a69cf074 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/cdb2b2db-db93-4afd-8188-bc2b971be88e/disk.config cdb2b2db-db93-4afd-8188-bc2b971be88e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:53:14 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e143 do_prune osdmap full prune enabled
Oct 11 04:53:14 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e144 e144: 3 total, 3 up, 3 in
Oct 11 04:53:14 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e144: 3 total, 3 up, 3 in
Oct 11 04:53:14 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v954: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 7.6 KiB/s rd, 639 B/s wr, 10 op/s
Oct 11 04:53:15 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:53:15 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e144 do_prune osdmap full prune enabled
Oct 11 04:53:15 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e145 e145: 3 total, 3 up, 3 in
Oct 11 04:53:15 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e145: 3 total, 3 up, 3 in
Oct 11 04:53:15 compute-0 ceph-mon[74243]: osdmap e144: 3 total, 3 up, 3 in
Oct 11 04:53:15 compute-0 ceph-mon[74243]: pgmap v954: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 7.6 KiB/s rd, 639 B/s wr, 10 op/s
Oct 11 04:53:15 compute-0 nova_compute[259400]: 2025-10-11 04:53:15.197 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:53:15 compute-0 nova_compute[259400]: 2025-10-11 04:53:15.198 2 DEBUG nova.compute.manager [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 04:53:15 compute-0 nova_compute[259400]: 2025-10-11 04:53:15.198 2 DEBUG nova.compute.manager [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 11 04:53:15 compute-0 nova_compute[259400]: 2025-10-11 04:53:15.231 2 DEBUG nova.compute.manager [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] [instance: cdb2b2db-db93-4afd-8188-bc2b971be88e] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 11 04:53:15 compute-0 nova_compute[259400]: 2025-10-11 04:53:15.231 2 DEBUG nova.compute.manager [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 11 04:53:15 compute-0 nova_compute[259400]: 2025-10-11 04:53:15.286 2 DEBUG oslo_concurrency.processutils [None req-5d55a53c-3610-4abf-a77d-c996a69cf074 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/cdb2b2db-db93-4afd-8188-bc2b971be88e/disk.config cdb2b2db-db93-4afd-8188-bc2b971be88e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.814s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:53:15 compute-0 nova_compute[259400]: 2025-10-11 04:53:15.286 2 INFO nova.virt.libvirt.driver [None req-5d55a53c-3610-4abf-a77d-c996a69cf074 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] [instance: cdb2b2db-db93-4afd-8188-bc2b971be88e] Deleting local config drive /var/lib/nova/instances/cdb2b2db-db93-4afd-8188-bc2b971be88e/disk.config because it was imported into RBD.
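[editor's note] Because this host stores instance disks in Ceph, the freshly built ISO is imported into the 'vms' pool as <uuid>_disk.config and the local copy is deleted only after the import returns 0 — exactly the sequence in the log lines above. A minimal sketch under those assumptions (not Nova's imagebackend code); import_config_drive is a name of my own.

import os
import subprocess

def import_config_drive(local_iso, image_name, pool="vms",
                        user="openstack", conf="/etc/ceph/ceph.conf"):
    # Import the ISO as a format-2 RBD image, then remove the local file only
    # once the import has succeeded.
    subprocess.check_call([
        "rbd", "import", "--pool", pool, local_iso, image_name,
        "--image-format=2", "--id", user, "--conf", conf,
    ])
    os.unlink(local_iso)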
Oct 11 04:53:15 compute-0 systemd-machined[214394]: New machine qemu-1-instance-00000001.
Oct 11 04:53:15 compute-0 systemd[1]: Started Virtual Machine qemu-1-instance-00000001.
Oct 11 04:53:16 compute-0 ceph-mon[74243]: osdmap e145: 3 total, 3 up, 3 in
Oct 11 04:53:16 compute-0 nova_compute[259400]: 2025-10-11 04:53:16.196 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:53:16 compute-0 nova_compute[259400]: 2025-10-11 04:53:16.245 2 DEBUG nova.virt.driver [None req-a1f2b475-2e76-45d6-8bbb-14ac83f8002f - - - - - -] Emitting event <LifecycleEvent: 1760158396.2447352, cdb2b2db-db93-4afd-8188-bc2b971be88e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:53:16 compute-0 nova_compute[259400]: 2025-10-11 04:53:16.245 2 INFO nova.compute.manager [None req-a1f2b475-2e76-45d6-8bbb-14ac83f8002f - - - - - -] [instance: cdb2b2db-db93-4afd-8188-bc2b971be88e] VM Resumed (Lifecycle Event)
Oct 11 04:53:16 compute-0 nova_compute[259400]: 2025-10-11 04:53:16.248 2 DEBUG nova.compute.manager [None req-5d55a53c-3610-4abf-a77d-c996a69cf074 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] [instance: cdb2b2db-db93-4afd-8188-bc2b971be88e] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 04:53:16 compute-0 nova_compute[259400]: 2025-10-11 04:53:16.249 2 DEBUG nova.virt.libvirt.driver [None req-5d55a53c-3610-4abf-a77d-c996a69cf074 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] [instance: cdb2b2db-db93-4afd-8188-bc2b971be88e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 04:53:16 compute-0 nova_compute[259400]: 2025-10-11 04:53:16.253 2 INFO nova.virt.libvirt.driver [-] [instance: cdb2b2db-db93-4afd-8188-bc2b971be88e] Instance spawned successfully.
Oct 11 04:53:16 compute-0 nova_compute[259400]: 2025-10-11 04:53:16.254 2 DEBUG nova.virt.libvirt.driver [None req-5d55a53c-3610-4abf-a77d-c996a69cf074 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] [instance: cdb2b2db-db93-4afd-8188-bc2b971be88e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 04:53:16 compute-0 nova_compute[259400]: 2025-10-11 04:53:16.298 2 DEBUG nova.compute.manager [None req-a1f2b475-2e76-45d6-8bbb-14ac83f8002f - - - - - -] [instance: cdb2b2db-db93-4afd-8188-bc2b971be88e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:53:16 compute-0 nova_compute[259400]: 2025-10-11 04:53:16.302 2 DEBUG nova.compute.manager [None req-a1f2b475-2e76-45d6-8bbb-14ac83f8002f - - - - - -] [instance: cdb2b2db-db93-4afd-8188-bc2b971be88e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 04:53:16 compute-0 nova_compute[259400]: 2025-10-11 04:53:16.323 2 DEBUG nova.virt.libvirt.driver [None req-5d55a53c-3610-4abf-a77d-c996a69cf074 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] [instance: cdb2b2db-db93-4afd-8188-bc2b971be88e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:53:16 compute-0 nova_compute[259400]: 2025-10-11 04:53:16.324 2 DEBUG nova.virt.libvirt.driver [None req-5d55a53c-3610-4abf-a77d-c996a69cf074 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] [instance: cdb2b2db-db93-4afd-8188-bc2b971be88e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:53:16 compute-0 nova_compute[259400]: 2025-10-11 04:53:16.324 2 DEBUG nova.virt.libvirt.driver [None req-5d55a53c-3610-4abf-a77d-c996a69cf074 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] [instance: cdb2b2db-db93-4afd-8188-bc2b971be88e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:53:16 compute-0 nova_compute[259400]: 2025-10-11 04:53:16.325 2 DEBUG nova.virt.libvirt.driver [None req-5d55a53c-3610-4abf-a77d-c996a69cf074 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] [instance: cdb2b2db-db93-4afd-8188-bc2b971be88e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:53:16 compute-0 nova_compute[259400]: 2025-10-11 04:53:16.326 2 DEBUG nova.virt.libvirt.driver [None req-5d55a53c-3610-4abf-a77d-c996a69cf074 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] [instance: cdb2b2db-db93-4afd-8188-bc2b971be88e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:53:16 compute-0 nova_compute[259400]: 2025-10-11 04:53:16.327 2 DEBUG nova.virt.libvirt.driver [None req-5d55a53c-3610-4abf-a77d-c996a69cf074 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] [instance: cdb2b2db-db93-4afd-8188-bc2b971be88e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:53:16 compute-0 nova_compute[259400]: 2025-10-11 04:53:16.332 2 INFO nova.compute.manager [None req-a1f2b475-2e76-45d6-8bbb-14ac83f8002f - - - - - -] [instance: cdb2b2db-db93-4afd-8188-bc2b971be88e] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 04:53:16 compute-0 nova_compute[259400]: 2025-10-11 04:53:16.332 2 DEBUG nova.virt.driver [None req-a1f2b475-2e76-45d6-8bbb-14ac83f8002f - - - - - -] Emitting event <LifecycleEvent: 1760158396.2470987, cdb2b2db-db93-4afd-8188-bc2b971be88e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:53:16 compute-0 nova_compute[259400]: 2025-10-11 04:53:16.333 2 INFO nova.compute.manager [None req-a1f2b475-2e76-45d6-8bbb-14ac83f8002f - - - - - -] [instance: cdb2b2db-db93-4afd-8188-bc2b971be88e] VM Started (Lifecycle Event)
Oct 11 04:53:16 compute-0 nova_compute[259400]: 2025-10-11 04:53:16.400 2 INFO nova.compute.manager [None req-5d55a53c-3610-4abf-a77d-c996a69cf074 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] [instance: cdb2b2db-db93-4afd-8188-bc2b971be88e] Took 4.52 seconds to spawn the instance on the hypervisor.
Oct 11 04:53:16 compute-0 nova_compute[259400]: 2025-10-11 04:53:16.401 2 DEBUG nova.compute.manager [None req-5d55a53c-3610-4abf-a77d-c996a69cf074 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] [instance: cdb2b2db-db93-4afd-8188-bc2b971be88e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:53:16 compute-0 nova_compute[259400]: 2025-10-11 04:53:16.411 2 DEBUG nova.compute.manager [None req-a1f2b475-2e76-45d6-8bbb-14ac83f8002f - - - - - -] [instance: cdb2b2db-db93-4afd-8188-bc2b971be88e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:53:16 compute-0 nova_compute[259400]: 2025-10-11 04:53:16.416 2 DEBUG nova.compute.manager [None req-a1f2b475-2e76-45d6-8bbb-14ac83f8002f - - - - - -] [instance: cdb2b2db-db93-4afd-8188-bc2b971be88e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 04:53:16 compute-0 nova_compute[259400]: 2025-10-11 04:53:16.437 2 INFO nova.compute.manager [None req-a1f2b475-2e76-45d6-8bbb-14ac83f8002f - - - - - -] [instance: cdb2b2db-db93-4afd-8188-bc2b971be88e] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 04:53:16 compute-0 nova_compute[259400]: 2025-10-11 04:53:16.477 2 INFO nova.compute.manager [None req-5d55a53c-3610-4abf-a77d-c996a69cf074 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] [instance: cdb2b2db-db93-4afd-8188-bc2b971be88e] Took 8.34 seconds to build instance.
Oct 11 04:53:16 compute-0 nova_compute[259400]: 2025-10-11 04:53:16.503 2 DEBUG oslo_concurrency.lockutils [None req-5d55a53c-3610-4abf-a77d-c996a69cf074 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Lock "cdb2b2db-db93-4afd-8188-bc2b971be88e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.487s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:53:16 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v956: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 619 B/s wr, 2 op/s
Oct 11 04:53:17 compute-0 ceph-mon[74243]: pgmap v956: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 619 B/s wr, 2 op/s
Oct 11 04:53:17 compute-0 nova_compute[259400]: 2025-10-11 04:53:17.196 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:53:17 compute-0 nova_compute[259400]: 2025-10-11 04:53:17.197 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:53:17 compute-0 nova_compute[259400]: 2025-10-11 04:53:17.197 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:53:17 compute-0 nova_compute[259400]: 2025-10-11 04:53:17.198 2 DEBUG nova.compute.manager [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 04:53:17 compute-0 nova_compute[259400]: 2025-10-11 04:53:17.198 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:53:17 compute-0 nova_compute[259400]: 2025-10-11 04:53:17.224 2 DEBUG oslo_concurrency.lockutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:53:17 compute-0 nova_compute[259400]: 2025-10-11 04:53:17.225 2 DEBUG oslo_concurrency.lockutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:53:17 compute-0 nova_compute[259400]: 2025-10-11 04:53:17.226 2 DEBUG oslo_concurrency.lockutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:53:17 compute-0 nova_compute[259400]: 2025-10-11 04:53:17.226 2 DEBUG nova.compute.resource_tracker [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 04:53:17 compute-0 nova_compute[259400]: 2025-10-11 04:53:17.227 2 DEBUG oslo_concurrency.processutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:53:17 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:53:17 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/15232665' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:53:17 compute-0 nova_compute[259400]: 2025-10-11 04:53:17.693 2 DEBUG oslo_concurrency.processutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:53:17 compute-0 nova_compute[259400]: 2025-10-11 04:53:17.777 2 DEBUG nova.virt.libvirt.driver [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 04:53:17 compute-0 nova_compute[259400]: 2025-10-11 04:53:17.777 2 DEBUG nova.virt.libvirt.driver [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 04:53:17 compute-0 nova_compute[259400]: 2025-10-11 04:53:17.916 2 WARNING nova.virt.libvirt.driver [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 04:53:17 compute-0 nova_compute[259400]: 2025-10-11 04:53:17.918 2 DEBUG nova.compute.resource_tracker [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5027MB free_disk=59.988277435302734GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 04:53:17 compute-0 nova_compute[259400]: 2025-10-11 04:53:17.918 2 DEBUG oslo_concurrency.lockutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:53:17 compute-0 nova_compute[259400]: 2025-10-11 04:53:17.918 2 DEBUG oslo_concurrency.lockutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:53:17 compute-0 nova_compute[259400]: 2025-10-11 04:53:17.985 2 DEBUG nova.compute.resource_tracker [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Instance cdb2b2db-db93-4afd-8188-bc2b971be88e actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 04:53:17 compute-0 nova_compute[259400]: 2025-10-11 04:53:17.986 2 DEBUG nova.compute.resource_tracker [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 04:53:17 compute-0 nova_compute[259400]: 2025-10-11 04:53:17.986 2 DEBUG nova.compute.resource_tracker [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 04:53:18 compute-0 nova_compute[259400]: 2025-10-11 04:53:18.022 2 DEBUG oslo_concurrency.processutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:53:18 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/15232665' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:53:18 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:53:18 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/222941772' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:53:18 compute-0 nova_compute[259400]: 2025-10-11 04:53:18.482 2 DEBUG oslo_concurrency.processutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:53:18 compute-0 nova_compute[259400]: 2025-10-11 04:53:18.488 2 DEBUG nova.compute.provider_tree [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Inventory has not changed in ProviderTree for provider: 1f05a244-23b6-4149-9b5a-a525e5860d18 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:53:18 compute-0 nova_compute[259400]: 2025-10-11 04:53:18.506 2 DEBUG nova.scheduler.client.report [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Inventory has not changed for provider 1f05a244-23b6-4149-9b5a-a525e5860d18 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
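[editor's note] The inventory record above is what Placement uses to size this node; schedulable capacity per resource class is (total - reserved) * allocation_ratio. A worked example with the logged numbers, for illustration only:

inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 59,   "reserved": 0,   "allocation_ratio": 0.9},
}

for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(rc, capacity)  # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 53.1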
Oct 11 04:53:18 compute-0 nova_compute[259400]: 2025-10-11 04:53:18.533 2 DEBUG nova.compute.resource_tracker [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 04:53:18 compute-0 nova_compute[259400]: 2025-10-11 04:53:18.534 2 DEBUG oslo_concurrency.lockutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.616s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:53:18 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v957: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 19 KiB/s wr, 19 op/s
Oct 11 04:53:19 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/222941772' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:53:19 compute-0 ceph-mon[74243]: pgmap v957: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 19 KiB/s wr, 19 op/s
Oct 11 04:53:19 compute-0 nova_compute[259400]: 2025-10-11 04:53:19.534 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:53:19 compute-0 nova_compute[259400]: 2025-10-11 04:53:19.534 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:53:19 compute-0 nova_compute[259400]: 2025-10-11 04:53:19.535 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:53:20 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:53:20 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e145 do_prune osdmap full prune enabled
Oct 11 04:53:20 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e146 e146: 3 total, 3 up, 3 in
Oct 11 04:53:20 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e146: 3 total, 3 up, 3 in
Oct 11 04:53:20 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v959: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 23 KiB/s wr, 23 op/s
Oct 11 04:53:21 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e146 do_prune osdmap full prune enabled
Oct 11 04:53:21 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e147 e147: 3 total, 3 up, 3 in
Oct 11 04:53:21 compute-0 ceph-mon[74243]: osdmap e146: 3 total, 3 up, 3 in
Oct 11 04:53:21 compute-0 ceph-mon[74243]: pgmap v959: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 23 KiB/s wr, 23 op/s
Oct 11 04:53:21 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e147: 3 total, 3 up, 3 in
Oct 11 04:53:22 compute-0 ceph-mon[74243]: osdmap e147: 3 total, 3 up, 3 in
Oct 11 04:53:22 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v961: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 21 KiB/s wr, 32 op/s
Oct 11 04:53:23 compute-0 ceph-mon[74243]: pgmap v961: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 21 KiB/s wr, 32 op/s
Oct 11 04:53:24 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e147 do_prune osdmap full prune enabled
Oct 11 04:53:24 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e148 e148: 3 total, 3 up, 3 in
Oct 11 04:53:24 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e148: 3 total, 3 up, 3 in
Oct 11 04:53:24 compute-0 podman[268092]: 2025-10-11 04:53:24.439030313 +0000 UTC m=+0.082315925 container health_status 4f5a84ad32cb899a70b226fbd9c8f419e9440a5ac99b188805a8bf8e9253a2d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_id=iscsid, org.label-schema.vendor=CentOS, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 11 04:53:24 compute-0 podman[268093]: 2025-10-11 04:53:24.451889385 +0000 UTC m=+0.085018383 container health_status 981a12d55b51d26e636fa4bc8b08d383168dc6afe664f12473554b2b0c589967 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd)
Oct 11 04:53:24 compute-0 podman[268091]: 2025-10-11 04:53:24.46845447 +0000 UTC m=+0.110345657 container health_status 086abfbe8dbe9e4742c5d0e8540a7653c1c0a5c00ecda3b6e948040a718938cb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
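The three podman health_status entries above are periodic healthcheck runs for the iscsid, multipathd and ovn_controller containers; each config_data block mounts a script at /openstack/healthcheck inside the container. A minimal sketch of polling the same status from Python, assuming podman is on PATH and these container names exist on the host:

    import json
    import subprocess

    def container_health(name: str) -> str:
        """Return podman's reported health status for one container."""
        out = subprocess.run(
            ["podman", "inspect", "--format", "json", name],
            check=True, capture_output=True, text=True,
        ).stdout
        state = json.loads(out)[0]["State"]
        # .State.Health is only present when the container defines a healthcheck.
        return state.get("Health", {}).get("Status", "none")

    if __name__ == "__main__":
        for name in ("iscsid", "multipathd", "ovn_controller"):
            print(name, container_health(name))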
Oct 11 04:53:24 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v963: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 74 KiB/s rd, 3.3 KiB/s wr, 94 op/s
Oct 11 04:53:25 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:53:25 compute-0 ceph-mon[74243]: osdmap e148: 3 total, 3 up, 3 in
Oct 11 04:53:25 compute-0 ceph-mon[74243]: pgmap v963: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 74 KiB/s rd, 3.3 KiB/s wr, 94 op/s
Oct 11 04:53:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:53:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:53:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:53:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:53:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:53:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:53:26 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e148 do_prune osdmap full prune enabled
Oct 11 04:53:26 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e149 e149: 3 total, 3 up, 3 in
Oct 11 04:53:26 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e149: 3 total, 3 up, 3 in
Oct 11 04:53:26 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v965: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 100 KiB/s rd, 5.2 KiB/s wr, 129 op/s
Oct 11 04:53:27 compute-0 ceph-mon[74243]: osdmap e149: 3 total, 3 up, 3 in
Oct 11 04:53:27 compute-0 ceph-mon[74243]: pgmap v965: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 100 KiB/s rd, 5.2 KiB/s wr, 129 op/s
Oct 11 04:53:28 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e149 do_prune osdmap full prune enabled
Oct 11 04:53:28 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e150 e150: 3 total, 3 up, 3 in
Oct 11 04:53:28 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e150: 3 total, 3 up, 3 in
Oct 11 04:53:28 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v967: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 98 KiB/s rd, 4.8 KiB/s wr, 128 op/s
Oct 11 04:53:29 compute-0 ceph-mon[74243]: osdmap e150: 3 total, 3 up, 3 in
Oct 11 04:53:29 compute-0 ceph-mon[74243]: pgmap v967: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 98 KiB/s rd, 4.8 KiB/s wr, 128 op/s
Oct 11 04:53:30 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:53:30 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e150 do_prune osdmap full prune enabled
Oct 11 04:53:30 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e151 e151: 3 total, 3 up, 3 in
Oct 11 04:53:30 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e151: 3 total, 3 up, 3 in
Oct 11 04:53:30 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v969: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 2.3 KiB/s wr, 48 op/s
Oct 11 04:53:30 compute-0 nova_compute[259400]: 2025-10-11 04:53:30.820 2 DEBUG oslo_concurrency.lockutils [None req-77642b44-bc40-4dae-ad33-8a17a94db169 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Acquiring lock "047de729-a62a-46a9-a27d-e7f184d2ecd0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:53:30 compute-0 nova_compute[259400]: 2025-10-11 04:53:30.821 2 DEBUG oslo_concurrency.lockutils [None req-77642b44-bc40-4dae-ad33-8a17a94db169 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Lock "047de729-a62a-46a9-a27d-e7f184d2ecd0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:53:30 compute-0 nova_compute[259400]: 2025-10-11 04:53:30.840 2 DEBUG nova.compute.manager [None req-77642b44-bc40-4dae-ad33-8a17a94db169 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] [instance: 047de729-a62a-46a9-a27d-e7f184d2ecd0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 04:53:30 compute-0 nova_compute[259400]: 2025-10-11 04:53:30.938 2 DEBUG oslo_concurrency.lockutils [None req-77642b44-bc40-4dae-ad33-8a17a94db169 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:53:30 compute-0 nova_compute[259400]: 2025-10-11 04:53:30.939 2 DEBUG oslo_concurrency.lockutils [None req-77642b44-bc40-4dae-ad33-8a17a94db169 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:53:30 compute-0 nova_compute[259400]: 2025-10-11 04:53:30.952 2 DEBUG nova.virt.hardware [None req-77642b44-bc40-4dae-ad33-8a17a94db169 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 04:53:30 compute-0 nova_compute[259400]: 2025-10-11 04:53:30.952 2 INFO nova.compute.claims [None req-77642b44-bc40-4dae-ad33-8a17a94db169 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] [instance: 047de729-a62a-46a9-a27d-e7f184d2ecd0] Claim successful on node compute-0.ctlplane.example.com
Oct 11 04:53:31 compute-0 nova_compute[259400]: 2025-10-11 04:53:31.098 2 DEBUG oslo_concurrency.processutils [None req-77642b44-bc40-4dae-ad33-8a17a94db169 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:53:31 compute-0 ceph-mon[74243]: osdmap e151: 3 total, 3 up, 3 in
Oct 11 04:53:31 compute-0 ceph-mon[74243]: pgmap v969: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 2.3 KiB/s wr, 48 op/s
Oct 11 04:53:31 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:53:31 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2318508643' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:53:31 compute-0 nova_compute[259400]: 2025-10-11 04:53:31.554 2 DEBUG oslo_concurrency.processutils [None req-77642b44-bc40-4dae-ad33-8a17a94db169 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
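Before claiming disk space, nova shells out to ceph df (the exact command and its 0.456s runtime are logged above) and reads pool capacity from the JSON. A small sketch of the same call and parse, assuming the client.openstack keyring and the standard ceph df JSON layout ("pools" entries with a "stats" block):

    import json
    import subprocess

    def ceph_pool_usage(pool: str, conf="/etc/ceph/ceph.conf", user="openstack"):
        # Same command line as the log, captured instead of streamed.
        out = subprocess.run(
            ["ceph", "df", "--format=json", "--id", user, "--conf", conf],
            check=True, capture_output=True, text=True,
        ).stdout
        for entry in json.loads(out)["pools"]:
            if entry["name"] == pool:
                stats = entry["stats"]
                return stats["bytes_used"], stats["max_avail"]
        raise LookupError(f"pool {pool!r} not reported by ceph df")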
Oct 11 04:53:31 compute-0 nova_compute[259400]: 2025-10-11 04:53:31.558 2 DEBUG nova.compute.provider_tree [None req-77642b44-bc40-4dae-ad33-8a17a94db169 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Inventory has not changed in ProviderTree for provider: 1f05a244-23b6-4149-9b5a-a525e5860d18 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:53:31 compute-0 nova_compute[259400]: 2025-10-11 04:53:31.575 2 DEBUG nova.scheduler.client.report [None req-77642b44-bc40-4dae-ad33-8a17a94db169 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Inventory has not changed for provider 1f05a244-23b6-4149-9b5a-a525e5860d18 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:53:31 compute-0 nova_compute[259400]: 2025-10-11 04:53:31.598 2 DEBUG oslo_concurrency.lockutils [None req-77642b44-bc40-4dae-ad33-8a17a94db169 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.659s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
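The "Acquiring lock" / "acquired" / "released" DEBUG lines come from oslo.concurrency's lockutils, which nova uses to serialise the compute_resources claim. A minimal equivalent of the pattern (not nova's code), assuming oslo.concurrency is installed:

    from oslo_concurrency import lockutils

    # Decorator form: only one caller at a time may run the claim on this host.
    @lockutils.synchronized("compute_resources")
    def claim_resources(instance_uuid):
        return f"claimed for {instance_uuid}"

    # The same named lock can also be taken explicitly as a context manager.
    with lockutils.lock("compute_resources"):
        pass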
Oct 11 04:53:31 compute-0 nova_compute[259400]: 2025-10-11 04:53:31.599 2 DEBUG nova.compute.manager [None req-77642b44-bc40-4dae-ad33-8a17a94db169 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] [instance: 047de729-a62a-46a9-a27d-e7f184d2ecd0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 04:53:31 compute-0 nova_compute[259400]: 2025-10-11 04:53:31.644 2 DEBUG nova.compute.manager [None req-77642b44-bc40-4dae-ad33-8a17a94db169 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] [instance: 047de729-a62a-46a9-a27d-e7f184d2ecd0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 04:53:31 compute-0 nova_compute[259400]: 2025-10-11 04:53:31.644 2 DEBUG nova.network.neutron [None req-77642b44-bc40-4dae-ad33-8a17a94db169 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] [instance: 047de729-a62a-46a9-a27d-e7f184d2ecd0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 04:53:31 compute-0 nova_compute[259400]: 2025-10-11 04:53:31.672 2 INFO nova.virt.libvirt.driver [None req-77642b44-bc40-4dae-ad33-8a17a94db169 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] [instance: 047de729-a62a-46a9-a27d-e7f184d2ecd0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 04:53:31 compute-0 nova_compute[259400]: 2025-10-11 04:53:31.698 2 DEBUG nova.compute.manager [None req-77642b44-bc40-4dae-ad33-8a17a94db169 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] [instance: 047de729-a62a-46a9-a27d-e7f184d2ecd0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 04:53:31 compute-0 nova_compute[259400]: 2025-10-11 04:53:31.824 2 DEBUG nova.compute.manager [None req-77642b44-bc40-4dae-ad33-8a17a94db169 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] [instance: 047de729-a62a-46a9-a27d-e7f184d2ecd0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 04:53:31 compute-0 nova_compute[259400]: 2025-10-11 04:53:31.825 2 DEBUG nova.virt.libvirt.driver [None req-77642b44-bc40-4dae-ad33-8a17a94db169 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] [instance: 047de729-a62a-46a9-a27d-e7f184d2ecd0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 04:53:31 compute-0 nova_compute[259400]: 2025-10-11 04:53:31.825 2 INFO nova.virt.libvirt.driver [None req-77642b44-bc40-4dae-ad33-8a17a94db169 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] [instance: 047de729-a62a-46a9-a27d-e7f184d2ecd0] Creating image(s)
Oct 11 04:53:31 compute-0 nova_compute[259400]: 2025-10-11 04:53:31.846 2 DEBUG nova.storage.rbd_utils [None req-77642b44-bc40-4dae-ad33-8a17a94db169 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] rbd image 047de729-a62a-46a9-a27d-e7f184d2ecd0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:53:31 compute-0 nova_compute[259400]: 2025-10-11 04:53:31.878 2 DEBUG nova.storage.rbd_utils [None req-77642b44-bc40-4dae-ad33-8a17a94db169 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] rbd image 047de729-a62a-46a9-a27d-e7f184d2ecd0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:53:31 compute-0 nova_compute[259400]: 2025-10-11 04:53:31.909 2 DEBUG nova.storage.rbd_utils [None req-77642b44-bc40-4dae-ad33-8a17a94db169 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] rbd image 047de729-a62a-46a9-a27d-e7f184d2ecd0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:53:31 compute-0 nova_compute[259400]: 2025-10-11 04:53:31.913 2 DEBUG oslo_concurrency.lockutils [None req-77642b44-bc40-4dae-ad33-8a17a94db169 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Acquiring lock "cb8026b3b0336121ee0459c2e08e659fbed6f0ef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:53:31 compute-0 nova_compute[259400]: 2025-10-11 04:53:31.914 2 DEBUG oslo_concurrency.lockutils [None req-77642b44-bc40-4dae-ad33-8a17a94db169 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Lock "cb8026b3b0336121ee0459c2e08e659fbed6f0ef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:53:32 compute-0 nova_compute[259400]: 2025-10-11 04:53:32.044 2 DEBUG nova.network.neutron [None req-77642b44-bc40-4dae-ad33-8a17a94db169 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] [instance: 047de729-a62a-46a9-a27d-e7f184d2ecd0] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Oct 11 04:53:32 compute-0 nova_compute[259400]: 2025-10-11 04:53:32.045 2 DEBUG nova.compute.manager [None req-77642b44-bc40-4dae-ad33-8a17a94db169 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] [instance: 047de729-a62a-46a9-a27d-e7f184d2ecd0] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 04:53:32 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/2318508643' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:53:32 compute-0 nova_compute[259400]: 2025-10-11 04:53:32.229 2 DEBUG nova.virt.libvirt.imagebackend [None req-77642b44-bc40-4dae-ad33-8a17a94db169 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Image locations are: [{'url': 'rbd://166d0489-2ae7-59eb-961c-c1b5cda4b45a/images/b1f1dad4-f383-45a8-9526-6f23be0f77b8/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://166d0489-2ae7-59eb-961c-c1b5cda4b45a/images/b1f1dad4-f383-45a8-9526-6f23be0f77b8/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Oct 11 04:53:32 compute-0 nova_compute[259400]: 2025-10-11 04:53:32.281 2 DEBUG nova.virt.libvirt.imagebackend [None req-77642b44-bc40-4dae-ad33-8a17a94db169 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Selected location: {'url': 'rbd://166d0489-2ae7-59eb-961c-c1b5cda4b45a/images/b1f1dad4-f383-45a8-9526-6f23be0f77b8/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Oct 11 04:53:32 compute-0 nova_compute[259400]: 2025-10-11 04:53:32.282 2 DEBUG nova.storage.rbd_utils [None req-77642b44-bc40-4dae-ad33-8a17a94db169 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] cloning images/b1f1dad4-f383-45a8-9526-6f23be0f77b8@snap to None/047de729-a62a-46a9-a27d-e7f184d2ecd0_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 11 04:53:32 compute-0 nova_compute[259400]: 2025-10-11 04:53:32.398 2 DEBUG oslo_concurrency.lockutils [None req-77642b44-bc40-4dae-ad33-8a17a94db169 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Lock "cb8026b3b0336121ee0459c2e08e659fbed6f0ef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.484s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:53:32 compute-0 nova_compute[259400]: 2025-10-11 04:53:32.541 2 DEBUG nova.storage.rbd_utils [None req-77642b44-bc40-4dae-ad33-8a17a94db169 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] resizing rbd image 047de729-a62a-46a9-a27d-e7f184d2ecd0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 04:53:32 compute-0 nova_compute[259400]: 2025-10-11 04:53:32.627 2 DEBUG nova.objects.instance [None req-77642b44-bc40-4dae-ad33-8a17a94db169 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Lazy-loading 'migration_context' on Instance uuid 047de729-a62a-46a9-a27d-e7f184d2ecd0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:53:32 compute-0 nova_compute[259400]: 2025-10-11 04:53:32.647 2 DEBUG nova.virt.libvirt.driver [None req-77642b44-bc40-4dae-ad33-8a17a94db169 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] [instance: 047de729-a62a-46a9-a27d-e7f184d2ecd0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
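The root disk was produced by cloning the Glance image snapshot images/b1f1dad4-f383-45a8-9526-6f23be0f77b8@snap into vms/047de729-a62a-46a9-a27d-e7f184d2ecd0_disk and resizing it to 1073741824 bytes, as logged above. A sketch of those two steps with the python3-rados / python3-rbd bindings (an assumption; nova's rbd_utils wraps these calls differently), reusing the pool, image and snapshot names from the log:

    import rados
    import rbd

    with rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack") as cluster:
        with cluster.open_ioctx("images") as src, cluster.open_ioctx("vms") as dst:
            # The parent snapshot must already be protected (glance does this on upload).
            rbd.RBD().clone(src, "b1f1dad4-f383-45a8-9526-6f23be0f77b8", "snap",
                            dst, "047de729-a62a-46a9-a27d-e7f184d2ecd0_disk")
            with rbd.Image(dst, "047de729-a62a-46a9-a27d-e7f184d2ecd0_disk") as img:
                img.resize(1 * 1024 ** 3)   # 1073741824 bytes, the flavor's 1 GiB root disk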
Oct 11 04:53:32 compute-0 nova_compute[259400]: 2025-10-11 04:53:32.647 2 DEBUG nova.virt.libvirt.driver [None req-77642b44-bc40-4dae-ad33-8a17a94db169 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] [instance: 047de729-a62a-46a9-a27d-e7f184d2ecd0] Ensure instance console log exists: /var/lib/nova/instances/047de729-a62a-46a9-a27d-e7f184d2ecd0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 04:53:32 compute-0 nova_compute[259400]: 2025-10-11 04:53:32.648 2 DEBUG oslo_concurrency.lockutils [None req-77642b44-bc40-4dae-ad33-8a17a94db169 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:53:32 compute-0 nova_compute[259400]: 2025-10-11 04:53:32.648 2 DEBUG oslo_concurrency.lockutils [None req-77642b44-bc40-4dae-ad33-8a17a94db169 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:53:32 compute-0 nova_compute[259400]: 2025-10-11 04:53:32.648 2 DEBUG oslo_concurrency.lockutils [None req-77642b44-bc40-4dae-ad33-8a17a94db169 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:53:32 compute-0 nova_compute[259400]: 2025-10-11 04:53:32.650 2 DEBUG nova.virt.libvirt.driver [None req-77642b44-bc40-4dae-ad33-8a17a94db169 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] [instance: 047de729-a62a-46a9-a27d-e7f184d2ecd0] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='cfd88e6366cf793e37f91f9b7e81847d',container_format='bare',created_at=2025-10-11T04:53:26Z,direct_url=<?>,disk_format='raw',id=b1f1dad4-f383-45a8-9526-6f23be0f77b8,min_disk=0,min_ram=0,name='tempest-image-dependency-test-1847397856',owner='c3b70031321444a586c18d0460667752',properties=ImageMetaProps,protected=<?>,size=1024,status='active',tags=<?>,updated_at=2025-10-11T04:53:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'size': 0, 'image_id': 'b1f1dad4-f383-45a8-9526-6f23be0f77b8'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 04:53:32 compute-0 nova_compute[259400]: 2025-10-11 04:53:32.655 2 WARNING nova.virt.libvirt.driver [None req-77642b44-bc40-4dae-ad33-8a17a94db169 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 04:53:32 compute-0 nova_compute[259400]: 2025-10-11 04:53:32.660 2 DEBUG nova.virt.libvirt.host [None req-77642b44-bc40-4dae-ad33-8a17a94db169 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 04:53:32 compute-0 nova_compute[259400]: 2025-10-11 04:53:32.660 2 DEBUG nova.virt.libvirt.host [None req-77642b44-bc40-4dae-ad33-8a17a94db169 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 04:53:32 compute-0 nova_compute[259400]: 2025-10-11 04:53:32.664 2 DEBUG nova.virt.libvirt.host [None req-77642b44-bc40-4dae-ad33-8a17a94db169 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 04:53:32 compute-0 nova_compute[259400]: 2025-10-11 04:53:32.665 2 DEBUG nova.virt.libvirt.host [None req-77642b44-bc40-4dae-ad33-8a17a94db169 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 04:53:32 compute-0 nova_compute[259400]: 2025-10-11 04:53:32.666 2 DEBUG nova.virt.libvirt.driver [None req-77642b44-bc40-4dae-ad33-8a17a94db169 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 04:53:32 compute-0 nova_compute[259400]: 2025-10-11 04:53:32.666 2 DEBUG nova.virt.hardware [None req-77642b44-bc40-4dae-ad33-8a17a94db169 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T04:51:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='18296d19-1d30-423e-a079-2f3be2925b06',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='cfd88e6366cf793e37f91f9b7e81847d',container_format='bare',created_at=2025-10-11T04:53:26Z,direct_url=<?>,disk_format='raw',id=b1f1dad4-f383-45a8-9526-6f23be0f77b8,min_disk=0,min_ram=0,name='tempest-image-dependency-test-1847397856',owner='c3b70031321444a586c18d0460667752',properties=ImageMetaProps,protected=<?>,size=1024,status='active',tags=<?>,updated_at=2025-10-11T04:53:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 04:53:32 compute-0 nova_compute[259400]: 2025-10-11 04:53:32.666 2 DEBUG nova.virt.hardware [None req-77642b44-bc40-4dae-ad33-8a17a94db169 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 04:53:32 compute-0 nova_compute[259400]: 2025-10-11 04:53:32.667 2 DEBUG nova.virt.hardware [None req-77642b44-bc40-4dae-ad33-8a17a94db169 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 04:53:32 compute-0 nova_compute[259400]: 2025-10-11 04:53:32.667 2 DEBUG nova.virt.hardware [None req-77642b44-bc40-4dae-ad33-8a17a94db169 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 04:53:32 compute-0 nova_compute[259400]: 2025-10-11 04:53:32.668 2 DEBUG nova.virt.hardware [None req-77642b44-bc40-4dae-ad33-8a17a94db169 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 04:53:32 compute-0 nova_compute[259400]: 2025-10-11 04:53:32.668 2 DEBUG nova.virt.hardware [None req-77642b44-bc40-4dae-ad33-8a17a94db169 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 04:53:32 compute-0 nova_compute[259400]: 2025-10-11 04:53:32.668 2 DEBUG nova.virt.hardware [None req-77642b44-bc40-4dae-ad33-8a17a94db169 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 04:53:32 compute-0 nova_compute[259400]: 2025-10-11 04:53:32.669 2 DEBUG nova.virt.hardware [None req-77642b44-bc40-4dae-ad33-8a17a94db169 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 04:53:32 compute-0 nova_compute[259400]: 2025-10-11 04:53:32.669 2 DEBUG nova.virt.hardware [None req-77642b44-bc40-4dae-ad33-8a17a94db169 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 04:53:32 compute-0 nova_compute[259400]: 2025-10-11 04:53:32.670 2 DEBUG nova.virt.hardware [None req-77642b44-bc40-4dae-ad33-8a17a94db169 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 04:53:32 compute-0 nova_compute[259400]: 2025-10-11 04:53:32.670 2 DEBUG nova.virt.hardware [None req-77642b44-bc40-4dae-ad33-8a17a94db169 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
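The topology lines show nova enumerating (sockets, cores, threads) layouts for 1 vCPU under the 65536/65536/65536 limits and ending up with 1:1:1. The function below is not nova's implementation, only a compact restatement of that enumeration step:

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        # Enumerate every factorisation of the vCPU count that respects the limits.
        topos = []
        for sockets in range(1, min(vcpus, max_sockets) + 1):
            if vcpus % sockets:
                continue
            for cores in range(1, min(vcpus // sockets, max_cores) + 1):
                if (vcpus // sockets) % cores:
                    continue
                threads = vcpus // (sockets * cores)
                if threads <= max_threads:
                    topos.append((sockets, cores, threads))
        return topos

    print(possible_topologies(1))   # [(1, 1, 1)], matching the single topology logged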
Oct 11 04:53:32 compute-0 nova_compute[259400]: 2025-10-11 04:53:32.675 2 DEBUG oslo_concurrency.processutils [None req-77642b44-bc40-4dae-ad33-8a17a94db169 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:53:32 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v970: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 4.3 KiB/s wr, 65 op/s
Oct 11 04:53:33 compute-0 ceph-mon[74243]: pgmap v970: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 4.3 KiB/s wr, 65 op/s
Oct 11 04:53:33 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:53:33 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/44978983' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:53:33 compute-0 nova_compute[259400]: 2025-10-11 04:53:33.190 2 DEBUG oslo_concurrency.processutils [None req-77642b44-bc40-4dae-ad33-8a17a94db169 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:53:33 compute-0 nova_compute[259400]: 2025-10-11 04:53:33.222 2 DEBUG nova.storage.rbd_utils [None req-77642b44-bc40-4dae-ad33-8a17a94db169 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] rbd image 047de729-a62a-46a9-a27d-e7f184d2ecd0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:53:33 compute-0 nova_compute[259400]: 2025-10-11 04:53:33.227 2 DEBUG oslo_concurrency.processutils [None req-77642b44-bc40-4dae-ad33-8a17a94db169 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:53:33 compute-0 podman[268412]: 2025-10-11 04:53:33.438226905 +0000 UTC m=+0.082839688 container health_status 7d738271425699243ade5f883e5c9759d51b96fd0ce562173990a66d2fbaffdf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 11 04:53:33 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:53:33 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3048952401' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:53:33 compute-0 nova_compute[259400]: 2025-10-11 04:53:33.661 2 DEBUG oslo_concurrency.processutils [None req-77642b44-bc40-4dae-ad33-8a17a94db169 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
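The two ceph mon dump calls give nova the monitor endpoints; the 192.168.122.100:6789 address they return is what later appears as the <host> element in the guest disk XML. A sketch of the same lookup, assuming the mon dump JSON carries an "addr" field per monitor:

    import json
    import subprocess

    def monitor_endpoints(conf="/etc/ceph/ceph.conf", user="openstack"):
        out = subprocess.run(
            ["ceph", "mon", "dump", "--format=json", "--id", user, "--conf", conf],
            check=True, capture_output=True, text=True,
        ).stdout
        dump = json.loads(out)
        # Each mon entry reports an address such as "192.168.122.100:6789/0".
        return [m["addr"].split("/")[0] for m in dump["mons"]]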
Oct 11 04:53:33 compute-0 nova_compute[259400]: 2025-10-11 04:53:33.663 2 DEBUG nova.objects.instance [None req-77642b44-bc40-4dae-ad33-8a17a94db169 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Lazy-loading 'pci_devices' on Instance uuid 047de729-a62a-46a9-a27d-e7f184d2ecd0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:53:33 compute-0 nova_compute[259400]: 2025-10-11 04:53:33.684 2 DEBUG nova.virt.libvirt.driver [None req-77642b44-bc40-4dae-ad33-8a17a94db169 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] [instance: 047de729-a62a-46a9-a27d-e7f184d2ecd0] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:53:33 compute-0 nova_compute[259400]:   <uuid>047de729-a62a-46a9-a27d-e7f184d2ecd0</uuid>
Oct 11 04:53:33 compute-0 nova_compute[259400]:   <name>instance-00000002</name>
Oct 11 04:53:33 compute-0 nova_compute[259400]:   <memory>131072</memory>
Oct 11 04:53:33 compute-0 nova_compute[259400]:   <vcpu>1</vcpu>
Oct 11 04:53:33 compute-0 nova_compute[259400]:   <metadata>
Oct 11 04:53:33 compute-0 nova_compute[259400]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:53:33 compute-0 nova_compute[259400]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:53:33 compute-0 nova_compute[259400]:       <nova:name>instance-depend-image</nova:name>
Oct 11 04:53:33 compute-0 nova_compute[259400]:       <nova:creationTime>2025-10-11 04:53:32</nova:creationTime>
Oct 11 04:53:33 compute-0 nova_compute[259400]:       <nova:flavor name="m1.nano">
Oct 11 04:53:33 compute-0 nova_compute[259400]:         <nova:memory>128</nova:memory>
Oct 11 04:53:33 compute-0 nova_compute[259400]:         <nova:disk>1</nova:disk>
Oct 11 04:53:33 compute-0 nova_compute[259400]:         <nova:swap>0</nova:swap>
Oct 11 04:53:33 compute-0 nova_compute[259400]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:53:33 compute-0 nova_compute[259400]:         <nova:vcpus>1</nova:vcpus>
Oct 11 04:53:33 compute-0 nova_compute[259400]:       </nova:flavor>
Oct 11 04:53:33 compute-0 nova_compute[259400]:       <nova:owner>
Oct 11 04:53:33 compute-0 nova_compute[259400]:         <nova:user uuid="2b58bbc4ab1b4516a6e6c539429b798d">tempest-ImageDependencyTests-427344473-project-member</nova:user>
Oct 11 04:53:33 compute-0 nova_compute[259400]:         <nova:project uuid="c3b70031321444a586c18d0460667752">tempest-ImageDependencyTests-427344473</nova:project>
Oct 11 04:53:33 compute-0 nova_compute[259400]:       </nova:owner>
Oct 11 04:53:33 compute-0 nova_compute[259400]:       <nova:root type="image" uuid="b1f1dad4-f383-45a8-9526-6f23be0f77b8"/>
Oct 11 04:53:33 compute-0 nova_compute[259400]:       <nova:ports/>
Oct 11 04:53:33 compute-0 nova_compute[259400]:     </nova:instance>
Oct 11 04:53:33 compute-0 nova_compute[259400]:   </metadata>
Oct 11 04:53:33 compute-0 nova_compute[259400]:   <sysinfo type="smbios">
Oct 11 04:53:33 compute-0 nova_compute[259400]:     <system>
Oct 11 04:53:33 compute-0 nova_compute[259400]:       <entry name="manufacturer">RDO</entry>
Oct 11 04:53:33 compute-0 nova_compute[259400]:       <entry name="product">OpenStack Compute</entry>
Oct 11 04:53:33 compute-0 nova_compute[259400]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:53:33 compute-0 nova_compute[259400]:       <entry name="serial">047de729-a62a-46a9-a27d-e7f184d2ecd0</entry>
Oct 11 04:53:33 compute-0 nova_compute[259400]:       <entry name="uuid">047de729-a62a-46a9-a27d-e7f184d2ecd0</entry>
Oct 11 04:53:33 compute-0 nova_compute[259400]:       <entry name="family">Virtual Machine</entry>
Oct 11 04:53:33 compute-0 nova_compute[259400]:     </system>
Oct 11 04:53:33 compute-0 nova_compute[259400]:   </sysinfo>
Oct 11 04:53:33 compute-0 nova_compute[259400]:   <os>
Oct 11 04:53:33 compute-0 nova_compute[259400]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:53:33 compute-0 nova_compute[259400]:     <boot dev="hd"/>
Oct 11 04:53:33 compute-0 nova_compute[259400]:     <smbios mode="sysinfo"/>
Oct 11 04:53:33 compute-0 nova_compute[259400]:   </os>
Oct 11 04:53:33 compute-0 nova_compute[259400]:   <features>
Oct 11 04:53:33 compute-0 nova_compute[259400]:     <acpi/>
Oct 11 04:53:33 compute-0 nova_compute[259400]:     <apic/>
Oct 11 04:53:33 compute-0 nova_compute[259400]:     <vmcoreinfo/>
Oct 11 04:53:33 compute-0 nova_compute[259400]:   </features>
Oct 11 04:53:33 compute-0 nova_compute[259400]:   <clock offset="utc">
Oct 11 04:53:33 compute-0 nova_compute[259400]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 04:53:33 compute-0 nova_compute[259400]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:53:33 compute-0 nova_compute[259400]:     <timer name="hpet" present="no"/>
Oct 11 04:53:33 compute-0 nova_compute[259400]:   </clock>
Oct 11 04:53:33 compute-0 nova_compute[259400]:   <cpu mode="host-model" match="exact">
Oct 11 04:53:33 compute-0 nova_compute[259400]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:53:33 compute-0 nova_compute[259400]:   </cpu>
Oct 11 04:53:33 compute-0 nova_compute[259400]:   <devices>
Oct 11 04:53:33 compute-0 nova_compute[259400]:     <disk type="network" device="disk">
Oct 11 04:53:33 compute-0 nova_compute[259400]:       <driver type="raw" cache="none"/>
Oct 11 04:53:33 compute-0 nova_compute[259400]:       <source protocol="rbd" name="vms/047de729-a62a-46a9-a27d-e7f184d2ecd0_disk">
Oct 11 04:53:33 compute-0 nova_compute[259400]:         <host name="192.168.122.100" port="6789"/>
Oct 11 04:53:33 compute-0 nova_compute[259400]:       </source>
Oct 11 04:53:33 compute-0 nova_compute[259400]:       <auth username="openstack">
Oct 11 04:53:33 compute-0 nova_compute[259400]:         <secret type="ceph" uuid="166d0489-2ae7-59eb-961c-c1b5cda4b45a"/>
Oct 11 04:53:33 compute-0 nova_compute[259400]:       </auth>
Oct 11 04:53:33 compute-0 nova_compute[259400]:       <target dev="vda" bus="virtio"/>
Oct 11 04:53:33 compute-0 nova_compute[259400]:     </disk>
Oct 11 04:53:33 compute-0 nova_compute[259400]:     <disk type="network" device="cdrom">
Oct 11 04:53:33 compute-0 nova_compute[259400]:       <driver type="raw" cache="none"/>
Oct 11 04:53:33 compute-0 nova_compute[259400]:       <source protocol="rbd" name="vms/047de729-a62a-46a9-a27d-e7f184d2ecd0_disk.config">
Oct 11 04:53:33 compute-0 nova_compute[259400]:         <host name="192.168.122.100" port="6789"/>
Oct 11 04:53:33 compute-0 nova_compute[259400]:       </source>
Oct 11 04:53:33 compute-0 nova_compute[259400]:       <auth username="openstack">
Oct 11 04:53:33 compute-0 nova_compute[259400]:         <secret type="ceph" uuid="166d0489-2ae7-59eb-961c-c1b5cda4b45a"/>
Oct 11 04:53:33 compute-0 nova_compute[259400]:       </auth>
Oct 11 04:53:33 compute-0 nova_compute[259400]:       <target dev="sda" bus="sata"/>
Oct 11 04:53:33 compute-0 nova_compute[259400]:     </disk>
Oct 11 04:53:33 compute-0 nova_compute[259400]:     <serial type="pty">
Oct 11 04:53:33 compute-0 nova_compute[259400]:       <log file="/var/lib/nova/instances/047de729-a62a-46a9-a27d-e7f184d2ecd0/console.log" append="off"/>
Oct 11 04:53:33 compute-0 nova_compute[259400]:     </serial>
Oct 11 04:53:33 compute-0 nova_compute[259400]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:53:33 compute-0 nova_compute[259400]:     <video>
Oct 11 04:53:33 compute-0 nova_compute[259400]:       <model type="virtio"/>
Oct 11 04:53:33 compute-0 nova_compute[259400]:     </video>
Oct 11 04:53:33 compute-0 nova_compute[259400]:     <input type="tablet" bus="usb"/>
Oct 11 04:53:33 compute-0 nova_compute[259400]:     <rng model="virtio">
Oct 11 04:53:33 compute-0 nova_compute[259400]:       <backend model="random">/dev/urandom</backend>
Oct 11 04:53:33 compute-0 nova_compute[259400]:     </rng>
Oct 11 04:53:33 compute-0 nova_compute[259400]:     <controller type="pci" model="pcie-root"/>
Oct 11 04:53:33 compute-0 nova_compute[259400]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:33 compute-0 nova_compute[259400]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:33 compute-0 nova_compute[259400]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:33 compute-0 nova_compute[259400]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:33 compute-0 nova_compute[259400]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:33 compute-0 nova_compute[259400]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:33 compute-0 nova_compute[259400]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:33 compute-0 nova_compute[259400]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:33 compute-0 nova_compute[259400]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:33 compute-0 nova_compute[259400]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:33 compute-0 nova_compute[259400]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:33 compute-0 nova_compute[259400]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:33 compute-0 nova_compute[259400]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:33 compute-0 nova_compute[259400]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:33 compute-0 nova_compute[259400]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:33 compute-0 nova_compute[259400]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:33 compute-0 nova_compute[259400]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:33 compute-0 nova_compute[259400]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:33 compute-0 nova_compute[259400]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:33 compute-0 nova_compute[259400]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:33 compute-0 nova_compute[259400]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:33 compute-0 nova_compute[259400]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:33 compute-0 nova_compute[259400]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:33 compute-0 nova_compute[259400]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:33 compute-0 nova_compute[259400]:     <controller type="usb" index="0"/>
Oct 11 04:53:33 compute-0 nova_compute[259400]:     <memballoon model="virtio">
Oct 11 04:53:33 compute-0 nova_compute[259400]:       <stats period="10"/>
Oct 11 04:53:33 compute-0 nova_compute[259400]:     </memballoon>
Oct 11 04:53:33 compute-0 nova_compute[259400]:   </devices>
Oct 11 04:53:33 compute-0 nova_compute[259400]: </domain>
Oct 11 04:53:33 compute-0 nova_compute[259400]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
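Once the domain XML above is composed, nova hands it to libvirt to define and boot the guest (the systemd-machined "Started Virtual Machine" line further below is the visible result). A stripped-down sketch with the libvirt-python bindings; the file holding a copy of the XML is hypothetical:

    import libvirt

    with open("instance-00000002.xml") as f:   # hypothetical local copy of the dump above
        guest_xml = f.read()

    conn = libvirt.open("qemu:///system")
    try:
        dom = conn.defineXML(guest_xml)   # persist the definition, like `virsh define`
        dom.create()                      # boot it, like `virsh start`
        print(dom.name(), "active:", bool(dom.isActive()))
    finally:
        conn.close()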
Oct 11 04:53:33 compute-0 nova_compute[259400]: 2025-10-11 04:53:33.743 2 DEBUG nova.virt.libvirt.driver [None req-77642b44-bc40-4dae-ad33-8a17a94db169 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:53:33 compute-0 nova_compute[259400]: 2025-10-11 04:53:33.743 2 DEBUG nova.virt.libvirt.driver [None req-77642b44-bc40-4dae-ad33-8a17a94db169 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:53:33 compute-0 nova_compute[259400]: 2025-10-11 04:53:33.744 2 INFO nova.virt.libvirt.driver [None req-77642b44-bc40-4dae-ad33-8a17a94db169 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] [instance: 047de729-a62a-46a9-a27d-e7f184d2ecd0] Using config drive
Oct 11 04:53:33 compute-0 nova_compute[259400]: 2025-10-11 04:53:33.779 2 DEBUG nova.storage.rbd_utils [None req-77642b44-bc40-4dae-ad33-8a17a94db169 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] rbd image 047de729-a62a-46a9-a27d-e7f184d2ecd0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:53:33 compute-0 nova_compute[259400]: 2025-10-11 04:53:33.968 2 INFO nova.virt.libvirt.driver [None req-77642b44-bc40-4dae-ad33-8a17a94db169 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] [instance: 047de729-a62a-46a9-a27d-e7f184d2ecd0] Creating config drive at /var/lib/nova/instances/047de729-a62a-46a9-a27d-e7f184d2ecd0/disk.config
Oct 11 04:53:33 compute-0 nova_compute[259400]: 2025-10-11 04:53:33.980 2 DEBUG oslo_concurrency.processutils [None req-77642b44-bc40-4dae-ad33-8a17a94db169 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/047de729-a62a-46a9-a27d-e7f184d2ecd0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpeapwe67p execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:53:34 compute-0 nova_compute[259400]: 2025-10-11 04:53:34.144 2 DEBUG oslo_concurrency.processutils [None req-77642b44-bc40-4dae-ad33-8a17a94db169 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/047de729-a62a-46a9-a27d-e7f184d2ecd0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpeapwe67p" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:53:34 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/44978983' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:53:34 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/3048952401' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:53:34 compute-0 nova_compute[259400]: 2025-10-11 04:53:34.171 2 DEBUG nova.storage.rbd_utils [None req-77642b44-bc40-4dae-ad33-8a17a94db169 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] rbd image 047de729-a62a-46a9-a27d-e7f184d2ecd0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:53:34 compute-0 nova_compute[259400]: 2025-10-11 04:53:34.175 2 DEBUG oslo_concurrency.processutils [None req-77642b44-bc40-4dae-ad33-8a17a94db169 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/047de729-a62a-46a9-a27d-e7f184d2ecd0/disk.config 047de729-a62a-46a9-a27d-e7f184d2ecd0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:53:34 compute-0 nova_compute[259400]: 2025-10-11 04:53:34.337 2 DEBUG oslo_concurrency.processutils [None req-77642b44-bc40-4dae-ad33-8a17a94db169 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/047de729-a62a-46a9-a27d-e7f184d2ecd0/disk.config 047de729-a62a-46a9-a27d-e7f184d2ecd0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:53:34 compute-0 nova_compute[259400]: 2025-10-11 04:53:34.338 2 INFO nova.virt.libvirt.driver [None req-77642b44-bc40-4dae-ad33-8a17a94db169 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] [instance: 047de729-a62a-46a9-a27d-e7f184d2ecd0] Deleting local config drive /var/lib/nova/instances/047de729-a62a-46a9-a27d-e7f184d2ecd0/disk.config because it was imported into RBD.
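The config-drive sequence recorded above condenses to three steps: build the ISO with mkisofs, rbd import it into the vms pool as <uuid>_disk.config, then drop the local copy. A sketch reusing the command lines from the log; /tmp/tmpeapwe67p is the transient staging directory nova happened to use and would differ on any other run:

    import os
    import subprocess

    inst = "047de729-a62a-46a9-a27d-e7f184d2ecd0"
    iso = f"/var/lib/nova/instances/{inst}/disk.config"

    # 1) Build the config-2 ISO with the flags shown in the log.
    subprocess.run(
        ["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase", "-allow-multidot",
         "-l", "-publisher", "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
         "-quiet", "-J", "-r", "-V", "config-2", "/tmp/tmpeapwe67p"],
        check=True)

    # 2) Import it into the vms pool under the name the disk XML references.
    subprocess.run(
        ["rbd", "import", "--pool", "vms", iso, f"{inst}_disk.config",
         "--image-format=2", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True)

    # 3) Remove the local file, as the "Deleting local config drive" line records.
    os.remove(iso)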
Oct 11 04:53:34 compute-0 systemd-machined[214394]: New machine qemu-2-instance-00000002.
Oct 11 04:53:34 compute-0 systemd[1]: Started Virtual Machine qemu-2-instance-00000002.
Oct 11 04:53:34 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v971: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 2.5 KiB/s wr, 33 op/s
Oct 11 04:53:35 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:53:35 compute-0 ceph-mon[74243]: pgmap v971: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 2.5 KiB/s wr, 33 op/s
Oct 11 04:53:35 compute-0 nova_compute[259400]: 2025-10-11 04:53:35.413 2 DEBUG nova.virt.driver [None req-a1f2b475-2e76-45d6-8bbb-14ac83f8002f - - - - - -] Emitting event <LifecycleEvent: 1760158415.4135509, 047de729-a62a-46a9-a27d-e7f184d2ecd0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:53:35 compute-0 nova_compute[259400]: 2025-10-11 04:53:35.416 2 INFO nova.compute.manager [None req-a1f2b475-2e76-45d6-8bbb-14ac83f8002f - - - - - -] [instance: 047de729-a62a-46a9-a27d-e7f184d2ecd0] VM Resumed (Lifecycle Event)
Oct 11 04:53:35 compute-0 nova_compute[259400]: 2025-10-11 04:53:35.422 2 DEBUG nova.compute.manager [None req-77642b44-bc40-4dae-ad33-8a17a94db169 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] [instance: 047de729-a62a-46a9-a27d-e7f184d2ecd0] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 04:53:35 compute-0 nova_compute[259400]: 2025-10-11 04:53:35.422 2 DEBUG nova.virt.libvirt.driver [None req-77642b44-bc40-4dae-ad33-8a17a94db169 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] [instance: 047de729-a62a-46a9-a27d-e7f184d2ecd0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 04:53:35 compute-0 nova_compute[259400]: 2025-10-11 04:53:35.427 2 INFO nova.virt.libvirt.driver [-] [instance: 047de729-a62a-46a9-a27d-e7f184d2ecd0] Instance spawned successfully.
Oct 11 04:53:35 compute-0 nova_compute[259400]: 2025-10-11 04:53:35.428 2 DEBUG nova.virt.libvirt.driver [None req-77642b44-bc40-4dae-ad33-8a17a94db169 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] [instance: 047de729-a62a-46a9-a27d-e7f184d2ecd0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 04:53:35 compute-0 nova_compute[259400]: 2025-10-11 04:53:35.472 2 DEBUG nova.compute.manager [None req-a1f2b475-2e76-45d6-8bbb-14ac83f8002f - - - - - -] [instance: 047de729-a62a-46a9-a27d-e7f184d2ecd0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:53:35 compute-0 nova_compute[259400]: 2025-10-11 04:53:35.476 2 DEBUG nova.compute.manager [None req-a1f2b475-2e76-45d6-8bbb-14ac83f8002f - - - - - -] [instance: 047de729-a62a-46a9-a27d-e7f184d2ecd0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 04:53:35 compute-0 nova_compute[259400]: 2025-10-11 04:53:35.508 2 INFO nova.compute.manager [None req-a1f2b475-2e76-45d6-8bbb-14ac83f8002f - - - - - -] [instance: 047de729-a62a-46a9-a27d-e7f184d2ecd0] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 04:53:35 compute-0 nova_compute[259400]: 2025-10-11 04:53:35.509 2 DEBUG nova.virt.driver [None req-a1f2b475-2e76-45d6-8bbb-14ac83f8002f - - - - - -] Emitting event <LifecycleEvent: 1760158415.415104, 047de729-a62a-46a9-a27d-e7f184d2ecd0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:53:35 compute-0 nova_compute[259400]: 2025-10-11 04:53:35.510 2 INFO nova.compute.manager [None req-a1f2b475-2e76-45d6-8bbb-14ac83f8002f - - - - - -] [instance: 047de729-a62a-46a9-a27d-e7f184d2ecd0] VM Started (Lifecycle Event)
Oct 11 04:53:35 compute-0 nova_compute[259400]: 2025-10-11 04:53:35.520 2 DEBUG nova.virt.libvirt.driver [None req-77642b44-bc40-4dae-ad33-8a17a94db169 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] [instance: 047de729-a62a-46a9-a27d-e7f184d2ecd0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:53:35 compute-0 nova_compute[259400]: 2025-10-11 04:53:35.520 2 DEBUG nova.virt.libvirt.driver [None req-77642b44-bc40-4dae-ad33-8a17a94db169 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] [instance: 047de729-a62a-46a9-a27d-e7f184d2ecd0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:53:35 compute-0 nova_compute[259400]: 2025-10-11 04:53:35.521 2 DEBUG nova.virt.libvirt.driver [None req-77642b44-bc40-4dae-ad33-8a17a94db169 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] [instance: 047de729-a62a-46a9-a27d-e7f184d2ecd0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:53:35 compute-0 nova_compute[259400]: 2025-10-11 04:53:35.522 2 DEBUG nova.virt.libvirt.driver [None req-77642b44-bc40-4dae-ad33-8a17a94db169 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] [instance: 047de729-a62a-46a9-a27d-e7f184d2ecd0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:53:35 compute-0 nova_compute[259400]: 2025-10-11 04:53:35.523 2 DEBUG nova.virt.libvirt.driver [None req-77642b44-bc40-4dae-ad33-8a17a94db169 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] [instance: 047de729-a62a-46a9-a27d-e7f184d2ecd0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:53:35 compute-0 nova_compute[259400]: 2025-10-11 04:53:35.523 2 DEBUG nova.virt.libvirt.driver [None req-77642b44-bc40-4dae-ad33-8a17a94db169 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] [instance: 047de729-a62a-46a9-a27d-e7f184d2ecd0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:53:35 compute-0 nova_compute[259400]: 2025-10-11 04:53:35.534 2 DEBUG nova.compute.manager [None req-a1f2b475-2e76-45d6-8bbb-14ac83f8002f - - - - - -] [instance: 047de729-a62a-46a9-a27d-e7f184d2ecd0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:53:35 compute-0 nova_compute[259400]: 2025-10-11 04:53:35.538 2 DEBUG nova.compute.manager [None req-a1f2b475-2e76-45d6-8bbb-14ac83f8002f - - - - - -] [instance: 047de729-a62a-46a9-a27d-e7f184d2ecd0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 04:53:35 compute-0 nova_compute[259400]: 2025-10-11 04:53:35.565 2 INFO nova.compute.manager [None req-a1f2b475-2e76-45d6-8bbb-14ac83f8002f - - - - - -] [instance: 047de729-a62a-46a9-a27d-e7f184d2ecd0] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 04:53:35 compute-0 nova_compute[259400]: 2025-10-11 04:53:35.596 2 INFO nova.compute.manager [None req-77642b44-bc40-4dae-ad33-8a17a94db169 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] [instance: 047de729-a62a-46a9-a27d-e7f184d2ecd0] Took 3.77 seconds to spawn the instance on the hypervisor.
Oct 11 04:53:35 compute-0 nova_compute[259400]: 2025-10-11 04:53:35.597 2 DEBUG nova.compute.manager [None req-77642b44-bc40-4dae-ad33-8a17a94db169 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] [instance: 047de729-a62a-46a9-a27d-e7f184d2ecd0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:53:35 compute-0 nova_compute[259400]: 2025-10-11 04:53:35.672 2 INFO nova.compute.manager [None req-77642b44-bc40-4dae-ad33-8a17a94db169 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] [instance: 047de729-a62a-46a9-a27d-e7f184d2ecd0] Took 4.77 seconds to build instance.
Oct 11 04:53:35 compute-0 nova_compute[259400]: 2025-10-11 04:53:35.699 2 DEBUG oslo_concurrency.lockutils [None req-77642b44-bc40-4dae-ad33-8a17a94db169 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Lock "047de729-a62a-46a9-a27d-e7f184d2ecd0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.878s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:53:36 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v972: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 3.5 KiB/s wr, 50 op/s
Oct 11 04:53:36 compute-0 ceph-mon[74243]: pgmap v972: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 3.5 KiB/s wr, 50 op/s
Oct 11 04:53:37 compute-0 nova_compute[259400]: 2025-10-11 04:53:37.928 2 DEBUG nova.compute.manager [None req-61dbb5aa-c56d-4670-a0a8-7c34517e42fc 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] [instance: 047de729-a62a-46a9-a27d-e7f184d2ecd0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:53:37 compute-0 nova_compute[259400]: 2025-10-11 04:53:37.973 2 INFO nova.compute.manager [None req-61dbb5aa-c56d-4670-a0a8-7c34517e42fc 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] [instance: 047de729-a62a-46a9-a27d-e7f184d2ecd0] instance snapshotting
Oct 11 04:53:38 compute-0 nova_compute[259400]: 2025-10-11 04:53:38.286 2 INFO nova.virt.libvirt.driver [None req-61dbb5aa-c56d-4670-a0a8-7c34517e42fc 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] [instance: 047de729-a62a-46a9-a27d-e7f184d2ecd0] Beginning live snapshot process
Oct 11 04:53:38 compute-0 nova_compute[259400]: 2025-10-11 04:53:38.432 2 DEBUG nova.storage.rbd_utils [None req-61dbb5aa-c56d-4670-a0a8-7c34517e42fc 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] creating snapshot(de7b025b64ff4007af68b96917443423) on rbd image(047de729-a62a-46a9-a27d-e7f184d2ecd0_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 11 04:53:38 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e151 do_prune osdmap full prune enabled
Oct 11 04:53:38 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e152 e152: 3 total, 3 up, 3 in
Oct 11 04:53:38 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e152: 3 total, 3 up, 3 in
Oct 11 04:53:38 compute-0 nova_compute[259400]: 2025-10-11 04:53:38.594 2 DEBUG nova.storage.rbd_utils [None req-61dbb5aa-c56d-4670-a0a8-7c34517e42fc 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] cloning vms/047de729-a62a-46a9-a27d-e7f184d2ecd0_disk@de7b025b64ff4007af68b96917443423 to images/57a78780-5fcc-496e-904d-2982b206732c clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 11 04:53:38 compute-0 nova_compute[259400]: 2025-10-11 04:53:38.702 2 DEBUG nova.storage.rbd_utils [None req-61dbb5aa-c56d-4670-a0a8-7c34517e42fc 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] flattening images/57a78780-5fcc-496e-904d-2982b206732c flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 11 04:53:38 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v974: 305 pgs: 305 active+clean; 42 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 63 KiB/s rd, 21 KiB/s wr, 81 op/s
Oct 11 04:53:38 compute-0 nova_compute[259400]: 2025-10-11 04:53:38.842 2 DEBUG nova.storage.rbd_utils [None req-61dbb5aa-c56d-4670-a0a8-7c34517e42fc 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] removing snapshot(de7b025b64ff4007af68b96917443423) on rbd image(047de729-a62a-46a9-a27d-e7f184d2ecd0_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 11 04:53:39 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e152 do_prune osdmap full prune enabled
Oct 11 04:53:39 compute-0 ceph-mon[74243]: osdmap e152: 3 total, 3 up, 3 in
Oct 11 04:53:39 compute-0 ceph-mon[74243]: pgmap v974: 305 pgs: 305 active+clean; 42 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 63 KiB/s rd, 21 KiB/s wr, 81 op/s
Oct 11 04:53:39 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e153 e153: 3 total, 3 up, 3 in
Oct 11 04:53:39 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e153: 3 total, 3 up, 3 in
Oct 11 04:53:39 compute-0 nova_compute[259400]: 2025-10-11 04:53:39.601 2 DEBUG nova.storage.rbd_utils [None req-61dbb5aa-c56d-4670-a0a8-7c34517e42fc 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] creating snapshot(snap) on rbd image(57a78780-5fcc-496e-904d-2982b206732c) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 11 04:53:40 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:53:40 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e153 do_prune osdmap full prune enabled
Oct 11 04:53:40 compute-0 ceph-mon[74243]: osdmap e153: 3 total, 3 up, 3 in
Oct 11 04:53:40 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e154 e154: 3 total, 3 up, 3 in
Oct 11 04:53:40 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e154: 3 total, 3 up, 3 in
Oct 11 04:53:40 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v977: 305 pgs: 305 active+clean; 42 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 70 KiB/s rd, 27 KiB/s wr, 86 op/s
Oct 11 04:53:41 compute-0 ceph-mon[74243]: osdmap e154: 3 total, 3 up, 3 in
Oct 11 04:53:41 compute-0 ceph-mon[74243]: pgmap v977: 305 pgs: 305 active+clean; 42 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 70 KiB/s rd, 27 KiB/s wr, 86 op/s
Oct 11 04:53:42 compute-0 nova_compute[259400]: 2025-10-11 04:53:42.311 2 INFO nova.virt.libvirt.driver [None req-61dbb5aa-c56d-4670-a0a8-7c34517e42fc 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] [instance: 047de729-a62a-46a9-a27d-e7f184d2ecd0] Snapshot image upload complete
Oct 11 04:53:42 compute-0 nova_compute[259400]: 2025-10-11 04:53:42.312 2 INFO nova.compute.manager [None req-61dbb5aa-c56d-4670-a0a8-7c34517e42fc 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] [instance: 047de729-a62a-46a9-a27d-e7f184d2ecd0] Took 4.34 seconds to snapshot the instance on the hypervisor.
Oct 11 04:53:42 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v978: 305 pgs: 305 active+clean; 42 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 132 KiB/s rd, 30 KiB/s wr, 164 op/s
Oct 11 04:53:42 compute-0 ceph-mon[74243]: pgmap v978: 305 pgs: 305 active+clean; 42 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 132 KiB/s rd, 30 KiB/s wr, 164 op/s
Oct 11 04:53:43 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e154 do_prune osdmap full prune enabled
Oct 11 04:53:43 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e155 e155: 3 total, 3 up, 3 in
Oct 11 04:53:43 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e155: 3 total, 3 up, 3 in
Oct 11 04:53:44 compute-0 nova_compute[259400]: 2025-10-11 04:53:44.658 2 DEBUG oslo_concurrency.lockutils [None req-ad8efb1a-d4cf-4718-9008-b67d431f6412 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Acquiring lock "047de729-a62a-46a9-a27d-e7f184d2ecd0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:53:44 compute-0 nova_compute[259400]: 2025-10-11 04:53:44.659 2 DEBUG oslo_concurrency.lockutils [None req-ad8efb1a-d4cf-4718-9008-b67d431f6412 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Lock "047de729-a62a-46a9-a27d-e7f184d2ecd0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:53:44 compute-0 nova_compute[259400]: 2025-10-11 04:53:44.659 2 DEBUG oslo_concurrency.lockutils [None req-ad8efb1a-d4cf-4718-9008-b67d431f6412 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Acquiring lock "047de729-a62a-46a9-a27d-e7f184d2ecd0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:53:44 compute-0 nova_compute[259400]: 2025-10-11 04:53:44.660 2 DEBUG oslo_concurrency.lockutils [None req-ad8efb1a-d4cf-4718-9008-b67d431f6412 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Lock "047de729-a62a-46a9-a27d-e7f184d2ecd0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:53:44 compute-0 nova_compute[259400]: 2025-10-11 04:53:44.660 2 DEBUG oslo_concurrency.lockutils [None req-ad8efb1a-d4cf-4718-9008-b67d431f6412 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Lock "047de729-a62a-46a9-a27d-e7f184d2ecd0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:53:44 compute-0 nova_compute[259400]: 2025-10-11 04:53:44.662 2 INFO nova.compute.manager [None req-ad8efb1a-d4cf-4718-9008-b67d431f6412 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] [instance: 047de729-a62a-46a9-a27d-e7f184d2ecd0] Terminating instance
Oct 11 04:53:44 compute-0 nova_compute[259400]: 2025-10-11 04:53:44.664 2 DEBUG oslo_concurrency.lockutils [None req-ad8efb1a-d4cf-4718-9008-b67d431f6412 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Acquiring lock "refresh_cache-047de729-a62a-46a9-a27d-e7f184d2ecd0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:53:44 compute-0 nova_compute[259400]: 2025-10-11 04:53:44.664 2 DEBUG oslo_concurrency.lockutils [None req-ad8efb1a-d4cf-4718-9008-b67d431f6412 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Acquired lock "refresh_cache-047de729-a62a-46a9-a27d-e7f184d2ecd0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:53:44 compute-0 nova_compute[259400]: 2025-10-11 04:53:44.664 2 DEBUG nova.network.neutron [None req-ad8efb1a-d4cf-4718-9008-b67d431f6412 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] [instance: 047de729-a62a-46a9-a27d-e7f184d2ecd0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 04:53:44 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v980: 305 pgs: 305 active+clean; 42 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 92 KiB/s rd, 5.0 KiB/s wr, 118 op/s
Oct 11 04:53:44 compute-0 ceph-mon[74243]: osdmap e155: 3 total, 3 up, 3 in
Oct 11 04:53:44 compute-0 ceph-mon[74243]: pgmap v980: 305 pgs: 305 active+clean; 42 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 92 KiB/s rd, 5.0 KiB/s wr, 118 op/s
Oct 11 04:53:44 compute-0 nova_compute[259400]: 2025-10-11 04:53:44.902 2 DEBUG nova.network.neutron [None req-ad8efb1a-d4cf-4718-9008-b67d431f6412 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] [instance: 047de729-a62a-46a9-a27d-e7f184d2ecd0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 04:53:45 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:53:45 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e155 do_prune osdmap full prune enabled
Oct 11 04:53:45 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e156 e156: 3 total, 3 up, 3 in
Oct 11 04:53:45 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e156: 3 total, 3 up, 3 in
Oct 11 04:53:45 compute-0 nova_compute[259400]: 2025-10-11 04:53:45.249 2 DEBUG nova.network.neutron [None req-ad8efb1a-d4cf-4718-9008-b67d431f6412 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] [instance: 047de729-a62a-46a9-a27d-e7f184d2ecd0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:53:45 compute-0 nova_compute[259400]: 2025-10-11 04:53:45.277 2 DEBUG oslo_concurrency.lockutils [None req-ad8efb1a-d4cf-4718-9008-b67d431f6412 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Releasing lock "refresh_cache-047de729-a62a-46a9-a27d-e7f184d2ecd0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:53:45 compute-0 nova_compute[259400]: 2025-10-11 04:53:45.278 2 DEBUG nova.compute.manager [None req-ad8efb1a-d4cf-4718-9008-b67d431f6412 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] [instance: 047de729-a62a-46a9-a27d-e7f184d2ecd0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 04:53:45 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Deactivated successfully.
Oct 11 04:53:45 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Consumed 1.318s CPU time.
Oct 11 04:53:45 compute-0 systemd-machined[214394]: Machine qemu-2-instance-00000002 terminated.
Oct 11 04:53:45 compute-0 nova_compute[259400]: 2025-10-11 04:53:45.499 2 INFO nova.virt.libvirt.driver [-] [instance: 047de729-a62a-46a9-a27d-e7f184d2ecd0] Instance destroyed successfully.
Oct 11 04:53:45 compute-0 nova_compute[259400]: 2025-10-11 04:53:45.499 2 DEBUG nova.objects.instance [None req-ad8efb1a-d4cf-4718-9008-b67d431f6412 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Lazy-loading 'resources' on Instance uuid 047de729-a62a-46a9-a27d-e7f184d2ecd0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:53:46 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e156 do_prune osdmap full prune enabled
Oct 11 04:53:46 compute-0 ceph-mon[74243]: osdmap e156: 3 total, 3 up, 3 in
Oct 11 04:53:46 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e157 e157: 3 total, 3 up, 3 in
Oct 11 04:53:46 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e157: 3 total, 3 up, 3 in
Oct 11 04:53:46 compute-0 nova_compute[259400]: 2025-10-11 04:53:46.460 2 INFO nova.virt.libvirt.driver [None req-ad8efb1a-d4cf-4718-9008-b67d431f6412 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] [instance: 047de729-a62a-46a9-a27d-e7f184d2ecd0] Deleting instance files /var/lib/nova/instances/047de729-a62a-46a9-a27d-e7f184d2ecd0_del
Oct 11 04:53:46 compute-0 nova_compute[259400]: 2025-10-11 04:53:46.461 2 INFO nova.virt.libvirt.driver [None req-ad8efb1a-d4cf-4718-9008-b67d431f6412 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] [instance: 047de729-a62a-46a9-a27d-e7f184d2ecd0] Deletion of /var/lib/nova/instances/047de729-a62a-46a9-a27d-e7f184d2ecd0_del complete
Oct 11 04:53:46 compute-0 nova_compute[259400]: 2025-10-11 04:53:46.543 2 DEBUG nova.virt.libvirt.host [None req-ad8efb1a-d4cf-4718-9008-b67d431f6412 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754
Oct 11 04:53:46 compute-0 nova_compute[259400]: 2025-10-11 04:53:46.543 2 INFO nova.virt.libvirt.host [None req-ad8efb1a-d4cf-4718-9008-b67d431f6412 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] UEFI support detected
Oct 11 04:53:46 compute-0 nova_compute[259400]: 2025-10-11 04:53:46.546 2 INFO nova.compute.manager [None req-ad8efb1a-d4cf-4718-9008-b67d431f6412 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] [instance: 047de729-a62a-46a9-a27d-e7f184d2ecd0] Took 1.27 seconds to destroy the instance on the hypervisor.
Oct 11 04:53:46 compute-0 nova_compute[259400]: 2025-10-11 04:53:46.547 2 DEBUG oslo.service.loopingcall [None req-ad8efb1a-d4cf-4718-9008-b67d431f6412 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 04:53:46 compute-0 nova_compute[259400]: 2025-10-11 04:53:46.547 2 DEBUG nova.compute.manager [-] [instance: 047de729-a62a-46a9-a27d-e7f184d2ecd0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 04:53:46 compute-0 nova_compute[259400]: 2025-10-11 04:53:46.548 2 DEBUG nova.network.neutron [-] [instance: 047de729-a62a-46a9-a27d-e7f184d2ecd0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 04:53:46 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v983: 305 pgs: 305 active+clean; 42 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 117 KiB/s rd, 7.3 KiB/s wr, 157 op/s
Oct 11 04:53:46 compute-0 nova_compute[259400]: 2025-10-11 04:53:46.779 2 DEBUG nova.network.neutron [-] [instance: 047de729-a62a-46a9-a27d-e7f184d2ecd0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 04:53:46 compute-0 nova_compute[259400]: 2025-10-11 04:53:46.797 2 DEBUG nova.network.neutron [-] [instance: 047de729-a62a-46a9-a27d-e7f184d2ecd0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:53:46 compute-0 nova_compute[259400]: 2025-10-11 04:53:46.810 2 INFO nova.compute.manager [-] [instance: 047de729-a62a-46a9-a27d-e7f184d2ecd0] Took 0.26 seconds to deallocate network for instance.
Oct 11 04:53:46 compute-0 nova_compute[259400]: 2025-10-11 04:53:46.879 2 DEBUG oslo_concurrency.lockutils [None req-ad8efb1a-d4cf-4718-9008-b67d431f6412 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:53:46 compute-0 nova_compute[259400]: 2025-10-11 04:53:46.879 2 DEBUG oslo_concurrency.lockutils [None req-ad8efb1a-d4cf-4718-9008-b67d431f6412 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:53:46 compute-0 nova_compute[259400]: 2025-10-11 04:53:46.984 2 DEBUG oslo_concurrency.processutils [None req-ad8efb1a-d4cf-4718-9008-b67d431f6412 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:53:47 compute-0 ceph-mon[74243]: osdmap e157: 3 total, 3 up, 3 in
Oct 11 04:53:47 compute-0 ceph-mon[74243]: pgmap v983: 305 pgs: 305 active+clean; 42 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 117 KiB/s rd, 7.3 KiB/s wr, 157 op/s
Oct 11 04:53:47 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:53:47 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1316146336' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:53:47 compute-0 nova_compute[259400]: 2025-10-11 04:53:47.435 2 DEBUG oslo_concurrency.processutils [None req-ad8efb1a-d4cf-4718-9008-b67d431f6412 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:53:47 compute-0 nova_compute[259400]: 2025-10-11 04:53:47.441 2 DEBUG nova.compute.provider_tree [None req-ad8efb1a-d4cf-4718-9008-b67d431f6412 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Inventory has not changed in ProviderTree for provider: 1f05a244-23b6-4149-9b5a-a525e5860d18 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:53:47 compute-0 nova_compute[259400]: 2025-10-11 04:53:47.471 2 DEBUG nova.scheduler.client.report [None req-ad8efb1a-d4cf-4718-9008-b67d431f6412 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Inventory has not changed for provider 1f05a244-23b6-4149-9b5a-a525e5860d18 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:53:47 compute-0 nova_compute[259400]: 2025-10-11 04:53:47.512 2 DEBUG oslo_concurrency.lockutils [None req-ad8efb1a-d4cf-4718-9008-b67d431f6412 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.632s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:53:47 compute-0 nova_compute[259400]: 2025-10-11 04:53:47.550 2 INFO nova.scheduler.client.report [None req-ad8efb1a-d4cf-4718-9008-b67d431f6412 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Deleted allocations for instance 047de729-a62a-46a9-a27d-e7f184d2ecd0
Oct 11 04:53:47 compute-0 nova_compute[259400]: 2025-10-11 04:53:47.681 2 DEBUG oslo_concurrency.lockutils [None req-ad8efb1a-d4cf-4718-9008-b67d431f6412 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Lock "047de729-a62a-46a9-a27d-e7f184d2ecd0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.022s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:53:48 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/1316146336' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:53:48 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v984: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 120 KiB/s rd, 6.0 KiB/s wr, 161 op/s
Oct 11 04:53:49 compute-0 ceph-mon[74243]: pgmap v984: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 120 KiB/s rd, 6.0 KiB/s wr, 161 op/s
Oct 11 04:53:49 compute-0 nova_compute[259400]: 2025-10-11 04:53:49.439 2 DEBUG oslo_concurrency.lockutils [None req-8ca8d317-6435-46a6-99bc-03beaa2df974 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Acquiring lock "cdb2b2db-db93-4afd-8188-bc2b971be88e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:53:49 compute-0 nova_compute[259400]: 2025-10-11 04:53:49.439 2 DEBUG oslo_concurrency.lockutils [None req-8ca8d317-6435-46a6-99bc-03beaa2df974 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Lock "cdb2b2db-db93-4afd-8188-bc2b971be88e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:53:49 compute-0 nova_compute[259400]: 2025-10-11 04:53:49.440 2 DEBUG oslo_concurrency.lockutils [None req-8ca8d317-6435-46a6-99bc-03beaa2df974 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Acquiring lock "cdb2b2db-db93-4afd-8188-bc2b971be88e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:53:49 compute-0 nova_compute[259400]: 2025-10-11 04:53:49.440 2 DEBUG oslo_concurrency.lockutils [None req-8ca8d317-6435-46a6-99bc-03beaa2df974 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Lock "cdb2b2db-db93-4afd-8188-bc2b971be88e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:53:49 compute-0 nova_compute[259400]: 2025-10-11 04:53:49.441 2 DEBUG oslo_concurrency.lockutils [None req-8ca8d317-6435-46a6-99bc-03beaa2df974 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Lock "cdb2b2db-db93-4afd-8188-bc2b971be88e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:53:49 compute-0 nova_compute[259400]: 2025-10-11 04:53:49.442 2 INFO nova.compute.manager [None req-8ca8d317-6435-46a6-99bc-03beaa2df974 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] [instance: cdb2b2db-db93-4afd-8188-bc2b971be88e] Terminating instance
Oct 11 04:53:49 compute-0 nova_compute[259400]: 2025-10-11 04:53:49.444 2 DEBUG oslo_concurrency.lockutils [None req-8ca8d317-6435-46a6-99bc-03beaa2df974 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Acquiring lock "refresh_cache-cdb2b2db-db93-4afd-8188-bc2b971be88e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:53:49 compute-0 nova_compute[259400]: 2025-10-11 04:53:49.445 2 DEBUG oslo_concurrency.lockutils [None req-8ca8d317-6435-46a6-99bc-03beaa2df974 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Acquired lock "refresh_cache-cdb2b2db-db93-4afd-8188-bc2b971be88e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:53:49 compute-0 nova_compute[259400]: 2025-10-11 04:53:49.445 2 DEBUG nova.network.neutron [None req-8ca8d317-6435-46a6-99bc-03beaa2df974 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] [instance: cdb2b2db-db93-4afd-8188-bc2b971be88e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 04:53:49 compute-0 nova_compute[259400]: 2025-10-11 04:53:49.834 2 DEBUG nova.network.neutron [None req-8ca8d317-6435-46a6-99bc-03beaa2df974 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] [instance: cdb2b2db-db93-4afd-8188-bc2b971be88e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 04:53:50 compute-0 nova_compute[259400]: 2025-10-11 04:53:50.115 2 DEBUG nova.network.neutron [None req-8ca8d317-6435-46a6-99bc-03beaa2df974 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] [instance: cdb2b2db-db93-4afd-8188-bc2b971be88e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:53:50 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:53:50 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e157 do_prune osdmap full prune enabled
Oct 11 04:53:50 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e158 e158: 3 total, 3 up, 3 in
Oct 11 04:53:50 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e158: 3 total, 3 up, 3 in
Oct 11 04:53:50 compute-0 nova_compute[259400]: 2025-10-11 04:53:50.138 2 DEBUG oslo_concurrency.lockutils [None req-8ca8d317-6435-46a6-99bc-03beaa2df974 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Releasing lock "refresh_cache-cdb2b2db-db93-4afd-8188-bc2b971be88e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:53:50 compute-0 nova_compute[259400]: 2025-10-11 04:53:50.139 2 DEBUG nova.compute.manager [None req-8ca8d317-6435-46a6-99bc-03beaa2df974 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] [instance: cdb2b2db-db93-4afd-8188-bc2b971be88e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 04:53:50 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Deactivated successfully.
Oct 11 04:53:50 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Consumed 1.395s CPU time.
Oct 11 04:53:50 compute-0 systemd-machined[214394]: Machine qemu-1-instance-00000001 terminated.
Oct 11 04:53:50 compute-0 nova_compute[259400]: 2025-10-11 04:53:50.359 2 INFO nova.virt.libvirt.driver [-] [instance: cdb2b2db-db93-4afd-8188-bc2b971be88e] Instance destroyed successfully.
Oct 11 04:53:50 compute-0 nova_compute[259400]: 2025-10-11 04:53:50.360 2 DEBUG nova.objects.instance [None req-8ca8d317-6435-46a6-99bc-03beaa2df974 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Lazy-loading 'resources' on Instance uuid cdb2b2db-db93-4afd-8188-bc2b971be88e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:53:50 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v986: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 114 KiB/s rd, 5.7 KiB/s wr, 151 op/s
Oct 11 04:53:50 compute-0 nova_compute[259400]: 2025-10-11 04:53:50.805 2 INFO nova.virt.libvirt.driver [None req-8ca8d317-6435-46a6-99bc-03beaa2df974 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] [instance: cdb2b2db-db93-4afd-8188-bc2b971be88e] Deleting instance files /var/lib/nova/instances/cdb2b2db-db93-4afd-8188-bc2b971be88e_del
Oct 11 04:53:50 compute-0 nova_compute[259400]: 2025-10-11 04:53:50.806 2 INFO nova.virt.libvirt.driver [None req-8ca8d317-6435-46a6-99bc-03beaa2df974 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] [instance: cdb2b2db-db93-4afd-8188-bc2b971be88e] Deletion of /var/lib/nova/instances/cdb2b2db-db93-4afd-8188-bc2b971be88e_del complete
Oct 11 04:53:50 compute-0 nova_compute[259400]: 2025-10-11 04:53:50.865 2 INFO nova.compute.manager [None req-8ca8d317-6435-46a6-99bc-03beaa2df974 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] [instance: cdb2b2db-db93-4afd-8188-bc2b971be88e] Took 0.73 seconds to destroy the instance on the hypervisor.
Oct 11 04:53:50 compute-0 nova_compute[259400]: 2025-10-11 04:53:50.866 2 DEBUG oslo.service.loopingcall [None req-8ca8d317-6435-46a6-99bc-03beaa2df974 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 04:53:50 compute-0 nova_compute[259400]: 2025-10-11 04:53:50.867 2 DEBUG nova.compute.manager [-] [instance: cdb2b2db-db93-4afd-8188-bc2b971be88e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 04:53:50 compute-0 nova_compute[259400]: 2025-10-11 04:53:50.867 2 DEBUG nova.network.neutron [-] [instance: cdb2b2db-db93-4afd-8188-bc2b971be88e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 04:53:51 compute-0 ceph-mon[74243]: osdmap e158: 3 total, 3 up, 3 in
Oct 11 04:53:51 compute-0 ceph-mon[74243]: pgmap v986: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 114 KiB/s rd, 5.7 KiB/s wr, 151 op/s
Oct 11 04:53:51 compute-0 nova_compute[259400]: 2025-10-11 04:53:51.906 2 DEBUG nova.network.neutron [-] [instance: cdb2b2db-db93-4afd-8188-bc2b971be88e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 04:53:51 compute-0 nova_compute[259400]: 2025-10-11 04:53:51.928 2 DEBUG nova.network.neutron [-] [instance: cdb2b2db-db93-4afd-8188-bc2b971be88e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:53:51 compute-0 nova_compute[259400]: 2025-10-11 04:53:51.949 2 INFO nova.compute.manager [-] [instance: cdb2b2db-db93-4afd-8188-bc2b971be88e] Took 1.08 seconds to deallocate network for instance.
Oct 11 04:53:52 compute-0 nova_compute[259400]: 2025-10-11 04:53:52.210 2 INFO nova.compute.manager [None req-8ca8d317-6435-46a6-99bc-03beaa2df974 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] [instance: cdb2b2db-db93-4afd-8188-bc2b971be88e] Took 0.26 seconds to detach 1 volumes for instance.
Oct 11 04:53:52 compute-0 nova_compute[259400]: 2025-10-11 04:53:52.212 2 DEBUG nova.compute.manager [None req-8ca8d317-6435-46a6-99bc-03beaa2df974 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] [instance: cdb2b2db-db93-4afd-8188-bc2b971be88e] Deleting volume: 20e340cd-bd3f-4374-a8aa-31d1836505e7 _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217
Oct 11 04:53:52 compute-0 nova_compute[259400]: 2025-10-11 04:53:52.513 2 DEBUG oslo_concurrency.lockutils [None req-8ca8d317-6435-46a6-99bc-03beaa2df974 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:53:52 compute-0 nova_compute[259400]: 2025-10-11 04:53:52.514 2 DEBUG oslo_concurrency.lockutils [None req-8ca8d317-6435-46a6-99bc-03beaa2df974 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:53:52 compute-0 nova_compute[259400]: 2025-10-11 04:53:52.571 2 DEBUG oslo_concurrency.processutils [None req-8ca8d317-6435-46a6-99bc-03beaa2df974 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:53:52 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v987: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 94 KiB/s rd, 5.4 KiB/s wr, 126 op/s
Oct 11 04:53:52 compute-0 ceph-mon[74243]: pgmap v987: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 94 KiB/s rd, 5.4 KiB/s wr, 126 op/s
Oct 11 04:53:53 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:53:53 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3390378504' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:53:53 compute-0 nova_compute[259400]: 2025-10-11 04:53:53.019 2 DEBUG oslo_concurrency.processutils [None req-8ca8d317-6435-46a6-99bc-03beaa2df974 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:53:53 compute-0 nova_compute[259400]: 2025-10-11 04:53:53.028 2 DEBUG nova.compute.provider_tree [None req-8ca8d317-6435-46a6-99bc-03beaa2df974 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Inventory has not changed in ProviderTree for provider: 1f05a244-23b6-4149-9b5a-a525e5860d18 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:53:53 compute-0 nova_compute[259400]: 2025-10-11 04:53:53.068 2 DEBUG nova.scheduler.client.report [None req-8ca8d317-6435-46a6-99bc-03beaa2df974 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Inventory has not changed for provider 1f05a244-23b6-4149-9b5a-a525e5860d18 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:53:53 compute-0 nova_compute[259400]: 2025-10-11 04:53:53.100 2 DEBUG oslo_concurrency.lockutils [None req-8ca8d317-6435-46a6-99bc-03beaa2df974 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.586s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:53:53 compute-0 nova_compute[259400]: 2025-10-11 04:53:53.144 2 INFO nova.scheduler.client.report [None req-8ca8d317-6435-46a6-99bc-03beaa2df974 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Deleted allocations for instance cdb2b2db-db93-4afd-8188-bc2b971be88e
Oct 11 04:53:53 compute-0 nova_compute[259400]: 2025-10-11 04:53:53.222 2 DEBUG oslo_concurrency.lockutils [None req-8ca8d317-6435-46a6-99bc-03beaa2df974 2b58bbc4ab1b4516a6e6c539429b798d c3b70031321444a586c18d0460667752 - - default default] Lock "cdb2b2db-db93-4afd-8188-bc2b971be88e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.782s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:53:53 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e158 do_prune osdmap full prune enabled
Oct 11 04:53:53 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/3390378504' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:53:53 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e159 e159: 3 total, 3 up, 3 in
Oct 11 04:53:53 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e159: 3 total, 3 up, 3 in
Oct 11 04:53:54 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:53:54 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2164982652' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:53:54 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:53:54 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2164982652' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:53:54 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v989: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 102 KiB/s rd, 3.7 KiB/s wr, 128 op/s
Oct 11 04:53:54 compute-0 ceph-mon[74243]: osdmap e159: 3 total, 3 up, 3 in
Oct 11 04:53:54 compute-0 ceph-mon[74243]: from='client.? 192.168.122.10:0/2164982652' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:53:54 compute-0 ceph-mon[74243]: from='client.? 192.168.122.10:0/2164982652' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:53:54 compute-0 ceph-mon[74243]: pgmap v989: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 102 KiB/s rd, 3.7 KiB/s wr, 128 op/s
Oct 11 04:53:55 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:53:55 compute-0 podman[268796]: 2025-10-11 04:53:55.424108846 +0000 UTC m=+0.074927899 container health_status 981a12d55b51d26e636fa4bc8b08d383168dc6afe664f12473554b2b0c589967 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.build-date=20251009)
Oct 11 04:53:55 compute-0 podman[268795]: 2025-10-11 04:53:55.430896507 +0000 UTC m=+0.085225068 container health_status 4f5a84ad32cb899a70b226fbd9c8f419e9440a5ac99b188805a8bf8e9253a2d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_id=iscsid, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 04:53:55 compute-0 podman[268794]: 2025-10-11 04:53:55.444237861 +0000 UTC m=+0.100147332 container health_status 086abfbe8dbe9e4742c5d0e8540a7653c1c0a5c00ecda3b6e948040a718938cb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 11 04:53:56 compute-0 ceph-mgr[74542]: [balancer INFO root] Optimize plan auto_2025-10-11_04:53:56
Oct 11 04:53:56 compute-0 ceph-mgr[74542]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:53:56 compute-0 ceph-mgr[74542]: [balancer INFO root] do_upmap
Oct 11 04:53:56 compute-0 ceph-mgr[74542]: [balancer INFO root] pools ['.mgr', 'backups', 'vms', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.control', 'images', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.meta']
Oct 11 04:53:56 compute-0 ceph-mgr[74542]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:53:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:53:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:53:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:53:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:53:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:53:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:53:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:53:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:53:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:53:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:53:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:53:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:53:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:53:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:53:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:53:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:53:56 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:53:56.621 161813 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:88:88', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '96:43:b2:79:d5:95'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:53:56 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:53:56.622 161813 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 11 04:53:56 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v990: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 1.6 KiB/s wr, 48 op/s
Oct 11 04:53:56 compute-0 ceph-mon[74243]: pgmap v990: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 1.6 KiB/s wr, 48 op/s
Oct 11 04:53:58 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v991: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 1.6 KiB/s wr, 48 op/s
Oct 11 04:53:58 compute-0 ceph-mon[74243]: pgmap v991: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 1.6 KiB/s wr, 48 op/s
Oct 11 04:54:00 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:54:00 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e159 do_prune osdmap full prune enabled
Oct 11 04:54:00 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e160 e160: 3 total, 3 up, 3 in
Oct 11 04:54:00 compute-0 ceph-mon[74243]: log_channel(cluster) log [DBG] : osdmap e160: 3 total, 3 up, 3 in
Oct 11 04:54:00 compute-0 nova_compute[259400]: 2025-10-11 04:54:00.498 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760158425.4954166, 047de729-a62a-46a9-a27d-e7f184d2ecd0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:54:00 compute-0 nova_compute[259400]: 2025-10-11 04:54:00.498 2 INFO nova.compute.manager [-] [instance: 047de729-a62a-46a9-a27d-e7f184d2ecd0] VM Stopped (Lifecycle Event)
Oct 11 04:54:00 compute-0 nova_compute[259400]: 2025-10-11 04:54:00.523 2 DEBUG nova.compute.manager [None req-ce23b06a-a226-45e1-ad4f-6029940148ed - - - - - -] [instance: 047de729-a62a-46a9-a27d-e7f184d2ecd0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:54:00 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v993: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 895 B/s wr, 45 op/s
Oct 11 04:54:01 compute-0 ceph-mon[74243]: osdmap e160: 3 total, 3 up, 3 in
Oct 11 04:54:01 compute-0 ceph-mon[74243]: pgmap v993: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 895 B/s wr, 45 op/s
Oct 11 04:54:02 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:54:02.625 161813 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=2ff6420e-86e1-487c-bef9-adac80b75ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:54:02 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v994: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 4.5 KiB/s rd, 459 B/s wr, 7 op/s
Oct 11 04:54:02 compute-0 ceph-mon[74243]: pgmap v994: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 4.5 KiB/s rd, 459 B/s wr, 7 op/s
Oct 11 04:54:02 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:54:02 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3383147384' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:54:02 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:54:02 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3383147384' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:54:03 compute-0 ceph-mon[74243]: from='client.? 192.168.122.10:0/3383147384' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:54:03 compute-0 ceph-mon[74243]: from='client.? 192.168.122.10:0/3383147384' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:54:04 compute-0 podman[268856]: 2025-10-11 04:54:04.430696718 +0000 UTC m=+0.082785396 container health_status 7d738271425699243ade5f883e5c9759d51b96fd0ce562173990a66d2fbaffdf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 11 04:54:04 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v995: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s rd, 409 B/s wr, 6 op/s
Oct 11 04:54:04 compute-0 ceph-mon[74243]: pgmap v995: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s rd, 409 B/s wr, 6 op/s
Oct 11 04:54:05 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:54:05 compute-0 nova_compute[259400]: 2025-10-11 04:54:05.357 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760158430.3559942, cdb2b2db-db93-4afd-8188-bc2b971be88e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:54:05 compute-0 nova_compute[259400]: 2025-10-11 04:54:05.358 2 INFO nova.compute.manager [-] [instance: cdb2b2db-db93-4afd-8188-bc2b971be88e] VM Stopped (Lifecycle Event)
Oct 11 04:54:05 compute-0 nova_compute[259400]: 2025-10-11 04:54:05.394 2 DEBUG nova.compute.manager [None req-9c0dc29e-220d-4fff-85dc-cd730e9e5b77 - - - - - -] [instance: cdb2b2db-db93-4afd-8188-bc2b971be88e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:54:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:54:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:54:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:54:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:54:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 11 04:54:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:54:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 11 04:54:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:54:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:54:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:54:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 11 04:54:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:54:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 11 04:54:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:54:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:54:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:54:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:54:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:54:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:54:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:54:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:54:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:54:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 04:54:06 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v996: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 102 B/s wr, 3 op/s
Oct 11 04:54:06 compute-0 ceph-mon[74243]: pgmap v996: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 102 B/s wr, 3 op/s
Oct 11 04:54:08 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v997: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:54:08 compute-0 ceph-mon[74243]: pgmap v997: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:54:10 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:54:10 compute-0 sudo[268876]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:54:10 compute-0 sudo[268876]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:54:10 compute-0 sudo[268876]: pam_unix(sudo:session): session closed for user root
Oct 11 04:54:10 compute-0 sudo[268901]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:54:10 compute-0 sudo[268901]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:54:10 compute-0 sudo[268901]: pam_unix(sudo:session): session closed for user root
Oct 11 04:54:10 compute-0 sudo[268926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:54:10 compute-0 sudo[268926]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:54:10 compute-0 sudo[268926]: pam_unix(sudo:session): session closed for user root
Oct 11 04:54:10 compute-0 sudo[268951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 04:54:10 compute-0 sudo[268951]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:54:10 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v998: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:54:10 compute-0 ceph-mon[74243]: pgmap v998: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:54:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:54:11.017 161813 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:54:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:54:11.017 161813 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:54:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:54:11.018 161813 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:54:11 compute-0 sudo[268951]: pam_unix(sudo:session): session closed for user root
Oct 11 04:54:11 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:54:11 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:54:11 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:54:11 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:54:11 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:54:11 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:54:11 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev c799365e-672e-4374-aaac-cbed6fbda807 does not exist
Oct 11 04:54:11 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev f78a3110-f9a2-487b-abfb-6663307a2595 does not exist
Oct 11 04:54:11 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev 98e4b450-677e-4a2a-9b32-d94bfd4ed40b does not exist
Oct 11 04:54:11 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:54:11 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:54:11 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:54:11 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:54:11 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:54:11 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:54:11 compute-0 sudo[269007]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:54:11 compute-0 sudo[269007]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:54:11 compute-0 sudo[269007]: pam_unix(sudo:session): session closed for user root
Oct 11 04:54:11 compute-0 sudo[269032]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:54:11 compute-0 sudo[269032]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:54:11 compute-0 sudo[269032]: pam_unix(sudo:session): session closed for user root
Oct 11 04:54:11 compute-0 sudo[269057]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:54:11 compute-0 sudo[269057]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:54:11 compute-0 sudo[269057]: pam_unix(sudo:session): session closed for user root
Oct 11 04:54:11 compute-0 sudo[269082]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 04:54:11 compute-0 sudo[269082]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:54:11 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:54:11 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:54:11 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:54:11 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:54:11 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:54:11 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:54:11 compute-0 podman[269149]: 2025-10-11 04:54:11.853461465 +0000 UTC m=+0.041359948 container create 3acacb79a78210ac1b53339a9a94df49fd3f3519732a58f2a48531cd448ecc3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_rhodes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 11 04:54:11 compute-0 systemd[1]: Started libpod-conmon-3acacb79a78210ac1b53339a9a94df49fd3f3519732a58f2a48531cd448ecc3a.scope.
Oct 11 04:54:11 compute-0 podman[269149]: 2025-10-11 04:54:11.835582937 +0000 UTC m=+0.023481450 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:54:11 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:54:11 compute-0 podman[269149]: 2025-10-11 04:54:11.954018397 +0000 UTC m=+0.141916920 container init 3acacb79a78210ac1b53339a9a94df49fd3f3519732a58f2a48531cd448ecc3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_rhodes, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:54:11 compute-0 podman[269149]: 2025-10-11 04:54:11.96214869 +0000 UTC m=+0.150047173 container start 3acacb79a78210ac1b53339a9a94df49fd3f3519732a58f2a48531cd448ecc3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_rhodes, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:54:11 compute-0 podman[269149]: 2025-10-11 04:54:11.965470364 +0000 UTC m=+0.153368907 container attach 3acacb79a78210ac1b53339a9a94df49fd3f3519732a58f2a48531cd448ecc3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:54:11 compute-0 upbeat_rhodes[269165]: 167 167
Oct 11 04:54:11 compute-0 podman[269149]: 2025-10-11 04:54:11.970240793 +0000 UTC m=+0.158139306 container died 3acacb79a78210ac1b53339a9a94df49fd3f3519732a58f2a48531cd448ecc3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_rhodes, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 11 04:54:11 compute-0 systemd[1]: libpod-3acacb79a78210ac1b53339a9a94df49fd3f3519732a58f2a48531cd448ecc3a.scope: Deactivated successfully.
Oct 11 04:54:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-eac217d4f8367e8d0b584f79dfe256d7a0e429664b902783605d3351355339c1-merged.mount: Deactivated successfully.
Oct 11 04:54:12 compute-0 podman[269149]: 2025-10-11 04:54:12.016683058 +0000 UTC m=+0.204581551 container remove 3acacb79a78210ac1b53339a9a94df49fd3f3519732a58f2a48531cd448ecc3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_rhodes, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:54:12 compute-0 systemd[1]: libpod-conmon-3acacb79a78210ac1b53339a9a94df49fd3f3519732a58f2a48531cd448ecc3a.scope: Deactivated successfully.
Oct 11 04:54:12 compute-0 podman[269189]: 2025-10-11 04:54:12.181706085 +0000 UTC m=+0.039451100 container create a2ccb62eff82280483ef189b4b6fae5473d8080cd1304e8bdc202e0df5f498ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_mclean, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:54:12 compute-0 systemd[1]: Started libpod-conmon-a2ccb62eff82280483ef189b4b6fae5473d8080cd1304e8bdc202e0df5f498ce.scope.
Oct 11 04:54:12 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:54:12 compute-0 podman[269189]: 2025-10-11 04:54:12.161871588 +0000 UTC m=+0.019616573 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:54:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdc51c47b3d9d68dd0c251012b8df5bef6d7edce5a6d5affc937371f2fbebfeb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:54:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdc51c47b3d9d68dd0c251012b8df5bef6d7edce5a6d5affc937371f2fbebfeb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:54:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdc51c47b3d9d68dd0c251012b8df5bef6d7edce5a6d5affc937371f2fbebfeb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:54:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdc51c47b3d9d68dd0c251012b8df5bef6d7edce5a6d5affc937371f2fbebfeb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:54:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdc51c47b3d9d68dd0c251012b8df5bef6d7edce5a6d5affc937371f2fbebfeb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:54:12 compute-0 podman[269189]: 2025-10-11 04:54:12.284971284 +0000 UTC m=+0.142716319 container init a2ccb62eff82280483ef189b4b6fae5473d8080cd1304e8bdc202e0df5f498ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_mclean, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:54:12 compute-0 podman[269189]: 2025-10-11 04:54:12.29559112 +0000 UTC m=+0.153336075 container start a2ccb62eff82280483ef189b4b6fae5473d8080cd1304e8bdc202e0df5f498ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_mclean, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default)
Oct 11 04:54:12 compute-0 podman[269189]: 2025-10-11 04:54:12.299059197 +0000 UTC m=+0.156804252 container attach a2ccb62eff82280483ef189b4b6fae5473d8080cd1304e8bdc202e0df5f498ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_mclean, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:54:12 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v999: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:54:12 compute-0 ceph-mon[74243]: pgmap v999: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:54:13 compute-0 festive_mclean[269205]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:54:13 compute-0 festive_mclean[269205]: --> relative data size: 1.0
Oct 11 04:54:13 compute-0 festive_mclean[269205]: --> All data devices are unavailable
Oct 11 04:54:13 compute-0 systemd[1]: libpod-a2ccb62eff82280483ef189b4b6fae5473d8080cd1304e8bdc202e0df5f498ce.scope: Deactivated successfully.
Oct 11 04:54:13 compute-0 podman[269189]: 2025-10-11 04:54:13.308056354 +0000 UTC m=+1.165801349 container died a2ccb62eff82280483ef189b4b6fae5473d8080cd1304e8bdc202e0df5f498ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_mclean, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 11 04:54:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-fdc51c47b3d9d68dd0c251012b8df5bef6d7edce5a6d5affc937371f2fbebfeb-merged.mount: Deactivated successfully.
Oct 11 04:54:13 compute-0 podman[269189]: 2025-10-11 04:54:13.385174827 +0000 UTC m=+1.242919792 container remove a2ccb62eff82280483ef189b4b6fae5473d8080cd1304e8bdc202e0df5f498ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_mclean, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 11 04:54:13 compute-0 systemd[1]: libpod-conmon-a2ccb62eff82280483ef189b4b6fae5473d8080cd1304e8bdc202e0df5f498ce.scope: Deactivated successfully.
Oct 11 04:54:13 compute-0 sudo[269082]: pam_unix(sudo:session): session closed for user root
Oct 11 04:54:13 compute-0 sudo[269246]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:54:13 compute-0 sudo[269246]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:54:13 compute-0 sudo[269246]: pam_unix(sudo:session): session closed for user root
Oct 11 04:54:13 compute-0 sudo[269271]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:54:13 compute-0 sudo[269271]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:54:13 compute-0 sudo[269271]: pam_unix(sudo:session): session closed for user root
Oct 11 04:54:13 compute-0 sudo[269296]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:54:13 compute-0 sudo[269296]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:54:13 compute-0 sudo[269296]: pam_unix(sudo:session): session closed for user root
Oct 11 04:54:13 compute-0 sudo[269321]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -- lvm list --format json
Oct 11 04:54:13 compute-0 sudo[269321]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:54:14 compute-0 podman[269386]: 2025-10-11 04:54:14.223283049 +0000 UTC m=+0.062442876 container create 99d01a4fc5404e71e595c47684e999ebc27656f61635cc29ede21b5c2679a4dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_raman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 11 04:54:14 compute-0 systemd[1]: Started libpod-conmon-99d01a4fc5404e71e595c47684e999ebc27656f61635cc29ede21b5c2679a4dc.scope.
Oct 11 04:54:14 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:54:14 compute-0 podman[269386]: 2025-10-11 04:54:14.205506623 +0000 UTC m=+0.044666430 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:54:14 compute-0 podman[269386]: 2025-10-11 04:54:14.320036215 +0000 UTC m=+0.159196092 container init 99d01a4fc5404e71e595c47684e999ebc27656f61635cc29ede21b5c2679a4dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_raman, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:54:14 compute-0 podman[269386]: 2025-10-11 04:54:14.330132098 +0000 UTC m=+0.169291875 container start 99d01a4fc5404e71e595c47684e999ebc27656f61635cc29ede21b5c2679a4dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_raman, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:54:14 compute-0 podman[269386]: 2025-10-11 04:54:14.334310072 +0000 UTC m=+0.173469940 container attach 99d01a4fc5404e71e595c47684e999ebc27656f61635cc29ede21b5c2679a4dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_raman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 11 04:54:14 compute-0 tender_raman[269402]: 167 167
Oct 11 04:54:14 compute-0 systemd[1]: libpod-99d01a4fc5404e71e595c47684e999ebc27656f61635cc29ede21b5c2679a4dc.scope: Deactivated successfully.
Oct 11 04:54:14 compute-0 podman[269386]: 2025-10-11 04:54:14.338547109 +0000 UTC m=+0.177706936 container died 99d01a4fc5404e71e595c47684e999ebc27656f61635cc29ede21b5c2679a4dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_raman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 11 04:54:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-b66fd699a59f514e82eb93be8d523ac338caa4adc9ee0288951cb20224204631-merged.mount: Deactivated successfully.
Oct 11 04:54:14 compute-0 podman[269386]: 2025-10-11 04:54:14.389230789 +0000 UTC m=+0.228390606 container remove 99d01a4fc5404e71e595c47684e999ebc27656f61635cc29ede21b5c2679a4dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_raman, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 11 04:54:14 compute-0 systemd[1]: libpod-conmon-99d01a4fc5404e71e595c47684e999ebc27656f61635cc29ede21b5c2679a4dc.scope: Deactivated successfully.
Oct 11 04:54:14 compute-0 podman[269427]: 2025-10-11 04:54:14.599969093 +0000 UTC m=+0.042240030 container create e157fc71ec2111b8eb65592b3d9085c948d7755c4bdf917696697cdf26ea2105 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Oct 11 04:54:14 compute-0 systemd[1]: Started libpod-conmon-e157fc71ec2111b8eb65592b3d9085c948d7755c4bdf917696697cdf26ea2105.scope.
Oct 11 04:54:14 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:54:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79978df736413ef2344a4e813a606bf5cccf68ba744f6adc3e54d2eb0585d92e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:54:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79978df736413ef2344a4e813a606bf5cccf68ba744f6adc3e54d2eb0585d92e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:54:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79978df736413ef2344a4e813a606bf5cccf68ba744f6adc3e54d2eb0585d92e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:54:14 compute-0 podman[269427]: 2025-10-11 04:54:14.579741666 +0000 UTC m=+0.022012603 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:54:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79978df736413ef2344a4e813a606bf5cccf68ba744f6adc3e54d2eb0585d92e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:54:14 compute-0 podman[269427]: 2025-10-11 04:54:14.689366504 +0000 UTC m=+0.131637491 container init e157fc71ec2111b8eb65592b3d9085c948d7755c4bdf917696697cdf26ea2105 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_jang, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True)
Oct 11 04:54:14 compute-0 podman[269427]: 2025-10-11 04:54:14.703634532 +0000 UTC m=+0.145905479 container start e157fc71ec2111b8eb65592b3d9085c948d7755c4bdf917696697cdf26ea2105 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_jang, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:54:14 compute-0 podman[269427]: 2025-10-11 04:54:14.708449683 +0000 UTC m=+0.150720620 container attach e157fc71ec2111b8eb65592b3d9085c948d7755c4bdf917696697cdf26ea2105 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_jang, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 11 04:54:14 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1000: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:54:14 compute-0 ceph-mon[74243]: pgmap v1000: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:54:15 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:54:15 compute-0 gallant_jang[269444]: {
Oct 11 04:54:15 compute-0 gallant_jang[269444]:     "0": [
Oct 11 04:54:15 compute-0 gallant_jang[269444]:         {
Oct 11 04:54:15 compute-0 gallant_jang[269444]:             "devices": [
Oct 11 04:54:15 compute-0 gallant_jang[269444]:                 "/dev/loop3"
Oct 11 04:54:15 compute-0 gallant_jang[269444]:             ],
Oct 11 04:54:15 compute-0 gallant_jang[269444]:             "lv_name": "ceph_lv0",
Oct 11 04:54:15 compute-0 gallant_jang[269444]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:54:15 compute-0 gallant_jang[269444]:             "lv_size": "21470642176",
Oct 11 04:54:15 compute-0 gallant_jang[269444]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29ed28f5-c2da-4c6f-bb64-dc7391248f4a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:54:15 compute-0 gallant_jang[269444]:             "lv_uuid": "SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf",
Oct 11 04:54:15 compute-0 gallant_jang[269444]:             "name": "ceph_lv0",
Oct 11 04:54:15 compute-0 gallant_jang[269444]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:54:15 compute-0 gallant_jang[269444]:             "tags": {
Oct 11 04:54:15 compute-0 gallant_jang[269444]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:54:15 compute-0 gallant_jang[269444]:                 "ceph.block_uuid": "SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf",
Oct 11 04:54:15 compute-0 gallant_jang[269444]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:54:15 compute-0 gallant_jang[269444]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:54:15 compute-0 gallant_jang[269444]:                 "ceph.cluster_name": "ceph",
Oct 11 04:54:15 compute-0 gallant_jang[269444]:                 "ceph.crush_device_class": "",
Oct 11 04:54:15 compute-0 gallant_jang[269444]:                 "ceph.encrypted": "0",
Oct 11 04:54:15 compute-0 gallant_jang[269444]:                 "ceph.osd_fsid": "29ed28f5-c2da-4c6f-bb64-dc7391248f4a",
Oct 11 04:54:15 compute-0 gallant_jang[269444]:                 "ceph.osd_id": "0",
Oct 11 04:54:15 compute-0 gallant_jang[269444]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:54:15 compute-0 gallant_jang[269444]:                 "ceph.type": "block",
Oct 11 04:54:15 compute-0 gallant_jang[269444]:                 "ceph.vdo": "0"
Oct 11 04:54:15 compute-0 gallant_jang[269444]:             },
Oct 11 04:54:15 compute-0 gallant_jang[269444]:             "type": "block",
Oct 11 04:54:15 compute-0 gallant_jang[269444]:             "vg_name": "ceph_vg0"
Oct 11 04:54:15 compute-0 gallant_jang[269444]:         }
Oct 11 04:54:15 compute-0 gallant_jang[269444]:     ],
Oct 11 04:54:15 compute-0 gallant_jang[269444]:     "1": [
Oct 11 04:54:15 compute-0 gallant_jang[269444]:         {
Oct 11 04:54:15 compute-0 gallant_jang[269444]:             "devices": [
Oct 11 04:54:15 compute-0 gallant_jang[269444]:                 "/dev/loop4"
Oct 11 04:54:15 compute-0 gallant_jang[269444]:             ],
Oct 11 04:54:15 compute-0 gallant_jang[269444]:             "lv_name": "ceph_lv1",
Oct 11 04:54:15 compute-0 gallant_jang[269444]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:54:15 compute-0 gallant_jang[269444]:             "lv_size": "21470642176",
Oct 11 04:54:15 compute-0 gallant_jang[269444]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=235849fc-4683-43e5-9b6a-a0d6f8d1cee8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:54:15 compute-0 gallant_jang[269444]:             "lv_uuid": "N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol",
Oct 11 04:54:15 compute-0 gallant_jang[269444]:             "name": "ceph_lv1",
Oct 11 04:54:15 compute-0 gallant_jang[269444]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:54:15 compute-0 gallant_jang[269444]:             "tags": {
Oct 11 04:54:15 compute-0 gallant_jang[269444]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:54:15 compute-0 gallant_jang[269444]:                 "ceph.block_uuid": "N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol",
Oct 11 04:54:15 compute-0 gallant_jang[269444]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:54:15 compute-0 gallant_jang[269444]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:54:15 compute-0 gallant_jang[269444]:                 "ceph.cluster_name": "ceph",
Oct 11 04:54:15 compute-0 gallant_jang[269444]:                 "ceph.crush_device_class": "",
Oct 11 04:54:15 compute-0 gallant_jang[269444]:                 "ceph.encrypted": "0",
Oct 11 04:54:15 compute-0 gallant_jang[269444]:                 "ceph.osd_fsid": "235849fc-4683-43e5-9b6a-a0d6f8d1cee8",
Oct 11 04:54:15 compute-0 gallant_jang[269444]:                 "ceph.osd_id": "1",
Oct 11 04:54:15 compute-0 gallant_jang[269444]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:54:15 compute-0 gallant_jang[269444]:                 "ceph.type": "block",
Oct 11 04:54:15 compute-0 gallant_jang[269444]:                 "ceph.vdo": "0"
Oct 11 04:54:15 compute-0 gallant_jang[269444]:             },
Oct 11 04:54:15 compute-0 gallant_jang[269444]:             "type": "block",
Oct 11 04:54:15 compute-0 gallant_jang[269444]:             "vg_name": "ceph_vg1"
Oct 11 04:54:15 compute-0 gallant_jang[269444]:         }
Oct 11 04:54:15 compute-0 gallant_jang[269444]:     ],
Oct 11 04:54:15 compute-0 gallant_jang[269444]:     "2": [
Oct 11 04:54:15 compute-0 gallant_jang[269444]:         {
Oct 11 04:54:15 compute-0 gallant_jang[269444]:             "devices": [
Oct 11 04:54:15 compute-0 gallant_jang[269444]:                 "/dev/loop5"
Oct 11 04:54:15 compute-0 gallant_jang[269444]:             ],
Oct 11 04:54:15 compute-0 gallant_jang[269444]:             "lv_name": "ceph_lv2",
Oct 11 04:54:15 compute-0 gallant_jang[269444]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:54:15 compute-0 gallant_jang[269444]:             "lv_size": "21470642176",
Oct 11 04:54:15 compute-0 gallant_jang[269444]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=30486846-2cc7-4787-8e36-0fb42ef328c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:54:15 compute-0 gallant_jang[269444]:             "lv_uuid": "HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a",
Oct 11 04:54:15 compute-0 gallant_jang[269444]:             "name": "ceph_lv2",
Oct 11 04:54:15 compute-0 gallant_jang[269444]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:54:15 compute-0 gallant_jang[269444]:             "tags": {
Oct 11 04:54:15 compute-0 gallant_jang[269444]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:54:15 compute-0 gallant_jang[269444]:                 "ceph.block_uuid": "HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a",
Oct 11 04:54:15 compute-0 gallant_jang[269444]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:54:15 compute-0 gallant_jang[269444]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:54:15 compute-0 gallant_jang[269444]:                 "ceph.cluster_name": "ceph",
Oct 11 04:54:15 compute-0 gallant_jang[269444]:                 "ceph.crush_device_class": "",
Oct 11 04:54:15 compute-0 gallant_jang[269444]:                 "ceph.encrypted": "0",
Oct 11 04:54:15 compute-0 gallant_jang[269444]:                 "ceph.osd_fsid": "30486846-2cc7-4787-8e36-0fb42ef328c5",
Oct 11 04:54:15 compute-0 gallant_jang[269444]:                 "ceph.osd_id": "2",
Oct 11 04:54:15 compute-0 gallant_jang[269444]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:54:15 compute-0 gallant_jang[269444]:                 "ceph.type": "block",
Oct 11 04:54:15 compute-0 gallant_jang[269444]:                 "ceph.vdo": "0"
Oct 11 04:54:15 compute-0 gallant_jang[269444]:             },
Oct 11 04:54:15 compute-0 gallant_jang[269444]:             "type": "block",
Oct 11 04:54:15 compute-0 gallant_jang[269444]:             "vg_name": "ceph_vg2"
Oct 11 04:54:15 compute-0 gallant_jang[269444]:         }
Oct 11 04:54:15 compute-0 gallant_jang[269444]:     ]
Oct 11 04:54:15 compute-0 gallant_jang[269444]: }
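The JSON block above is the output of "ceph-volume lvm list --format json", run by cephadm inside the short-lived gallant_jang container: each top-level key is an OSD id, and the LV tags carry that OSD's fsid, cluster fsid, crush class and encryption state. A minimal sketch of consuming that inventory follows; the filename lvm_list.json is a hypothetical dump of the output above, not something the deployment writes.

    # Minimal sketch (hypothetical input file): walk the ceph-volume "lvm list"
    # JSON shown above, keyed by OSD id, and print the backing logical volume
    # and OSD fsid for each entry.
    import json

    with open("lvm_list.json") as fh:
        osds = json.load(fh)

    for osd_id, lvs in sorted(osds.items()):
        for lv in lvs:
            tags = lv.get("tags", {})
            print(f"osd.{osd_id}: lv={lv['lv_path']}"
                  f" fsid={tags.get('ceph.osd_fsid')}"
                  f" encrypted={tags.get('ceph.encrypted')}")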
Oct 11 04:54:15 compute-0 systemd[1]: libpod-e157fc71ec2111b8eb65592b3d9085c948d7755c4bdf917696697cdf26ea2105.scope: Deactivated successfully.
Oct 11 04:54:15 compute-0 podman[269453]: 2025-10-11 04:54:15.503312571 +0000 UTC m=+0.035697666 container died e157fc71ec2111b8eb65592b3d9085c948d7755c4bdf917696697cdf26ea2105 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_jang, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:54:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-79978df736413ef2344a4e813a606bf5cccf68ba744f6adc3e54d2eb0585d92e-merged.mount: Deactivated successfully.
Oct 11 04:54:15 compute-0 podman[269453]: 2025-10-11 04:54:15.568002823 +0000 UTC m=+0.100387908 container remove e157fc71ec2111b8eb65592b3d9085c948d7755c4bdf917696697cdf26ea2105 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_jang, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:54:15 compute-0 systemd[1]: libpod-conmon-e157fc71ec2111b8eb65592b3d9085c948d7755c4bdf917696697cdf26ea2105.scope: Deactivated successfully.
Oct 11 04:54:15 compute-0 sudo[269321]: pam_unix(sudo:session): session closed for user root
Oct 11 04:54:15 compute-0 sudo[269468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:54:15 compute-0 sudo[269468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:54:15 compute-0 sudo[269468]: pam_unix(sudo:session): session closed for user root
Oct 11 04:54:15 compute-0 sudo[269493]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:54:15 compute-0 sudo[269493]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:54:15 compute-0 sudo[269493]: pam_unix(sudo:session): session closed for user root
Oct 11 04:54:15 compute-0 sudo[269518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:54:15 compute-0 sudo[269518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:54:15 compute-0 sudo[269518]: pam_unix(sudo:session): session closed for user root
Oct 11 04:54:15 compute-0 sudo[269543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -- raw list --format json
Oct 11 04:54:15 compute-0 sudo[269543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:54:16 compute-0 nova_compute[259400]: 2025-10-11 04:54:16.192 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:54:16 compute-0 nova_compute[259400]: 2025-10-11 04:54:16.216 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:54:16 compute-0 nova_compute[259400]: 2025-10-11 04:54:16.217 2 DEBUG nova.compute.manager [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 04:54:16 compute-0 nova_compute[259400]: 2025-10-11 04:54:16.217 2 DEBUG nova.compute.manager [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 11 04:54:16 compute-0 nova_compute[259400]: 2025-10-11 04:54:16.233 2 DEBUG nova.compute.manager [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 11 04:54:16 compute-0 podman[269609]: 2025-10-11 04:54:16.439200615 +0000 UTC m=+0.072436717 container create 590dc5ac8fcc063ad978d79d616697d1113413674c704efeeba59ba890222bf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_rosalind, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:54:16 compute-0 systemd[1]: Started libpod-conmon-590dc5ac8fcc063ad978d79d616697d1113413674c704efeeba59ba890222bf5.scope.
Oct 11 04:54:16 compute-0 podman[269609]: 2025-10-11 04:54:16.409683375 +0000 UTC m=+0.042919547 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:54:16 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:54:16 compute-0 podman[269609]: 2025-10-11 04:54:16.542938306 +0000 UTC m=+0.176174468 container init 590dc5ac8fcc063ad978d79d616697d1113413674c704efeeba59ba890222bf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_rosalind, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 11 04:54:16 compute-0 podman[269609]: 2025-10-11 04:54:16.552959877 +0000 UTC m=+0.186195959 container start 590dc5ac8fcc063ad978d79d616697d1113413674c704efeeba59ba890222bf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_rosalind, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 11 04:54:16 compute-0 silly_rosalind[269626]: 167 167
Oct 11 04:54:16 compute-0 systemd[1]: libpod-590dc5ac8fcc063ad978d79d616697d1113413674c704efeeba59ba890222bf5.scope: Deactivated successfully.
Oct 11 04:54:16 compute-0 conmon[269626]: conmon 590dc5ac8fcc063ad978 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-590dc5ac8fcc063ad978d79d616697d1113413674c704efeeba59ba890222bf5.scope/container/memory.events
Oct 11 04:54:16 compute-0 podman[269609]: 2025-10-11 04:54:16.560216029 +0000 UTC m=+0.193452191 container attach 590dc5ac8fcc063ad978d79d616697d1113413674c704efeeba59ba890222bf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:54:16 compute-0 podman[269609]: 2025-10-11 04:54:16.561265815 +0000 UTC m=+0.194501937 container died 590dc5ac8fcc063ad978d79d616697d1113413674c704efeeba59ba890222bf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_rosalind, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:54:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-61db0a3d2f429c415fdaa33c5986a3270dfabf65fe8197431951f32ffa9dda07-merged.mount: Deactivated successfully.
Oct 11 04:54:16 compute-0 podman[269609]: 2025-10-11 04:54:16.61050686 +0000 UTC m=+0.243742972 container remove 590dc5ac8fcc063ad978d79d616697d1113413674c704efeeba59ba890222bf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_rosalind, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:54:16 compute-0 systemd[1]: libpod-conmon-590dc5ac8fcc063ad978d79d616697d1113413674c704efeeba59ba890222bf5.scope: Deactivated successfully.
Oct 11 04:54:16 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1001: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:54:16 compute-0 ceph-mon[74243]: pgmap v1001: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:54:16 compute-0 podman[269650]: 2025-10-11 04:54:16.863927483 +0000 UTC m=+0.073497623 container create ba2cfed1cb44f76a6d50edf17e04531ddd93767d8ba08b99c626f6ff2d93e49b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 04:54:16 compute-0 systemd[1]: Started libpod-conmon-ba2cfed1cb44f76a6d50edf17e04531ddd93767d8ba08b99c626f6ff2d93e49b.scope.
Oct 11 04:54:16 compute-0 podman[269650]: 2025-10-11 04:54:16.835437749 +0000 UTC m=+0.045007949 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:54:16 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:54:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/621eb8691d6cebe05d2a7cdb2e8953259009681a786e61af7308e97e7926e371/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:54:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/621eb8691d6cebe05d2a7cdb2e8953259009681a786e61af7308e97e7926e371/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:54:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/621eb8691d6cebe05d2a7cdb2e8953259009681a786e61af7308e97e7926e371/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:54:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/621eb8691d6cebe05d2a7cdb2e8953259009681a786e61af7308e97e7926e371/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:54:16 compute-0 podman[269650]: 2025-10-11 04:54:16.971640134 +0000 UTC m=+0.181210324 container init ba2cfed1cb44f76a6d50edf17e04531ddd93767d8ba08b99c626f6ff2d93e49b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_ishizaka, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:54:16 compute-0 podman[269650]: 2025-10-11 04:54:16.984552778 +0000 UTC m=+0.194122928 container start ba2cfed1cb44f76a6d50edf17e04531ddd93767d8ba08b99c626f6ff2d93e49b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_ishizaka, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:54:16 compute-0 podman[269650]: 2025-10-11 04:54:16.988773153 +0000 UTC m=+0.198343293 container attach ba2cfed1cb44f76a6d50edf17e04531ddd93767d8ba08b99c626f6ff2d93e49b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_ishizaka, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:54:17 compute-0 nova_compute[259400]: 2025-10-11 04:54:17.196 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:54:17 compute-0 loving_ishizaka[269666]: {
Oct 11 04:54:17 compute-0 loving_ishizaka[269666]:     "235849fc-4683-43e5-9b6a-a0d6f8d1cee8": {
Oct 11 04:54:17 compute-0 loving_ishizaka[269666]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:54:17 compute-0 loving_ishizaka[269666]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:54:17 compute-0 loving_ishizaka[269666]:         "osd_id": 1,
Oct 11 04:54:17 compute-0 loving_ishizaka[269666]:         "osd_uuid": "235849fc-4683-43e5-9b6a-a0d6f8d1cee8",
Oct 11 04:54:17 compute-0 loving_ishizaka[269666]:         "type": "bluestore"
Oct 11 04:54:17 compute-0 loving_ishizaka[269666]:     },
Oct 11 04:54:17 compute-0 loving_ishizaka[269666]:     "29ed28f5-c2da-4c6f-bb64-dc7391248f4a": {
Oct 11 04:54:17 compute-0 loving_ishizaka[269666]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:54:17 compute-0 loving_ishizaka[269666]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:54:17 compute-0 loving_ishizaka[269666]:         "osd_id": 0,
Oct 11 04:54:17 compute-0 loving_ishizaka[269666]:         "osd_uuid": "29ed28f5-c2da-4c6f-bb64-dc7391248f4a",
Oct 11 04:54:17 compute-0 loving_ishizaka[269666]:         "type": "bluestore"
Oct 11 04:54:17 compute-0 loving_ishizaka[269666]:     },
Oct 11 04:54:17 compute-0 loving_ishizaka[269666]:     "30486846-2cc7-4787-8e36-0fb42ef328c5": {
Oct 11 04:54:17 compute-0 loving_ishizaka[269666]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:54:17 compute-0 loving_ishizaka[269666]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:54:17 compute-0 loving_ishizaka[269666]:         "osd_id": 2,
Oct 11 04:54:17 compute-0 loving_ishizaka[269666]:         "osd_uuid": "30486846-2cc7-4787-8e36-0fb42ef328c5",
Oct 11 04:54:17 compute-0 loving_ishizaka[269666]:         "type": "bluestore"
Oct 11 04:54:17 compute-0 loving_ishizaka[269666]:     }
Oct 11 04:54:17 compute-0 loving_ishizaka[269666]: }
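This second JSON block, keyed by OSD uuid, is the output of the "ceph-volume raw list --format json" call that cephadm launched through the sudo'd wrapper at 04:54:15 (the loving_ishizaka container). A minimal sketch of reconciling it with the earlier "lvm list" view of the same three OSDs; raw_list.json and lvm_list.json are hypothetical dumps of the two outputs.

    # Minimal sketch (hypothetical input files): cross-check the "raw list"
    # output above, keyed by osd_uuid, against the earlier "lvm list" output,
    # keyed by osd_id, to confirm each OSD's device mapping.
    import json

    with open("raw_list.json") as fh:
        raw = json.load(fh)
    with open("lvm_list.json") as fh:
        lvm = json.load(fh)

    by_fsid = {lv["tags"]["ceph.osd_fsid"]: lv
               for lvs in lvm.values() for lv in lvs}

    for osd_uuid, entry in raw.items():
        lv = by_fsid.get(osd_uuid)
        print(f"osd.{entry['osd_id']} ({entry['type']}):"
              f" device={entry['device']}"
              f" lv={lv['lv_path'] if lv else 'not found'}")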
Oct 11 04:54:18 compute-0 systemd[1]: libpod-ba2cfed1cb44f76a6d50edf17e04531ddd93767d8ba08b99c626f6ff2d93e49b.scope: Deactivated successfully.
Oct 11 04:54:18 compute-0 podman[269650]: 2025-10-11 04:54:18.036770447 +0000 UTC m=+1.246340547 container died ba2cfed1cb44f76a6d50edf17e04531ddd93767d8ba08b99c626f6ff2d93e49b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_ishizaka, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 11 04:54:18 compute-0 systemd[1]: libpod-ba2cfed1cb44f76a6d50edf17e04531ddd93767d8ba08b99c626f6ff2d93e49b.scope: Consumed 1.059s CPU time.
Oct 11 04:54:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-621eb8691d6cebe05d2a7cdb2e8953259009681a786e61af7308e97e7926e371-merged.mount: Deactivated successfully.
Oct 11 04:54:18 compute-0 podman[269650]: 2025-10-11 04:54:18.102872974 +0000 UTC m=+1.312443064 container remove ba2cfed1cb44f76a6d50edf17e04531ddd93767d8ba08b99c626f6ff2d93e49b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_ishizaka, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 11 04:54:18 compute-0 systemd[1]: libpod-conmon-ba2cfed1cb44f76a6d50edf17e04531ddd93767d8ba08b99c626f6ff2d93e49b.scope: Deactivated successfully.
Oct 11 04:54:18 compute-0 sudo[269543]: pam_unix(sudo:session): session closed for user root
Oct 11 04:54:18 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:54:18 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:54:18 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:54:18 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:54:18 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev 0585b1e9-8409-4a28-b387-fb6ea7e2b386 does not exist
Oct 11 04:54:18 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev 3c98b130-324c-4e1a-8afa-263d8954820e does not exist
Oct 11 04:54:18 compute-0 nova_compute[259400]: 2025-10-11 04:54:18.195 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:54:18 compute-0 sudo[269713]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:54:18 compute-0 sudo[269713]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:54:18 compute-0 sudo[269713]: pam_unix(sudo:session): session closed for user root
Oct 11 04:54:18 compute-0 sudo[269738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 04:54:18 compute-0 sudo[269738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:54:18 compute-0 sudo[269738]: pam_unix(sudo:session): session closed for user root
Oct 11 04:54:18 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1002: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:54:19 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:54:19 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:54:19 compute-0 ceph-mon[74243]: pgmap v1002: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:54:19 compute-0 nova_compute[259400]: 2025-10-11 04:54:19.196 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:54:19 compute-0 nova_compute[259400]: 2025-10-11 04:54:19.197 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:54:19 compute-0 nova_compute[259400]: 2025-10-11 04:54:19.197 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:54:19 compute-0 nova_compute[259400]: 2025-10-11 04:54:19.198 2 DEBUG nova.compute.manager [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 04:54:19 compute-0 nova_compute[259400]: 2025-10-11 04:54:19.198 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:54:19 compute-0 nova_compute[259400]: 2025-10-11 04:54:19.231 2 DEBUG oslo_concurrency.lockutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:54:19 compute-0 nova_compute[259400]: 2025-10-11 04:54:19.232 2 DEBUG oslo_concurrency.lockutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:54:19 compute-0 nova_compute[259400]: 2025-10-11 04:54:19.232 2 DEBUG oslo_concurrency.lockutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:54:19 compute-0 nova_compute[259400]: 2025-10-11 04:54:19.233 2 DEBUG nova.compute.resource_tracker [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 04:54:19 compute-0 nova_compute[259400]: 2025-10-11 04:54:19.233 2 DEBUG oslo_concurrency.processutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:54:19 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:54:19 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2519039048' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:54:19 compute-0 nova_compute[259400]: 2025-10-11 04:54:19.719 2 DEBUG oslo_concurrency.processutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:54:19 compute-0 nova_compute[259400]: 2025-10-11 04:54:19.866 2 WARNING nova.virt.libvirt.driver [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 04:54:19 compute-0 nova_compute[259400]: 2025-10-11 04:54:19.867 2 DEBUG nova.compute.resource_tracker [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5088MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 04:54:19 compute-0 nova_compute[259400]: 2025-10-11 04:54:19.867 2 DEBUG oslo_concurrency.lockutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:54:19 compute-0 nova_compute[259400]: 2025-10-11 04:54:19.867 2 DEBUG oslo_concurrency.lockutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:54:19 compute-0 nova_compute[259400]: 2025-10-11 04:54:19.938 2 DEBUG nova.compute.resource_tracker [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 04:54:19 compute-0 nova_compute[259400]: 2025-10-11 04:54:19.939 2 DEBUG nova.compute.resource_tracker [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 04:54:19 compute-0 nova_compute[259400]: 2025-10-11 04:54:19.956 2 DEBUG oslo_concurrency.processutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:54:20 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:54:20 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/2519039048' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:54:20 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:54:20 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/603890329' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:54:20 compute-0 nova_compute[259400]: 2025-10-11 04:54:20.414 2 DEBUG oslo_concurrency.processutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:54:20 compute-0 nova_compute[259400]: 2025-10-11 04:54:20.419 2 DEBUG nova.compute.provider_tree [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Inventory has not changed in ProviderTree for provider: 1f05a244-23b6-4149-9b5a-a525e5860d18 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:54:20 compute-0 nova_compute[259400]: 2025-10-11 04:54:20.440 2 DEBUG nova.scheduler.client.report [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Inventory has not changed for provider 1f05a244-23b6-4149-9b5a-a525e5860d18 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:54:20 compute-0 nova_compute[259400]: 2025-10-11 04:54:20.467 2 DEBUG nova.compute.resource_tracker [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 04:54:20 compute-0 nova_compute[259400]: 2025-10-11 04:54:20.468 2 DEBUG oslo_concurrency.lockutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.600s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
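The resource-tracker audit above shells out twice to "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" to size the RBD-backed disk reported in the final resource view and the DISK_GB inventory (59 GB total). A minimal sketch of reading the cluster-wide totals from that command's JSON; the stats keys follow the usual ceph df layout and are not necessarily the exact fields nova consumes.

    # Minimal sketch: run the same ceph df command nova logs above and report
    # cluster totals in GiB. Key names assume the common ceph df JSON layout.
    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(out)["stats"]

    gib = 1024 ** 3
    print(f"total={stats['total_bytes'] / gib:.1f} GiB"
          f" avail={stats['total_avail_bytes'] / gib:.1f} GiB")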
Oct 11 04:54:20 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1003: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:54:21 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/603890329' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:54:21 compute-0 ceph-mon[74243]: pgmap v1003: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:54:22 compute-0 nova_compute[259400]: 2025-10-11 04:54:22.464 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:54:22 compute-0 nova_compute[259400]: 2025-10-11 04:54:22.464 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:54:22 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1004: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:54:22 compute-0 ceph-mon[74243]: pgmap v1004: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:54:24 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1005: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:54:24 compute-0 ceph-mon[74243]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Oct 11 04:54:24 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:54:24.820604) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 04:54:24 compute-0 ceph-mon[74243]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Oct 11 04:54:24 compute-0 ceph-mon[74243]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760158464820666, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1537, "num_deletes": 266, "total_data_size": 2065146, "memory_usage": 2099616, "flush_reason": "Manual Compaction"}
Oct 11 04:54:24 compute-0 ceph-mon[74243]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Oct 11 04:54:24 compute-0 ceph-mon[74243]: pgmap v1005: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:54:24 compute-0 ceph-mon[74243]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760158464834647, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 2026545, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19617, "largest_seqno": 21153, "table_properties": {"data_size": 2019251, "index_size": 4305, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 15899, "raw_average_key_size": 21, "raw_value_size": 2004445, "raw_average_value_size": 2651, "num_data_blocks": 191, "num_entries": 756, "num_filter_entries": 756, "num_deletions": 266, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760158365, "oldest_key_time": 1760158365, "file_creation_time": 1760158464, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7a520806-b9ee-4391-a2e1-17ca2b78e946", "db_session_id": "LQX577T3ABHDCRGGY4EM", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:54:24 compute-0 ceph-mon[74243]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 14110 microseconds, and 8765 cpu microseconds.
Oct 11 04:54:24 compute-0 ceph-mon[74243]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 04:54:24 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:54:24.834718) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 2026545 bytes OK
Oct 11 04:54:24 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:54:24.834750) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Oct 11 04:54:24 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:54:24.836784) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Oct 11 04:54:24 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:54:24.836808) EVENT_LOG_v1 {"time_micros": 1760158464836800, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 04:54:24 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:54:24.836834) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 04:54:24 compute-0 ceph-mon[74243]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 2058141, prev total WAL file size 2059298, number of live WAL files 2.
Oct 11 04:54:24 compute-0 ceph-mon[74243]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:54:24 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:54:24.837946) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Oct 11 04:54:24 compute-0 ceph-mon[74243]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 04:54:24 compute-0 ceph-mon[74243]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(1979KB)], [47(7181KB)]
Oct 11 04:54:24 compute-0 ceph-mon[74243]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760158464838009, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 9380453, "oldest_snapshot_seqno": -1}
Oct 11 04:54:24 compute-0 ceph-mon[74243]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 4389 keys, 7614982 bytes, temperature: kUnknown
Oct 11 04:54:24 compute-0 ceph-mon[74243]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760158464886639, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 7614982, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7583881, "index_size": 19023, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11013, "raw_key_size": 108711, "raw_average_key_size": 24, "raw_value_size": 7502658, "raw_average_value_size": 1709, "num_data_blocks": 793, "num_entries": 4389, "num_filter_entries": 4389, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760156706, "oldest_key_time": 0, "file_creation_time": 1760158464, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7a520806-b9ee-4391-a2e1-17ca2b78e946", "db_session_id": "LQX577T3ABHDCRGGY4EM", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:54:24 compute-0 ceph-mon[74243]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 04:54:24 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:54:24.886964) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 7614982 bytes
Oct 11 04:54:24 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:54:24.888314) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 192.5 rd, 156.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 7.0 +0.0 blob) out(7.3 +0.0 blob), read-write-amplify(8.4) write-amplify(3.8) OK, records in: 4923, records dropped: 534 output_compression: NoCompression
Oct 11 04:54:24 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:54:24.888380) EVENT_LOG_v1 {"time_micros": 1760158464888363, "job": 24, "event": "compaction_finished", "compaction_time_micros": 48720, "compaction_time_cpu_micros": 32135, "output_level": 6, "num_output_files": 1, "total_output_size": 7614982, "num_input_records": 4923, "num_output_records": 4389, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 04:54:24 compute-0 ceph-mon[74243]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:54:24 compute-0 ceph-mon[74243]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760158464889097, "job": 24, "event": "table_file_deletion", "file_number": 49}
Oct 11 04:54:24 compute-0 ceph-mon[74243]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:54:24 compute-0 ceph-mon[74243]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760158464891605, "job": 24, "event": "table_file_deletion", "file_number": 47}
Oct 11 04:54:24 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:54:24.837815) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:54:24 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:54:24.891697) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:54:24 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:54:24.891704) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:54:24 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:54:24.891707) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:54:24 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:54:24.891710) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:54:24 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:54:24.891712) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
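The JOB 24 summary's amplification figures follow from the byte counts logged above: the L0 input is the 2,026,545-byte flushed table (file 49), the L6 input is the remainder of input_data_size (file 47), and the output is the 7,614,982-byte file 50. A short check, assuming RocksDB divides by the upper-level (L0) input bytes when computing these ratios:

    # Re-derive write-amplify(3.8) and read-write-amplify(8.4) from the sizes
    # in the compaction log above. Using the L0 input bytes as the divisor is
    # an assumption about how RocksDB defines the ratios.
    l0_in = 2026545             # flushed L0 table, file 49
    l6_in = 9380453 - l0_in     # input_data_size minus the L0 share, file 47
    out_bytes = 7614982         # compacted output, file 50

    write_amp = out_bytes / l0_in
    rw_amp = (l0_in + l6_in + out_bytes) / l0_in
    print(f"write-amplify={write_amp:.1f} read-write-amplify={rw_amp:.1f}")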
Oct 11 04:54:25 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:54:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:54:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:54:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:54:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:54:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:54:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:54:26 compute-0 podman[269809]: 2025-10-11 04:54:26.458745915 +0000 UTC m=+0.102875180 container health_status 4f5a84ad32cb899a70b226fbd9c8f419e9440a5ac99b188805a8bf8e9253a2d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, config_id=iscsid, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 04:54:26 compute-0 podman[269810]: 2025-10-11 04:54:26.493882416 +0000 UTC m=+0.131682093 container health_status 981a12d55b51d26e636fa4bc8b08d383168dc6afe664f12473554b2b0c589967 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 04:54:26 compute-0 podman[269808]: 2025-10-11 04:54:26.506177114 +0000 UTC m=+0.150170486 container health_status 086abfbe8dbe9e4742c5d0e8540a7653c1c0a5c00ecda3b6e948040a718938cb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
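The three health_status=healthy events above come from podman's per-container healthcheck timers, which run the /openstack/healthcheck test mounted into each service from /var/lib/openstack/healthchecks. A minimal sketch of triggering the same checks by hand; "podman healthcheck run" executes the configured test and exits 0 when it passes.

    # Minimal sketch: manually run the healthchecks for the containers whose
    # periodic health_status events appear above.
    import subprocess

    for name in ("iscsid", "multipathd", "ovn_controller"):
        rc = subprocess.call(["podman", "healthcheck", "run", name])
        print(f"{name}: {'healthy' if rc == 0 else 'unhealthy'}")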
Oct 11 04:54:26 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1006: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:54:26 compute-0 ceph-mon[74243]: pgmap v1006: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:54:28 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1007: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:54:28 compute-0 ceph-mon[74243]: pgmap v1007: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:54:30 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:54:30 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1008: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:54:30 compute-0 ceph-mon[74243]: pgmap v1008: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:54:32 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1009: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:54:32 compute-0 ceph-mon[74243]: pgmap v1009: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:54:34 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1010: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:54:34 compute-0 ceph-mon[74243]: pgmap v1010: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:54:35 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:54:35 compute-0 podman[269874]: 2025-10-11 04:54:35.426870113 +0000 UTC m=+0.078762025 container health_status 7d738271425699243ade5f883e5c9759d51b96fd0ce562173990a66d2fbaffdf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_metadata_agent)
Oct 11 04:54:36 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1011: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:54:36 compute-0 ceph-mon[74243]: pgmap v1011: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:54:38 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1012: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:54:38 compute-0 ceph-mon[74243]: pgmap v1012: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:54:40 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:54:40 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1013: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:54:40 compute-0 ceph-mon[74243]: pgmap v1013: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:54:42 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1014: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:54:42 compute-0 ceph-mon[74243]: pgmap v1014: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:54:44 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1015: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:54:44 compute-0 ceph-mon[74243]: pgmap v1015: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:54:45 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:54:46 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1016: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:54:46 compute-0 ceph-mon[74243]: pgmap v1016: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:54:48 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1017: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:54:48 compute-0 ceph-mon[74243]: pgmap v1017: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:54:50 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:54:50 compute-0 sshd-session[269893]: Accepted publickey for zuul from 192.168.122.10 port 47194 ssh2: ECDSA SHA256:fsXfQ2jtOYwhi5zevC2AykH2PVPp1BPcbFOVNPoZIaQ
Oct 11 04:54:50 compute-0 systemd-logind[801]: New session 54 of user zuul.
Oct 11 04:54:50 compute-0 systemd[1]: Started Session 54 of User zuul.
Oct 11 04:54:50 compute-0 sshd-session[269893]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 11 04:54:50 compute-0 sudo[269897]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp -p container,openstack_edpm,system,storage,virt'
Oct 11 04:54:50 compute-0 sudo[269897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:54:50 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1018: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:54:50 compute-0 ceph-mon[74243]: pgmap v1018: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:54:52 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1019: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:54:52 compute-0 ceph-mon[74243]: pgmap v1019: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:54:53 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.14741 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:54:53 compute-0 ceph-mon[74243]: from='client.14741 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:54:53 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.14743 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:54:54 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Oct 11 04:54:54 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/571924809' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 11 04:54:54 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1020: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:54:54 compute-0 ceph-mon[74243]: from='client.14743 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:54:54 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/571924809' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 11 04:54:54 compute-0 ceph-mon[74243]: pgmap v1020: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:54:55 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:54:56 compute-0 ceph-mgr[74542]: [balancer INFO root] Optimize plan auto_2025-10-11_04:54:56
Oct 11 04:54:56 compute-0 ceph-mgr[74542]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:54:56 compute-0 ceph-mgr[74542]: [balancer INFO root] do_upmap
Oct 11 04:54:56 compute-0 ceph-mgr[74542]: [balancer INFO root] pools ['default.rgw.meta', 'volumes', '.rgw.root', '.mgr', 'vms', 'backups', 'default.rgw.control', 'cephfs.cephfs.meta', 'images', 'cephfs.cephfs.data', 'default.rgw.log']
Oct 11 04:54:56 compute-0 ceph-mgr[74542]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:54:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:54:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:54:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:54:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:54:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:54:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:54:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:54:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:54:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:54:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:54:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:54:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:54:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:54:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:54:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:54:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:54:56 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1021: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:54:56 compute-0 ceph-mon[74243]: pgmap v1021: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:54:57 compute-0 podman[270198]: 2025-10-11 04:54:57.422849474 +0000 UTC m=+0.059428681 container health_status 4f5a84ad32cb899a70b226fbd9c8f419e9440a5ac99b188805a8bf8e9253a2d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=iscsid, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 11 04:54:57 compute-0 podman[270196]: 2025-10-11 04:54:57.448114058 +0000 UTC m=+0.087098845 container health_status 086abfbe8dbe9e4742c5d0e8540a7653c1c0a5c00ecda3b6e948040a718938cb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 11 04:54:57 compute-0 podman[270199]: 2025-10-11 04:54:57.46855226 +0000 UTC m=+0.092861949 container health_status 981a12d55b51d26e636fa4bc8b08d383168dc6afe664f12473554b2b0c589967 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251009)
Oct 11 04:54:58 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1022: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:54:58 compute-0 ceph-mon[74243]: pgmap v1022: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:54:59 compute-0 ovs-vsctl[270286]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Oct 11 04:55:00 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:55:00 compute-0 virtqemud[259122]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Oct 11 04:55:00 compute-0 virtqemud[259122]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Oct 11 04:55:00 compute-0 virtqemud[259122]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Oct 11 04:55:00 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1023: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:55:00 compute-0 ceph-mon[74243]: pgmap v1023: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:55:01 compute-0 ceph-mds[100196]: mds.cephfs.compute-0.jbpltj asok_command: cache status {prefix=cache status} (starting...)
Oct 11 04:55:01 compute-0 ceph-mds[100196]: mds.cephfs.compute-0.jbpltj asok_command: client ls {prefix=client ls} (starting...)
Oct 11 04:55:01 compute-0 lvm[270617]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Oct 11 04:55:01 compute-0 lvm[270617]: VG ceph_vg2 finished
Oct 11 04:55:01 compute-0 lvm[270638]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Oct 11 04:55:01 compute-0 lvm[270638]: VG ceph_vg1 finished
Oct 11 04:55:01 compute-0 lvm[270643]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 11 04:55:01 compute-0 lvm[270643]: VG ceph_vg0 finished
Oct 11 04:55:01 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.14747 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:55:01 compute-0 kernel: block dm-1: the capability attribute has been deprecated.
Oct 11 04:55:02 compute-0 ceph-mds[100196]: mds.cephfs.compute-0.jbpltj asok_command: damage ls {prefix=damage ls} (starting...)
Oct 11 04:55:02 compute-0 ceph-mds[100196]: mds.cephfs.compute-0.jbpltj asok_command: dump loads {prefix=dump loads} (starting...)
Oct 11 04:55:02 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 11 04:55:02 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.14749 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:55:02 compute-0 ceph-mds[100196]: mds.cephfs.compute-0.jbpltj asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Oct 11 04:55:02 compute-0 ceph-mds[100196]: mds.cephfs.compute-0.jbpltj asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Oct 11 04:55:02 compute-0 ceph-mds[100196]: mds.cephfs.compute-0.jbpltj asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Oct 11 04:55:02 compute-0 ceph-mds[100196]: mds.cephfs.compute-0.jbpltj asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Oct 11 04:55:02 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1024: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:55:02 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0) v1
Oct 11 04:55:02 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1450776673' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 11 04:55:02 compute-0 ceph-mon[74243]: from='client.14747 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:55:02 compute-0 ceph-mon[74243]: from='client.14749 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:55:02 compute-0 ceph-mon[74243]: pgmap v1024: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:55:02 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/1450776673' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 11 04:55:02 compute-0 ceph-mds[100196]: mds.cephfs.compute-0.jbpltj asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Oct 11 04:55:02 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.14755 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:55:02 compute-0 ceph-mgr[74542]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 11 04:55:02 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mgr-compute-0-phooxi[74538]: 2025-10-11T04:55:02.963+0000 7fe5f0a93640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 11 04:55:02 compute-0 ceph-mds[100196]: mds.cephfs.compute-0.jbpltj asok_command: get subtrees {prefix=get subtrees} (starting...)
Oct 11 04:55:03 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:55:03 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2771984072' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:55:03 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:55:03 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2771984072' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:55:03 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:55:03 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/879463095' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:55:03 compute-0 ceph-mds[100196]: mds.cephfs.compute-0.jbpltj asok_command: ops {prefix=ops} (starting...)
Oct 11 04:55:03 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Oct 11 04:55:03 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2268733127' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 11 04:55:03 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0) v1
Oct 11 04:55:03 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2237646423' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 11 04:55:03 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Oct 11 04:55:03 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2063046435' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 11 04:55:03 compute-0 ceph-mon[74243]: from='client.14755 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:55:03 compute-0 ceph-mon[74243]: from='client.? 192.168.122.10:0/2771984072' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:55:03 compute-0 ceph-mon[74243]: from='client.? 192.168.122.10:0/2771984072' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:55:03 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/879463095' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:55:03 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/2268733127' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 11 04:55:03 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/2237646423' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 11 04:55:03 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/2063046435' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 11 04:55:03 compute-0 ceph-mds[100196]: mds.cephfs.compute-0.jbpltj asok_command: session ls {prefix=session ls} (starting...)
Oct 11 04:55:04 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Oct 11 04:55:04 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2593830543' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 11 04:55:04 compute-0 ceph-mds[100196]: mds.cephfs.compute-0.jbpltj asok_command: status {prefix=status} (starting...)
Oct 11 04:55:04 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Oct 11 04:55:04 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4246828457' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 11 04:55:04 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.14773 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:55:04 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Oct 11 04:55:04 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1011573651' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 11 04:55:04 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.14777 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:55:04 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1025: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:55:04 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/2593830543' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 11 04:55:04 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/4246828457' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 11 04:55:04 compute-0 ceph-mon[74243]: from='client.14773 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:55:04 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/1011573651' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 11 04:55:04 compute-0 ceph-mon[74243]: from='client.14777 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:55:04 compute-0 ceph-mon[74243]: pgmap v1025: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:55:04 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct 11 04:55:04 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3350478586' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 11 04:55:05 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:55:05 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0) v1
Oct 11 04:55:05 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/315343317' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 11 04:55:05 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Oct 11 04:55:05 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3583149046' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 11 04:55:05 compute-0 podman[271187]: 2025-10-11 04:55:05.557270122 +0000 UTC m=+0.087264779 container health_status 7d738271425699243ade5f883e5c9759d51b96fd0ce562173990a66d2fbaffdf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 11 04:55:05 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Oct 11 04:55:05 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1561121604' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 11 04:55:05 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Oct 11 04:55:05 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2451453898' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 11 04:55:05 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/3350478586' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 11 04:55:05 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/315343317' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 11 04:55:05 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/3583149046' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 11 04:55:05 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/1561121604' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 11 04:55:05 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/2451453898' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 11 04:55:06 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.14789 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:55:06 compute-0 ceph-mgr[74542]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 11 04:55:06 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mgr-compute-0-phooxi[74538]: 2025-10-11T04:55:06.009+0000 7fe5f0a93640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 11 04:55:06 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.14791 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:55:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:55:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:55:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:55:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:55:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 11 04:55:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:55:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 11 04:55:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:55:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:55:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:55:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 11 04:55:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:55:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 11 04:55:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:55:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:55:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:55:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:55:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:55:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:55:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:55:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:55:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:55:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 04:55:06 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Oct 11 04:55:06 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3448803527' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 11 04:55:06 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.14795 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:55:06 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1026: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:55:06 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Oct 11 04:55:06 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3805104131' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 11 04:55:06 compute-0 ceph-mon[74243]: from='client.14789 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:55:06 compute-0 ceph-mon[74243]: from='client.14791 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:55:06 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/3448803527' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 11 04:55:06 compute-0 ceph-mon[74243]: from='client.14795 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:55:06 compute-0 ceph-mon[74243]: pgmap v1026: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:55:06 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/3805104131' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 11 04:55:06 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.14799 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:55:07 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Oct 11 04:55:07 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3709712353' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 11 04:55:07 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.14803 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[9.7( v 41'581 lc 0'0 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 remapped NOTIFY m=6 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[6.8( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=0 pi=[47,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000051 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[6.8( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=0 pi=[47,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[6.8( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000013 1 0.000033
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[6.8( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[6.8( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[6.8( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[6.8( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000010 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[6.8( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[6.8( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[6.8( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[6.8( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000150 1 0.000069
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[6.8( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[9.8(unlocked)] enter Initial
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=66) [2] r=0 lpr=0 pi=[49,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000063 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[9.f( v 41'581 lc 0'0 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 remapped NOTIFY m=7 mbc={}] exit Started/Stray 1.016837 6 0.000052
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[9.f( v 41'581 lc 0'0 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[9.f( v 41'581 lc 0'0 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=66) [2] r=0 lpr=0 pi=[49,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=66) [2] r=0 lpr=66 pi=[49,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000152 1 0.000178
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=66) [2] r=0 lpr=66 pi=[49,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=66) [2] r=0 lpr=66 pi=[49,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=66) [2] r=0 lpr=66 pi=[49,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=66) [2] r=0 lpr=66 pi=[49,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000090 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=66) [2] r=0 lpr=66 pi=[49,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=66) [2] r=0 lpr=66 pi=[49,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=66) [2] r=0 lpr=66 pi=[49,66)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=66) [2] r=0 lpr=66 pi=[49,66)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000204 1 0.000323
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=66) [2] r=0 lpr=66 pi=[49,66)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=66) [2] r=0 lpr=66 pi=[49,66)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000063 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[9.18(unlocked)] enter Initial
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=66) [2] r=0 lpr=66 pi=[49,66)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000366 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=66) [2] r=0 lpr=66 pi=[49,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=66) [2] r=0 lpr=0 pi=[49,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000058 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=66) [2] r=0 lpr=0 pi=[49,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=66) [2] r=0 lpr=66 pi=[49,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000015 1 0.000080
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=66) [2] r=0 lpr=66 pi=[49,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=66) [2] r=0 lpr=66 pi=[49,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=66) [2] r=0 lpr=66 pi=[49,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=66) [2] r=0 lpr=66 pi=[49,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000191 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=66) [2] r=0 lpr=66 pi=[49,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=66) [2] r=0 lpr=66 pi=[49,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=66) [2] r=0 lpr=66 pi=[49,66)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=66) [2] r=0 lpr=66 pi=[49,66)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000113 1 0.000247
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=66) [2] r=0 lpr=66 pi=[49,66)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=66) [2] r=0 lpr=66 pi=[49,66)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000039 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=66) [2] r=0 lpr=66 pi=[49,66)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000189 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=66) [2] r=0 lpr=66 pi=[49,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[6.8( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=47/48 n=1 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=38'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001942 2 0.000148
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[6.8( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=47/48 n=1 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=38'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[6.8( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=47/48 n=1 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=38'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000024 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[6.8( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=47/48 n=1 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=38'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.6] failed. State was: unregistering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[9.6( v 41'581 lc 40'85 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 luod=0'0 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped m=6 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.014628 3 0.000122
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[9.6( v 41'581 lc 40'85 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 luod=0'0 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped m=6 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[9.6( v 41'581 lc 40'85 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 luod=0'0 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped m=6 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000077 1 0.000058
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[9.6( v 41'581 lc 40'85 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 luod=0'0 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped m=6 mbc={}] enter Started/ReplicaActive/RepRecovering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.f] failed. State was: unregistering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.7] failed. State was: unregistering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.17] failed. State was: unregistering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[9.f( v 41'581 lc 40'54 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[57,65)/1 luod=0'0 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.023547 3 0.000812
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[9.f( v 41'581 lc 40'54 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[57,65)/1 luod=0'0 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.e] failed. State was: unregistering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.061076 1 0.000042
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[9.16( v 41'581 lc 40'103 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 luod=0'0 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped m=6 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.075567 3 0.000288
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[9.16( v 41'581 lc 40'103 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 luod=0'0 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped m=6 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[9.f( v 41'581 lc 40'54 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[57,65)/1 luod=0'0 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.049796 1 0.000071
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[9.f( v 41'581 lc 40'54 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[57,65)/1 luod=0'0 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepRecovering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[57,65)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.057610 1 0.000085
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[57,65)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[9.7( v 41'581 lc 39'43 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[57,65)/1 luod=0'0 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped m=6 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.131979 3 0.000181
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[9.7( v 41'581 lc 39'43 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[57,65)/1 luod=0'0 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped m=6 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[9.16( v 41'581 lc 40'103 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 luod=0'0 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped m=6 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.058654 1 0.000059
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[9.16( v 41'581 lc 40'103 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 luod=0'0 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped m=6 mbc={}] enter Started/ReplicaActive/RepRecovering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.045917 1 0.000123
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[9.e( v 41'581 lc 40'72 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 luod=0'0 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped m=6 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.180555 3 0.000299
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[9.e( v 41'581 lc 40'72 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 luod=0'0 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped m=6 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[9.7( v 41'581 lc 39'43 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[57,65)/1 luod=0'0 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped m=6 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.046341 1 0.000107
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[9.7( v 41'581 lc 39'43 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[57,65)/1 luod=0'0 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped m=6 mbc={}] enter Started/ReplicaActive/RepRecovering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[57,65)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.051121 1 0.000039
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[57,65)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[9.e( v 41'581 lc 40'72 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 luod=0'0 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped m=6 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.051510 1 0.000043
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[9.e( v 41'581 lc 40'72 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 luod=0'0 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped m=6 mbc={}] enter Started/ReplicaActive/RepRecovering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[9.1f( v 41'581 lc 40'113 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[57,65)/1 luod=0'0 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped m=6 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.230206 3 0.000208
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[9.1f( v 41'581 lc 40'113 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[57,65)/1 luod=0'0 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped m=6 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.052527 1 0.000046
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[9.1e( v 41'581 lc 40'232 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 luod=0'0 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped m=6 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.283726 3 0.000100
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[9.1e( v 41'581 lc 40'232 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 luod=0'0 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped m=6 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[9.1f( v 41'581 lc 40'113 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[57,65)/1 luod=0'0 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped m=6 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.052490 1 0.000075
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[9.1f( v 41'581 lc 40'113 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[57,65)/1 luod=0'0 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped m=6 mbc={}] enter Started/ReplicaActive/RepRecovering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[57,65)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.049597 1 0.000100
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[57,65)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[9.17( v 41'581 lc 39'41 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[57,65)/1 luod=0'0 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.332544 3 0.000246
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[9.17( v 41'581 lc 39'41 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[57,65)/1 luod=0'0 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[9.1e( v 41'581 lc 40'232 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 luod=0'0 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped m=6 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.049875 1 0.000076
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[9.1e( v 41'581 lc 40'232 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 luod=0'0 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped m=6 mbc={}] enter Started/ReplicaActive/RepRecovering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 62464000 unmapped: 1441792 heap: 63905792 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.051821 1 0.000045
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[9.17( v 41'581 lc 39'41 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[57,65)/1 luod=0'0 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.052046 1 0.000033
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[9.17( v 41'581 lc 39'41 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[57,65)/1 luod=0'0 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepRecovering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[57,65)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.033486 1 0.000117
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 66 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[57,65)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:01.329302+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 66 handle_osd_map epochs [66,67], i have 66, src has [1,67]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 66 handle_osd_map epochs [67,67], i have 67, src has [1,67]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=66) [2] r=0 lpr=66 pi=[49,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 1.017434 2 0.000089
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.634969 1 0.000070
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=66) [2] r=0 lpr=66 pi=[49,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 1.017676 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.020528 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.945352 1 0.000319
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] exit Started 2.034748 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=66) [2] r=0 lpr=66 pi=[49,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 1.017897 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=66) [2] r=0 lpr=66 pi=[49,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] enter Reset
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.021513 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] exit Started 2.034552 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.18] failed. State was: unregistering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] enter Reset
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[49,67)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 luod=0'0 crt=41'581 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 luod=0'0 crt=41'581 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.18] failed. State was: unregistering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[49,67)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000111 1 0.000179
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[49,67)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[49,67)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[49,67)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[49,67)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000013 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[49,67)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.18] failed. State was: unregistering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.841154 1 0.000060
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.021452 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] exit Started 2.036151 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] enter Reset
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.737076 1 0.000145
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.021847 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] exit Started 2.036181 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] enter Reset
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 unknown mbc={}] exit Reset 0.001284 1 0.001395
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 unknown mbc={}] enter Started
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 unknown mbc={}] enter Start
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 unknown mbc={}] exit Start 0.000011 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000054 1 0.000081
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=66) [2] r=0 lpr=66 pi=[49,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 1.019503 2 0.000226
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=66) [2] r=0 lpr=66 pi=[49,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 1.019981 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=66) [2] r=0 lpr=66 pi=[49,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 1.020128 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=66) [2] r=0 lpr=66 pi=[49,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.8] failed. State was: unregistering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[49,67)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.8] failed. State was: unregistering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[49,67)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000104 1 0.000122
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[49,67)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[49,67)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[49,67)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[49,67)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000011 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[49,67)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.8] failed. State was: unregistering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 67 handle_osd_map epochs [67,67], i have 67, src has [1,67]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 luod=0'0 crt=41'581 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 unknown mbc={}] exit Reset 0.001728 1 0.001759
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 unknown mbc={}] enter Started
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 unknown mbc={}] enter Start
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 unknown mbc={}] exit Reset 0.002097 1 0.002147
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 unknown mbc={}] exit Start 0.000012 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 unknown mbc={}] enter Started
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 unknown mbc={}] enter Start
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 unknown mbc={}] exit Start 0.000017 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000043 1 0.000071
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000060 1 0.000106
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 luod=0'0 crt=41'581 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[57,65)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.604596 1 0.000085
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[57,65)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.022863 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[57,65)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] exit Started 2.039123 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[57,65)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] enter Reset
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 luod=0'0 crt=41'581 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 unknown mbc={}] exit Reset 0.003356 1 0.003392
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 unknown mbc={}] enter Started
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 unknown mbc={}] enter Start
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 unknown mbc={}] exit Start 0.000011 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[57,65)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.891305 1 0.000101
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000044 1 0.000105
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[57,65)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.022416 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[57,65)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] exit Started 2.039285 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[57,65)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] enter Reset
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 luod=0'0 crt=41'581 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 unknown mbc={}] exit Reset 0.000064 1 0.000099
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 unknown mbc={}] enter Started
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 unknown mbc={}] enter Start
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 unknown mbc={}] exit Start 0.000011 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[57,65)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.793598 1 0.000065
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[57,65)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.023177 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[57,65)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] exit Started 2.039143 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[6.8( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=47/48 n=1 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=38'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.020873 2 0.000176
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[57,65)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] enter Reset
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[6.8( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=47/48 n=1 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=38'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.023079 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[6.8( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=47/48 n=1 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=38'39 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 luod=0'0 crt=41'581 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[6.8( v 38'39 (0'0,38'39] local-lis/les=66/67 n=1 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=38'39 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 unknown mbc={}] exit Reset 0.000055 1 0.000084
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 unknown mbc={}] enter Started
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 unknown mbc={}] enter Start
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 unknown mbc={}] exit Start 0.000010 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 unknown mbc={}] exit Reset 0.000800 1 0.000845
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 unknown mbc={}] enter Started
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 unknown mbc={}] enter Start
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 unknown mbc={}] exit Start 0.000015 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[57,65)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.691300 1 0.000057
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[57,65)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.023744 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[57,65)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] exit Started 2.039086 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[57,65)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] enter Reset
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 luod=0'0 crt=41'581 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 unknown mbc={}] exit Reset 0.000093 1 0.000132
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 unknown mbc={}] enter Started
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 unknown mbc={}] enter Start
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 unknown mbc={}] exit Start 0.000015 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000483 2 0.000090
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 11 04:55:07 compute-0 ceph-osd[89565]: merge_log_dups log.dups.size()=0 olog.dups.size()=17
Oct 11 04:55:07 compute-0 ceph-osd[89565]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=17
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.002716 3 0.000066
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000010 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct 11 04:55:07 compute-0 ceph-osd[89565]: merge_log_dups log.dups.size()=0 olog.dups.size()=16
Oct 11 04:55:07 compute-0 ceph-osd[89565]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=16
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.002806 3 0.000085
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000010 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.001196 2 0.000779
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 11 04:55:07 compute-0 ceph-osd[89565]: merge_log_dups log.dups.size()=0 olog.dups.size()=12
Oct 11 04:55:07 compute-0 ceph-osd[89565]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=12
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.003917 3 0.000066
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000008 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.001516 2 0.000067
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000972 2 0.000083
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 11 04:55:07 compute-0 ceph-osd[89565]: merge_log_dups log.dups.size()=0olog.dups.size()=21
Oct 11 04:55:07 compute-0 ceph-osd[89565]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=21
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001794 3 0.000057
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000009 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct 11 04:55:07 compute-0 ceph-osd[89565]: merge_log_dups log.dups.size()=0olog.dups.size()=12
Oct 11 04:55:07 compute-0 ceph-osd[89565]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=12
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000955 2 0.000200
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000010 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct 11 04:55:07 compute-0 ceph-osd[89565]: merge_log_dups log.dups.size()=0olog.dups.size()=15
Oct 11 04:55:07 compute-0 ceph-osd[89565]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=15
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000754 2 0.000063
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000007 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct 11 04:55:07 compute-0 ceph-osd[89565]: merge_log_dups log.dups.size()=0olog.dups.size()=17
Oct 11 04:55:07 compute-0 ceph-osd[89565]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=17
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.002142 2 0.000062
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000041 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: merge_log_dups log.dups.size()=0olog.dups.size()=19
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct 11 04:55:07 compute-0 ceph-osd[89565]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=19
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.002384 2 0.000085
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000007 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[6.8( v 38'39 (0'0,38'39] local-lis/les=66/67 n=1 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[6.8( v 38'39 (0'0,38'39] local-lis/les=66/67 n=1 ec=47/21 lis/c=66/47 les/c/f=67/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007742 4 0.000158
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[6.8( v 38'39 (0'0,38'39] local-lis/les=66/67 n=1 ec=47/21 lis/c=66/47 les/c/f=67/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[6.8( v 38'39 (0'0,38'39] local-lis/les=66/67 n=1 ec=47/21 lis/c=66/47 les/c/f=67/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000018 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 67 pg[6.8( v 38'39 (0'0,38'39] local-lis/les=66/67 n=1 ec=47/21 lis/c=66/47 les/c/f=67/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 529307 data_alloc: 218103808 data_used: 49152
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 62455808 unmapped: 1449984 heap: 63905792 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:02.329484+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 67 handle_osd_map epochs [68,68], i have 67, src has [1,68]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.828875542s of 10.166488647s, submitted: 121
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 67 handle_osd_map epochs [68,68], i have 68, src has [1,68]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 68 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.005632 2 0.000120
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 68 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.007556 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 68 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 68 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 68 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.006547 2 0.000070
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 68 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.009379 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 68 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 68 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=67/68 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 68 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.005908 2 0.000073
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 68 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.007498 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 68 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 68 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 68 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.005846 2 0.000077
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 68 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.008197 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 68 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 68 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=67/68 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 68 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.004680 2 0.000066
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 68 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.008353 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 68 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 68 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.007067 2 0.000069
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 68 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.011143 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 68 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=67/68 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 68 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 68 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=67/68 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 68 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.005283 2 0.000118
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 68 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.008510 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 68 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 68 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Oct 11 04:55:07 compute-0 ceph-osd[89565]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 68 pg[9.8( v 41'581 lc 0'0 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 remapped NOTIFY m=9 mbc={}] exit Started/Stray 1.011685 6 0.000102
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 68 pg[9.8( v 41'581 lc 0'0 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 remapped NOTIFY m=9 mbc={}] enter Started/ReplicaActive
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 68 pg[9.8( v 41'581 lc 0'0 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 remapped NOTIFY m=9 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 68 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.007864 2 0.000083
Oct 11 04:55:07 compute-0 ceph-osd[89565]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 68 pg[9.18( v 41'581 lc 0'0 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 remapped NOTIFY m=5 mbc={}] exit Started/Stray 1.013417 6 0.000081
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 68 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.011364 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 68 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 68 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 68 pg[9.18( v 41'581 lc 0'0 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 68 pg[9.18( v 41'581 lc 0'0 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 68 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 68 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 68 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005626 4 0.000181
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 68 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/57 les/c/f=68/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005237 3 0.000089
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 68 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 68 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/57 les/c/f=68/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 68 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000010 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 68 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 68 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/57 les/c/f=68/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000012 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 68 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/57 les/c/f=68/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 68 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=67/68 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 68 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=67/68 n=7 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005873 4 0.000259
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 68 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=67/68 n=7 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 68 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=67/68 n=7 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000011 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 68 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=67/68 n=7 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 68 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=67/68 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 68 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=67/68 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 68 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 68 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=67/68 n=7 ec=49/34 lis/c=67/57 les/c/f=68/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.010021 3 0.000142
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 68 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=67/68 n=7 ec=49/34 lis/c=67/57 les/c/f=68/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 68 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=67/68 n=7 ec=49/34 lis/c=67/57 les/c/f=68/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 68 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=67/68 n=7 ec=49/34 lis/c=67/57 les/c/f=68/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 68 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=67/68 n=7 ec=49/34 lis/c=67/57 les/c/f=68/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009659 3 0.000186
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 68 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=67/68 n=7 ec=49/34 lis/c=67/57 les/c/f=68/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 68 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=67/68 n=7 ec=49/34 lis/c=67/57 les/c/f=68/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000013 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 68 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=67/68 n=7 ec=49/34 lis/c=67/57 les/c/f=68/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 68 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/57 les/c/f=68/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009234 3 0.000081
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 68 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/57 les/c/f=68/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 68 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/57 les/c/f=68/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 68 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/57 les/c/f=68/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 68 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 68 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008640 4 0.000936
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 68 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 68 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000017 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 68 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 68 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=67/68 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 68 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=67/68 n=7 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.010831 4 0.001061
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 68 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=67/68 n=7 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 68 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=67/68 n=7 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000013 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 68 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=67/68 n=7 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.8] failed. State was: unregistering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.18] failed. State was: unregistering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 68 pg[9.18( v 41'581 lc 39'36 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[49,67)/1 luod=0'0 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.010506 3 0.000686
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 68 pg[9.18( v 41'581 lc 39'36 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[49,67)/1 luod=0'0 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 68 pg[9.18( v 41'581 lc 39'36 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[49,67)/1 luod=0'0 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000111 1 0.000053
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 68 pg[9.18( v 41'581 lc 39'36 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[49,67)/1 luod=0'0 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 68 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[49,67)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.044144 1 0.000145
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 68 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[49,67)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 68 pg[9.8( v 41'581 lc 40'74 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[49,67)/1 luod=0'0 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped m=9 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.055773 3 0.000168
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 68 pg[9.8( v 41'581 lc 40'74 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[49,67)/1 luod=0'0 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped m=9 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 68 pg[9.8( v 41'581 lc 40'74 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[49,67)/1 luod=0'0 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped m=9 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000135 1 0.000110
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 68 pg[9.8( v 41'581 lc 40'74 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[49,67)/1 luod=0'0 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped m=9 mbc={}] enter Started/ReplicaActive/RepRecovering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 68 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[49,67)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.076511 1 0.000058
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 68 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[49,67)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 62627840 unmapped: 1277952 heap: 63905792 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 68 handle_osd_map epochs [68,68], i have 68, src has [1,68]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 68 handle_osd_map epochs [68,68], i have 68, src has [1,68]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:03.329626+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 68 handle_osd_map epochs [69,69], i have 68, src has [1,69]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 68 handle_osd_map epochs [69,69], i have 69, src has [1,69]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 69 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[49,67)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.971307 1 0.000055
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 69 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[49,67)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.026400 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 69 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[49,67)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] exit Started 2.039943 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 69 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[49,67)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] enter Reset
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 69 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=69) [2] r=0 lpr=69 pi=[49,69)/1 luod=0'0 crt=41'581 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 69 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=69) [2] r=0 lpr=69 pi=[49,69)/1 crt=41'581 mlcod 0'0 unknown mbc={}] exit Reset 0.000121 1 0.000161
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 69 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=69) [2] r=0 lpr=69 pi=[49,69)/1 crt=41'581 mlcod 0'0 unknown mbc={}] enter Started
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 69 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=69) [2] r=0 lpr=69 pi=[49,69)/1 crt=41'581 mlcod 0'0 unknown mbc={}] enter Start
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 69 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=69) [2] r=0 lpr=69 pi=[49,69)/1 crt=41'581 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 69 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=69) [2] r=0 lpr=69 pi=[49,69)/1 crt=41'581 mlcod 0'0 unknown mbc={}] exit Start 0.000007 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 69 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=69) [2] r=0 lpr=69 pi=[49,69)/1 crt=41'581 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 69 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=69) [2] r=0 lpr=69 pi=[49,69)/1 crt=41'581 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 69 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=69) [2] r=0 lpr=69 pi=[49,69)/1 crt=41'581 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 69 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=69) [2] r=0 lpr=69 pi=[49,69)/1 crt=41'581 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000042 1 0.000047
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 69 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=69) [2] r=0 lpr=69 pi=[49,69)/1 crt=41'581 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 69 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[49,67)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.894942 1 0.000058
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 69 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[49,67)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.027614 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 69 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[49,67)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] exit Started 2.039723 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 69 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[49,67)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] enter Reset
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 69 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=69) [2] r=0 lpr=69 pi=[49,69)/1 luod=0'0 crt=41'581 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 69 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=69) [2] r=0 lpr=69 pi=[49,69)/1 crt=41'581 mlcod 0'0 unknown mbc={}] exit Reset 0.000263 1 0.001228
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 69 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=69) [2] r=0 lpr=69 pi=[49,69)/1 crt=41'581 mlcod 0'0 unknown mbc={}] enter Started
Oct 11 04:55:07 compute-0 ceph-osd[89565]: merge_log_dups log.dups.size()=0olog.dups.size()=17
Oct 11 04:55:07 compute-0 ceph-osd[89565]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=17
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 69 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=69) [2] r=0 lpr=69 pi=[49,69)/1 crt=41'581 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001558 3 0.000049
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 69 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=69) [2] r=0 lpr=69 pi=[49,69)/1 crt=41'581 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 69 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=69) [2] r=0 lpr=69 pi=[49,69)/1 crt=41'581 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 69 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=69) [2] r=0 lpr=69 pi=[49,69)/1 crt=41'581 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 69 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=69) [2] r=0 lpr=69 pi=[49,69)/1 crt=41'581 mlcod 0'0 unknown mbc={}] enter Start
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 69 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=69) [2] r=0 lpr=69 pi=[49,69)/1 crt=41'581 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 69 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=69) [2] r=0 lpr=69 pi=[49,69)/1 crt=41'581 mlcod 0'0 unknown mbc={}] exit Start 0.000162 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 69 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=69) [2] r=0 lpr=69 pi=[49,69)/1 crt=41'581 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 69 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=69) [2] r=0 lpr=69 pi=[49,69)/1 crt=41'581 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 69 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=69) [2] r=0 lpr=69 pi=[49,69)/1 crt=41'581 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 69 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=69) [2] r=0 lpr=69 pi=[49,69)/1 crt=41'581 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000086 1 0.000482
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 69 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=69) [2] r=0 lpr=69 pi=[49,69)/1 crt=41'581 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 11 04:55:07 compute-0 ceph-osd[89565]: merge_log_dups log.dups.size()=0olog.dups.size()=27
Oct 11 04:55:07 compute-0 ceph-osd[89565]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=27
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 69 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=67/68 n=7 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=69) [2] r=0 lpr=69 pi=[49,69)/1 crt=41'581 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000505 3 0.000124
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 69 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=67/68 n=7 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=69) [2] r=0 lpr=69 pi=[49,69)/1 crt=41'581 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 69 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=67/68 n=7 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=69) [2] r=0 lpr=69 pi=[49,69)/1 crt=41'581 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 69 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=67/68 n=7 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=69) [2] r=0 lpr=69 pi=[49,69)/1 crt=41'581 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 62636032 unmapped: 1269760 heap: 63905792 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:04.329781+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 69 handle_osd_map epochs [69,70], i have 69, src has [1,70]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 70 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=67/68 n=7 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=69) [2] r=0 lpr=69 pi=[49,69)/1 crt=41'581 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.012596 2 0.000047
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 70 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=67/68 n=7 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=69) [2] r=0 lpr=69 pi=[49,69)/1 crt=41'581 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.013322 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 70 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=67/68 n=7 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=69) [2] r=0 lpr=69 pi=[49,69)/1 crt=41'581 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 70 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=69/70 n=7 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=69) [2] r=0 lpr=69 pi=[49,69)/1 crt=41'581 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 70 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=69) [2] r=0 lpr=69 pi=[49,69)/1 crt=41'581 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.015021 2 0.000073
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 70 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=69) [2] r=0 lpr=69 pi=[49,69)/1 crt=41'581 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.016688 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 70 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=69) [2] r=0 lpr=69 pi=[49,69)/1 crt=41'581 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 70 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=69/70 n=6 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=69) [2] r=0 lpr=69 pi=[49,69)/1 crt=41'581 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 70 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=69/70 n=7 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=69) [2] r=0 lpr=69 pi=[49,69)/1 crt=41'581 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 70 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=69/70 n=7 ec=49/34 lis/c=69/49 les/c/f=70/50/0 sis=69) [2] r=0 lpr=69 pi=[49,69)/1 crt=41'581 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005014 3 0.000166
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 70 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=69/70 n=7 ec=49/34 lis/c=69/49 les/c/f=70/50/0 sis=69) [2] r=0 lpr=69 pi=[49,69)/1 crt=41'581 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 70 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=69/70 n=7 ec=49/34 lis/c=69/49 les/c/f=70/50/0 sis=69) [2] r=0 lpr=69 pi=[49,69)/1 crt=41'581 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000022 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 70 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=69/70 n=7 ec=49/34 lis/c=69/49 les/c/f=70/50/0 sis=69) [2] r=0 lpr=69 pi=[49,69)/1 crt=41'581 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 70 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=69/70 n=6 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=69) [2] r=0 lpr=69 pi=[49,69)/1 crt=41'581 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 70 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=69/70 n=6 ec=49/34 lis/c=69/49 les/c/f=70/50/0 sis=69) [2] r=0 lpr=69 pi=[49,69)/1 crt=41'581 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005998 3 0.000214
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 70 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=69/70 n=6 ec=49/34 lis/c=69/49 les/c/f=70/50/0 sis=69) [2] r=0 lpr=69 pi=[49,69)/1 crt=41'581 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 70 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=69/70 n=6 ec=49/34 lis/c=69/49 les/c/f=70/50/0 sis=69) [2] r=0 lpr=69 pi=[49,69)/1 crt=41'581 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000012 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 70 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=69/70 n=6 ec=49/34 lis/c=69/49 les/c/f=70/50/0 sis=69) [2] r=0 lpr=69 pi=[49,69)/1 crt=41'581 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 62660608 unmapped: 1245184 heap: 63905792 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 70 handle_osd_map epochs [70,70], i have 70, src has [1,70]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 70 heartbeat osd_stat(store_statfs(0x4fe102000/0x0/0x4ffc00000, data 0x60204/0xc9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:05.329955+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 5.b deep-scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 5.b deep-scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 62668800 unmapped: 1236992 heap: 63905792 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:06.330139+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 33 sent 31 num 2 unsent 2 sending 2
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:28:35.355405+0000 osd.2 (osd.2) 32 : cluster [DBG] 5.b deep-scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:28:35.369480+0000 osd.2 (osd.2) 33 : cluster [DBG] 5.b deep-scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 5.d scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 5.d scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 33) v1
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:28:35.355405+0000 osd.2 (osd.2) 32 : cluster [DBG] 5.b deep-scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:28:35.369480+0000 osd.2 (osd.2) 33 : cluster [DBG] 5.b deep-scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561885 data_alloc: 218103808 data_used: 49152
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 62676992 unmapped: 1228800 heap: 63905792 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 70 heartbeat osd_stat(store_statfs(0x4fe101000/0x0/0x4ffc00000, data 0x61c37/0xcc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:07.330384+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 35 sent 33 num 2 unsent 2 sending 2
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:28:36.367645+0000 osd.2 (osd.2) 34 : cluster [DBG] 5.d scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:28:36.381744+0000 osd.2 (osd.2) 35 : cluster [DBG] 5.d scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 35) v1
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:28:36.367645+0000 osd.2 (osd.2) 34 : cluster [DBG] 5.d scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:28:36.381744+0000 osd.2 (osd.2) 35 : cluster [DBG] 5.d scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 62676992 unmapped: 1228800 heap: 63905792 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:08.330608+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 70 handle_osd_map epochs [70,71], i have 70, src has [1,71]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 62799872 unmapped: 1105920 heap: 63905792 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:09.330805+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 5.e scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 5.e scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x56328f807800
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 71 ms_handle_reset con 0x56328f807800 session 0x56328e6ccd20
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 63537152 unmapped: 368640 heap: 63905792 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x56328f51f000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:10.331000+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 37 sent 35 num 2 unsent 2 sending 2
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:28:39.334850+0000 osd.2 (osd.2) 36 : cluster [DBG] 5.e scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:28:39.348925+0000 osd.2 (osd.2) 37 : cluster [DBG] 5.e scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 37) v1
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:28:39.334850+0000 osd.2 (osd.2) 36 : cluster [DBG] 5.e scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:28:39.348925+0000 osd.2 (osd.2) 37 : cluster [DBG] 5.e scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 71 ms_handle_reset con 0x56328f51f000 session 0x56328ec57680
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 63610880 unmapped: 294912 heap: 63905792 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:11.331196+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 71 handle_osd_map epochs [72,73], i have 71, src has [1,73]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 574954 data_alloc: 218103808 data_used: 65536
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 63619072 unmapped: 286720 heap: 63905792 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 73 heartbeat osd_stat(store_statfs(0x4fe0fe000/0x0/0x4ffc00000, data 0x63abb/0xd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:12.331370+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 63619072 unmapped: 286720 heap: 63905792 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:13.331493+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 63635456 unmapped: 270336 heap: 63905792 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 73 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x6720e/0xd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:14.331675+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.796347618s of 12.024861336s, submitted: 82
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 63660032 unmapped: 245760 heap: 63905792 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:15.331864+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 39 sent 37 num 2 unsent 2 sending 2
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:28:44.395660+0000 osd.2 (osd.2) 38 : cluster [DBG] 5.10 scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:28:44.409807+0000 osd.2 (osd.2) 39 : cluster [DBG] 5.10 scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 39) v1
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:28:44.395660+0000 osd.2 (osd.2) 38 : cluster [DBG] 5.10 scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:28:44.409807+0000 osd.2 (osd.2) 39 : cluster [DBG] 5.10 scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 63660032 unmapped: 245760 heap: 63905792 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:16.332087+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 41 sent 39 num 2 unsent 2 sending 2
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:28:45.428402+0000 osd.2 (osd.2) 40 : cluster [DBG] 5.17 scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:28:45.442443+0000 osd.2 (osd.2) 41 : cluster [DBG] 5.17 scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 41) v1
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:28:45.428402+0000 osd.2 (osd.2) 40 : cluster [DBG] 5.17 scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:28:45.442443+0000 osd.2 (osd.2) 41 : cluster [DBG] 5.17 scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 575666 data_alloc: 218103808 data_used: 65536
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 63676416 unmapped: 229376 heap: 63905792 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:17.332408+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 63676416 unmapped: 229376 heap: 63905792 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:18.332566+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 73 handle_osd_map epochs [73,74], i have 73, src has [1,74]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 63692800 unmapped: 212992 heap: 63905792 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 74 heartbeat osd_stat(store_statfs(0x4fe0f8000/0x0/0x4ffc00000, data 0x6720e/0xd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:19.332708+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 43 sent 41 num 2 unsent 2 sending 2
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:28:48.508062+0000 osd.2 (osd.2) 42 : cluster [DBG] 5.1b scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:28:48.522187+0000 osd.2 (osd.2) 43 : cluster [DBG] 5.1b scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 43) v1
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:28:48.508062+0000 osd.2 (osd.2) 42 : cluster [DBG] 5.1b scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:28:48.522187+0000 osd.2 (osd.2) 43 : cluster [DBG] 5.1b scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 63733760 unmapped: 172032 heap: 63905792 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 74 handle_osd_map epochs [75,75], i have 74, src has [1,75]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:20.332976+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 63750144 unmapped: 155648 heap: 63905792 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 75 handle_osd_map epochs [76,76], i have 75, src has [1,76]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 76 pg[9.1c(unlocked)] enter Initial
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 76 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=76) [2] r=0 lpr=0 pi=[49,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000121 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 76 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=76) [2] r=0 lpr=0 pi=[49,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 76 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=76) [2] r=0 lpr=76 pi=[49,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000019 1 0.000039
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 76 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=76) [2] r=0 lpr=76 pi=[49,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 76 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=76) [2] r=0 lpr=76 pi=[49,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 76 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=76) [2] r=0 lpr=76 pi=[49,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 76 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=76) [2] r=0 lpr=76 pi=[49,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000011 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 76 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=76) [2] r=0 lpr=76 pi=[49,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 76 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=76) [2] r=0 lpr=76 pi=[49,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 76 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=76) [2] r=0 lpr=76 pi=[49,76)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 76 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=76) [2] r=0 lpr=76 pi=[49,76)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000196 1 0.000069
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 76 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=76) [2] r=0 lpr=76 pi=[49,76)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 76 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=76) [2] r=0 lpr=76 pi=[49,76)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000046 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 76 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=76) [2] r=0 lpr=76 pi=[49,76)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000265 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 76 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=76) [2] r=0 lpr=76 pi=[49,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 76 pg[9.c(unlocked)] enter Initial
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 76 pg[9.c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=76) [2] r=0 lpr=0 pi=[49,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000044 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 76 pg[9.c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=76) [2] r=0 lpr=0 pi=[49,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 76 pg[9.c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=76) [2] r=0 lpr=76 pi=[49,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000014 1 0.000024
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 76 pg[9.c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=76) [2] r=0 lpr=76 pi=[49,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 76 pg[9.c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=76) [2] r=0 lpr=76 pi=[49,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 76 pg[9.c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=76) [2] r=0 lpr=76 pi=[49,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 76 pg[9.c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=76) [2] r=0 lpr=76 pi=[49,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000009 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 76 pg[9.c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=76) [2] r=0 lpr=76 pi=[49,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 76 pg[9.c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=76) [2] r=0 lpr=76 pi=[49,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 76 pg[9.c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=76) [2] r=0 lpr=76 pi=[49,76)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 76 pg[9.c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=76) [2] r=0 lpr=76 pi=[49,76)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000152 1 0.000070
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 76 pg[9.c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=76) [2] r=0 lpr=76 pi=[49,76)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 76 pg[9.c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=76) [2] r=0 lpr=76 pi=[49,76)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000047 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 76 pg[9.c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=76) [2] r=0 lpr=76 pi=[49,76)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000237 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 76 pg[9.c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=76) [2] r=0 lpr=76 pi=[49,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:21.333173+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 76 handle_osd_map epochs [76,77], i have 76, src has [1,77]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 76 handle_osd_map epochs [77,77], i have 77, src has [1,77]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 77 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=76) [2] r=0 lpr=76 pi=[49,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.499671 2 0.000088
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 77 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=76) [2] r=0 lpr=76 pi=[49,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.500029 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 77 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=76) [2] r=0 lpr=76 pi=[49,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.500070 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 77 pg[9.c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=76) [2] r=0 lpr=76 pi=[49,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.499336 2 0.000094
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 77 pg[9.c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=76) [2] r=0 lpr=76 pi=[49,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.499674 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 77 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=76) [2] r=0 lpr=76 pi=[49,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 77 pg[9.c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=76) [2] r=0 lpr=76 pi=[49,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.499712 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 77 pg[9.c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=76) [2] r=0 lpr=76 pi=[49,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.c] failed. State was: unregistering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 77 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=77) [2]/[1] r=-1 lpr=77 pi=[49,77)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 77 pg[9.c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=77) [2]/[1] r=-1 lpr=77 pi=[49,77)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.c] failed. State was: unregistering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 77 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=77) [2]/[1] r=-1 lpr=77 pi=[49,77)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000161 1 0.000270
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 77 pg[9.c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=77) [2]/[1] r=-1 lpr=77 pi=[49,77)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000108 1 0.000226
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 77 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=77) [2]/[1] r=-1 lpr=77 pi=[49,77)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 77 pg[9.c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=77) [2]/[1] r=-1 lpr=77 pi=[49,77)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 77 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=77) [2]/[1] r=-1 lpr=77 pi=[49,77)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 77 pg[9.c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=77) [2]/[1] r=-1 lpr=77 pi=[49,77)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 77 pg[9.c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=77) [2]/[1] r=-1 lpr=77 pi=[49,77)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 77 pg[9.c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=77) [2]/[1] r=-1 lpr=77 pi=[49,77)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000017 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 77 pg[9.c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=77) [2]/[1] r=-1 lpr=77 pi=[49,77)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.c] failed. State was: unregistering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 77 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=77) [2]/[1] r=-1 lpr=77 pi=[49,77)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 77 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=77) [2]/[1] r=-1 lpr=77 pi=[49,77)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000329 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 77 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=77) [2]/[1] r=-1 lpr=77 pi=[49,77)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 596226 data_alloc: 218103808 data_used: 73728
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 131072 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:22.333421+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 77 handle_osd_map epochs [77,78], i have 77, src has [1,78]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 78 pg[9.1c( v 41'581 lc 0'0 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=77) [2]/[1] r=-1 lpr=77 pi=[49,77)/1 crt=41'581 mlcod 0'0 remapped NOTIFY m=7 mbc={}] exit Started/Stray 1.005840 6 0.000395
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 78 pg[9.1c( v 41'581 lc 0'0 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=77) [2]/[1] r=-1 lpr=77 pi=[49,77)/1 crt=41'581 mlcod 0'0 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 78 pg[9.1c( v 41'581 lc 0'0 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=77) [2]/[1] r=-1 lpr=77 pi=[49,77)/1 crt=41'581 mlcod 0'0 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 78 pg[9.c( v 41'581 lc 0'0 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=77) [2]/[1] r=-1 lpr=77 pi=[49,77)/1 crt=41'581 mlcod 0'0 remapped NOTIFY m=5 mbc={}] exit Started/Stray 1.006226 6 0.000088
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 78 pg[9.c( v 41'581 lc 0'0 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=77) [2]/[1] r=-1 lpr=77 pi=[49,77)/1 crt=41'581 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 78 pg[9.c( v 41'581 lc 0'0 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=77) [2]/[1] r=-1 lpr=77 pi=[49,77)/1 crt=41'581 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: not registered w/ OSD
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.c] failed. State was: not registered w/ OSD
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 78 pg[9.1c( v 41'581 lc 40'221 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=77) [2]/[1] r=-1 lpr=77 pi=[49,77)/1 luod=0'0 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.009666 3 0.000122
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 78 pg[9.1c( v 41'581 lc 40'221 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=77) [2]/[1] r=-1 lpr=77 pi=[49,77)/1 luod=0'0 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 78 pg[9.1c( v 41'581 lc 40'221 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=77) [2]/[1] r=-1 lpr=77 pi=[49,77)/1 luod=0'0 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000104 1 0.000065
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 78 pg[9.1c( v 41'581 lc 40'221 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=77) [2]/[1] r=-1 lpr=77 pi=[49,77)/1 luod=0'0 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepRecovering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 78 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=77) [2]/[1] r=-1 lpr=77 pi=[49,77)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.056764 1 0.000053
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 78 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=77) [2]/[1] r=-1 lpr=77 pi=[49,77)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 78 pg[9.c( v 41'581 lc 40'119 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=77) [2]/[1] r=-1 lpr=77 pi=[49,77)/1 luod=0'0 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.066752 3 0.000106
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 78 pg[9.c( v 41'581 lc 40'119 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=77) [2]/[1] r=-1 lpr=77 pi=[49,77)/1 luod=0'0 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 78 pg[9.c( v 41'581 lc 40'119 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=77) [2]/[1] r=-1 lpr=77 pi=[49,77)/1 luod=0'0 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000363 1 0.000101
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 78 pg[9.c( v 41'581 lc 40'119 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=77) [2]/[1] r=-1 lpr=77 pi=[49,77)/1 luod=0'0 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 78 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=77) [2]/[1] r=-1 lpr=77 pi=[49,77)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.045081 1 0.000051
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 78 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=77) [2]/[1] r=-1 lpr=77 pi=[49,77)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 63725568 unmapped: 1228800 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 78 heartbeat osd_stat(store_statfs(0x4fe0e9000/0x0/0x4ffc00000, data 0x6e108/0xe2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:23.333602+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 45 sent 43 num 2 unsent 2 sending 2
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:28:52.501070+0000 osd.2 (osd.2) 44 : cluster [DBG] 5.1c scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:28:52.515170+0000 osd.2 (osd.2) 45 : cluster [DBG] 5.1c scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 45) v1
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:28:52.501070+0000 osd.2 (osd.2) 44 : cluster [DBG] 5.1c scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:28:52.515170+0000 osd.2 (osd.2) 45 : cluster [DBG] 5.1c scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 78 handle_osd_map epochs [79,79], i have 78, src has [1,79]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 79 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=77) [2]/[1] r=-1 lpr=77 pi=[49,77)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.964495 1 0.000047
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 79 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=77) [2]/[1] r=-1 lpr=77 pi=[49,77)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.031150 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 79 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=77) [2]/[1] r=-1 lpr=77 pi=[49,77)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] exit Started 2.037372 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 79 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=77) [2]/[1] r=-1 lpr=77 pi=[49,77)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] enter Reset
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 79 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=77) [2]/[1] r=-1 lpr=77 pi=[49,77)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.918811 1 0.000042
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 79 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=77) [2]/[1] r=-1 lpr=77 pi=[49,77)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.031114 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 79 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=77) [2]/[1] r=-1 lpr=77 pi=[49,77)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] exit Started 2.037398 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 79 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=77) [2]/[1] r=-1 lpr=77 pi=[49,77)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] enter Reset
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 79 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=79) [2] r=0 lpr=79 pi=[49,79)/1 luod=0'0 crt=41'581 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 79 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=79) [2] r=0 lpr=79 pi=[49,79)/1 luod=0'0 crt=41'581 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 79 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=79) [2] r=0 lpr=79 pi=[49,79)/1 crt=41'581 mlcod 0'0 unknown mbc={}] exit Reset 0.000074 1 0.000121
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 79 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=79) [2] r=0 lpr=79 pi=[49,79)/1 crt=41'581 mlcod 0'0 unknown mbc={}] enter Started
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 79 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=79) [2] r=0 lpr=79 pi=[49,79)/1 crt=41'581 mlcod 0'0 unknown mbc={}] enter Start
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 79 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=79) [2] r=0 lpr=79 pi=[49,79)/1 crt=41'581 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 79 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=79) [2] r=0 lpr=79 pi=[49,79)/1 crt=41'581 mlcod 0'0 unknown mbc={}] exit Start 0.000008 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 79 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=79) [2] r=0 lpr=79 pi=[49,79)/1 crt=41'581 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 79 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=79) [2] r=0 lpr=79 pi=[49,79)/1 crt=41'581 mlcod 0'0 unknown mbc={}] exit Reset 0.000075 1 0.000111
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 79 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=79) [2] r=0 lpr=79 pi=[49,79)/1 crt=41'581 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 79 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=79) [2] r=0 lpr=79 pi=[49,79)/1 crt=41'581 mlcod 0'0 unknown mbc={}] enter Started
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 79 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=79) [2] r=0 lpr=79 pi=[49,79)/1 crt=41'581 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 79 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=79) [2] r=0 lpr=79 pi=[49,79)/1 crt=41'581 mlcod 0'0 unknown mbc={}] enter Start
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 79 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=79) [2] r=0 lpr=79 pi=[49,79)/1 crt=41'581 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 79 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=79) [2] r=0 lpr=79 pi=[49,79)/1 crt=41'581 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 79 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=79) [2] r=0 lpr=79 pi=[49,79)/1 crt=41'581 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 79 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=79) [2] r=0 lpr=79 pi=[49,79)/1 crt=41'581 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 79 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=79) [2] r=0 lpr=79 pi=[49,79)/1 crt=41'581 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 79 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=79) [2] r=0 lpr=79 pi=[49,79)/1 crt=41'581 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000038 1 0.000048
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 79 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=79) [2] r=0 lpr=79 pi=[49,79)/1 crt=41'581 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 79 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=79) [2] r=0 lpr=79 pi=[49,79)/1 crt=41'581 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000032 1 0.000037
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 79 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=0/0 n=7 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=79) [2] r=0 lpr=79 pi=[49,79)/1 crt=41'581 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: merge_log_dups log.dups.size()=0 olog.dups.size()=15
Oct 11 04:55:07 compute-0 ceph-osd[89565]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=15
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 79 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=77/78 n=6 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=79) [2] r=0 lpr=79 pi=[49,79)/1 crt=41'581 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001452 3 0.000051
Oct 11 04:55:07 compute-0 ceph-osd[89565]: merge_log_dups log.dups.size()=0 olog.dups.size()=10
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 79 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=77/78 n=6 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=79) [2] r=0 lpr=79 pi=[49,79)/1 crt=41'581 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct 11 04:55:07 compute-0 ceph-osd[89565]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=10
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 79 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=77/78 n=6 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=79) [2] r=0 lpr=79 pi=[49,79)/1 crt=41'581 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 79 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=77/78 n=6 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=79) [2] r=0 lpr=79 pi=[49,79)/1 crt=41'581 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 79 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=77/78 n=7 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=79) [2] r=0 lpr=79 pi=[49,79)/1 crt=41'581 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001459 3 0.000040
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 79 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=77/78 n=7 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=79) [2] r=0 lpr=79 pi=[49,79)/1 crt=41'581 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 79 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=77/78 n=7 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=79) [2] r=0 lpr=79 pi=[49,79)/1 crt=41'581 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 79 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=77/78 n=7 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=79) [2] r=0 lpr=79 pi=[49,79)/1 crt=41'581 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 63807488 unmapped: 1146880 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:24.333829+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 79 handle_osd_map epochs [79,80], i have 79, src has [1,80]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.138944626s of 10.295907974s, submitted: 38
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 80 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=77/78 n=7 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=79) [2] r=0 lpr=79 pi=[49,79)/1 crt=41'581 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.015681 2 0.000096
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 80 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=77/78 n=7 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=79) [2] r=0 lpr=79 pi=[49,79)/1 crt=41'581 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.017281 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 80 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=77/78 n=7 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=79) [2] r=0 lpr=79 pi=[49,79)/1 crt=41'581 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 80 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=79/80 n=7 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=79) [2] r=0 lpr=79 pi=[49,79)/1 crt=41'581 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 80 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=77/78 n=6 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=79) [2] r=0 lpr=79 pi=[49,79)/1 crt=41'581 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.016458 2 0.000045
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 80 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=77/78 n=6 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=79) [2] r=0 lpr=79 pi=[49,79)/1 crt=41'581 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.018007 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 80 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=77/78 n=6 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=79) [2] r=0 lpr=79 pi=[49,79)/1 crt=41'581 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 80 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=79/80 n=6 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=79) [2] r=0 lpr=79 pi=[49,79)/1 crt=41'581 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 80 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=79/80 n=7 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=79) [2] r=0 lpr=79 pi=[49,79)/1 crt=41'581 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 80 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=79/80 n=7 ec=49/34 lis/c=79/49 les/c/f=80/50/0 sis=79) [2] r=0 lpr=79 pi=[49,79)/1 crt=41'581 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005246 3 0.000198
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 80 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=79/80 n=7 ec=49/34 lis/c=79/49 les/c/f=80/50/0 sis=79) [2] r=0 lpr=79 pi=[49,79)/1 crt=41'581 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 80 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=79/80 n=7 ec=49/34 lis/c=79/49 les/c/f=80/50/0 sis=79) [2] r=0 lpr=79 pi=[49,79)/1 crt=41'581 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000022 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 80 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=79/80 n=7 ec=49/34 lis/c=79/49 les/c/f=80/50/0 sis=79) [2] r=0 lpr=79 pi=[49,79)/1 crt=41'581 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 80 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=79/80 n=6 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=79) [2] r=0 lpr=79 pi=[49,79)/1 crt=41'581 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 80 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=79/80 n=6 ec=49/34 lis/c=79/49 les/c/f=80/50/0 sis=79) [2] r=0 lpr=79 pi=[49,79)/1 crt=41'581 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007778 3 0.000180
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 80 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=79/80 n=6 ec=49/34 lis/c=79/49 les/c/f=80/50/0 sis=79) [2] r=0 lpr=79 pi=[49,79)/1 crt=41'581 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 80 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=79/80 n=6 ec=49/34 lis/c=79/49 les/c/f=80/50/0 sis=79) [2] r=0 lpr=79 pi=[49,79)/1 crt=41'581 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000012 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 80 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=79/80 n=6 ec=49/34 lis/c=79/49 les/c/f=80/50/0 sis=79) [2] r=0 lpr=79 pi=[49,79)/1 crt=41'581 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 63832064 unmapped: 1122304 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 80 handle_osd_map epochs [80,80], i have 80, src has [1,80]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 80 handle_osd_map epochs [80,80], i have 80, src has [1,80]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:25.334008+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 63832064 unmapped: 1122304 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 80 heartbeat osd_stat(store_statfs(0x4fe0df000/0x0/0x4ffc00000, data 0x73645/0xec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:26.334179+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x56328f51f400
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 625283 data_alloc: 218103808 data_used: 86016
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 80 handle_osd_map epochs [81,81], i have 80, src has [1,81]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 81 pg[6.f(unlocked)] enter Initial
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 81 pg[6.f( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=81) [2] r=0 lpr=0 pi=[57,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000079 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 81 pg[6.f( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=81) [2] r=0 lpr=0 pi=[57,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 81 pg[6.f( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=81) [2] r=0 lpr=81 pi=[57,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000018 1 0.000042
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 81 pg[6.f( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=81) [2] r=0 lpr=81 pi=[57,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 81 pg[6.f( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=81) [2] r=0 lpr=81 pi=[57,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 81 pg[6.f( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=81) [2] r=0 lpr=81 pi=[57,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 81 pg[6.f( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=81) [2] r=0 lpr=81 pi=[57,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000029 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 81 pg[6.f( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=81) [2] r=0 lpr=81 pi=[57,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 81 pg[6.f( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=81) [2] r=0 lpr=81 pi=[57,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 81 pg[6.f( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=81) [2] r=0 lpr=81 pi=[57,81)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 81 pg[6.f( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=81) [2] r=0 lpr=81 pi=[57,81)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000232 1 0.000145
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 81 pg[6.f( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=81) [2] r=0 lpr=81 pi=[57,81)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 81 pg[6.f( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=57/58 n=1 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=81) [2] r=0 lpr=81 pi=[57,81)/1 crt=38'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering/GetLog 0.005287 2 0.000084
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 81 pg[6.f( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=57/58 n=1 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=81) [2] r=0 lpr=81 pi=[57,81)/1 crt=38'39 mlcod 0'0 peering m=3 mbc={}] enter Started/Primary/Peering/GetMissing
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 81 pg[6.f( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=57/58 n=1 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=81) [2] r=0 lpr=81 pi=[57,81)/1 crt=38'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 81 pg[6.f( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=57/58 n=1 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=81) [2] r=0 lpr=81 pi=[57,81)/1 crt=38'39 mlcod 0'0 peering m=3 mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 63873024 unmapped: 1081344 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:27.334349+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 81 handle_osd_map epochs [81,82], i have 81, src has [1,82]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 81 handle_osd_map epochs [82,82], i have 82, src has [1,82]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 82 pg[6.f( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=57/58 n=1 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=81) [2] r=0 lpr=81 pi=[57,81)/1 crt=38'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.996007 2 0.000076
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 82 pg[6.f( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=57/58 n=1 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=81) [2] r=0 lpr=81 pi=[57,81)/1 crt=38'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering 1.001606 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 82 pg[6.f( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=57/58 n=1 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=81) [2] r=0 lpr=81 pi=[57,81)/1 crt=38'39 mlcod 0'0 unknown m=3 mbc={}] enter Started/Primary/Active
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 82 pg[6.f( v 38'39 lc 34'1 (0'0,38'39] local-lis/les=81/82 n=1 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=81) [2] r=0 lpr=81 pi=[57,81)/1 crt=38'39 lcod 0'0 mlcod 0'0 activating+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/Activating
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 82 pg[6.f( v 38'39 lc 34'1 (0'0,38'39] local-lis/les=81/82 n=1 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=81) [2] r=0 lpr=81 pi=[57,81)/1 crt=38'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 82 pg[6.f( v 38'39 lc 34'1 (0'0,38'39] local-lis/les=81/82 n=1 ec=47/21 lis/c=81/57 les/c/f=82/58/0 sis=81) [2] r=0 lpr=81 pi=[57,81)/1 crt=38'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/Activating 0.003320 4 0.000164
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 82 pg[6.f( v 38'39 lc 34'1 (0'0,38'39] local-lis/les=81/82 n=1 ec=47/21 lis/c=81/57 les/c/f=82/58/0 sis=81) [2] r=0 lpr=81 pi=[57,81)/1 crt=38'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 82 pg[6.f( v 38'39 lc 34'1 (0'0,38'39] local-lis/les=81/82 n=1 ec=47/21 lis/c=81/57 les/c/f=82/58/0 sis=81) [2] r=0 lpr=81 pi=[57,81)/1 crt=38'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000078 1 0.000099
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 82 pg[6.f( v 38'39 lc 34'1 (0'0,38'39] local-lis/les=81/82 n=1 ec=47/21 lis/c=81/57 les/c/f=82/58/0 sis=81) [2] r=0 lpr=81 pi=[57,81)/1 crt=38'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 82 pg[6.f( v 38'39 lc 34'1 (0'0,38'39] local-lis/les=81/82 n=1 ec=47/21 lis/c=81/57 les/c/f=82/58/0 sis=81) [2] r=0 lpr=81 pi=[57,81)/1 crt=38'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000006 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 82 pg[6.f( v 38'39 lc 34'1 (0'0,38'39] local-lis/les=81/82 n=1 ec=47/21 lis/c=81/57 les/c/f=82/58/0 sis=81) [2] r=0 lpr=81 pi=[57,81)/1 crt=38'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/Recovering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 63930368 unmapped: 1024000 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 82 pg[6.f( v 38'39 (0'0,38'39] local-lis/les=81/82 n=1 ec=47/21 lis/c=81/57 les/c/f=82/58/0 sis=81) [2] r=0 lpr=81 pi=[57,81)/1 crt=38'39 mlcod 38'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.126562 2 0.000059
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 82 pg[6.f( v 38'39 (0'0,38'39] local-lis/les=81/82 n=1 ec=47/21 lis/c=81/57 les/c/f=82/58/0 sis=81) [2] r=0 lpr=81 pi=[57,81)/1 crt=38'39 mlcod 38'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 82 pg[6.f( v 38'39 (0'0,38'39] local-lis/les=81/82 n=1 ec=47/21 lis/c=81/57 les/c/f=82/58/0 sis=81) [2] r=0 lpr=81 pi=[57,81)/1 crt=38'39 mlcod 38'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000015 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 82 pg[6.f( v 38'39 (0'0,38'39] local-lis/les=81/82 n=1 ec=47/21 lis/c=81/57 les/c/f=82/58/0 sis=81) [2] r=0 lpr=81 pi=[57,81)/1 crt=38'39 mlcod 38'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:28.334479+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 82 handle_osd_map epochs [82,83], i have 82, src has [1,83]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 64028672 unmapped: 925696 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 83 heartbeat osd_stat(store_statfs(0x4fe0da000/0x0/0x4ffc00000, data 0x76fec/0xf4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:29.334625+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 47 sent 45 num 2 unsent 2 sending 2
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:28:58.451018+0000 osd.2 (osd.2) 46 : cluster [DBG] 5.1f scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:28:58.465136+0000 osd.2 (osd.2) 47 : cluster [DBG] 5.1f scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 10.3 scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 10.3 scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 47) v1
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:28:58.451018+0000 osd.2 (osd.2) 46 : cluster [DBG] 5.1f scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:28:58.465136+0000 osd.2 (osd.2) 47 : cluster [DBG] 5.1f scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 64028672 unmapped: 925696 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:30.334805+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 49 sent 47 num 2 unsent 2 sending 2
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:28:59.443552+0000 osd.2 (osd.2) 48 : cluster [DBG] 10.3 scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:28:59.461155+0000 osd.2 (osd.2) 49 : cluster [DBG] 10.3 scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 49) v1
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:28:59.443552+0000 osd.2 (osd.2) 48 : cluster [DBG] 10.3 scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:28:59.461155+0000 osd.2 (osd.2) 49 : cluster [DBG] 10.3 scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 64036864 unmapped: 917504 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 83 heartbeat osd_stat(store_statfs(0x4fe0d6000/0x0/0x4ffc00000, data 0x78b69/0xf7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 83 handle_osd_map epochs [84,84], i have 83, src has [1,84]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:31.334994+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 645317 data_alloc: 218103808 data_used: 94208
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 64036864 unmapped: 917504 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:32.335133+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 10.5 scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 10.5 scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 64028672 unmapped: 925696 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 84 handle_osd_map epochs [84,85], i have 84, src has [1,85]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:33.335307+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 51 sent 49 num 2 unsent 2 sending 2
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:29:02.439312+0000 osd.2 (osd.2) 50 : cluster [DBG] 10.5 scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:29:02.453385+0000 osd.2 (osd.2) 51 : cluster [DBG] 10.5 scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 10.a scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 10.a scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 64077824 unmapped: 876544 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 51) v1
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:29:02.439312+0000 osd.2 (osd.2) 50 : cluster [DBG] 10.5 scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:29:02.453385+0000 osd.2 (osd.2) 51 : cluster [DBG] 10.5 scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:34.335619+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 53 sent 51 num 2 unsent 2 sending 2
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:29:03.454320+0000 osd.2 (osd.2) 52 : cluster [DBG] 10.a scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:29:03.468408+0000 osd.2 (osd.2) 53 : cluster [DBG] 10.a scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 64086016 unmapped: 868352 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 53) v1
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:29:03.454320+0000 osd.2 (osd.2) 52 : cluster [DBG] 10.a scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:29:03.468408+0000 osd.2 (osd.2) 53 : cluster [DBG] 10.a scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 85 handle_osd_map epochs [85,86], i have 85, src has [1,86]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.051004410s of 10.169565201s, submitted: 29
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:35.336036+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 64094208 unmapped: 860160 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:36.336779+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 86 heartbeat osd_stat(store_statfs(0x4fe0cd000/0x0/0x4ffc00000, data 0x7dde0/0x100000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 86 pg[9.13(unlocked)] enter Initial
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 86 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=86) [2] r=0 lpr=0 pi=[57,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000140 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 86 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=86) [2] r=0 lpr=0 pi=[57,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 86 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=86) [2] r=0 lpr=86 pi=[57,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000030 1 0.000064
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 86 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=86) [2] r=0 lpr=86 pi=[57,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 86 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=86) [2] r=0 lpr=86 pi=[57,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 86 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=86) [2] r=0 lpr=86 pi=[57,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 86 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=86) [2] r=0 lpr=86 pi=[57,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000023 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 86 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=86) [2] r=0 lpr=86 pi=[57,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 86 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=86) [2] r=0 lpr=86 pi=[57,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 86 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=86) [2] r=0 lpr=86 pi=[57,86)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 86 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=86) [2] r=0 lpr=86 pi=[57,86)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000229 1 0.000116
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 86 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=86) [2] r=0 lpr=86 pi=[57,86)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 86 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=86) [2] r=0 lpr=86 pi=[57,86)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000109 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 86 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=86) [2] r=0 lpr=86 pi=[57,86)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000372 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 86 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=86) [2] r=0 lpr=86 pi=[57,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 10.c scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 10.c scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 655195 data_alloc: 218103808 data_used: 98304
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 64110592 unmapped: 843776 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 86 handle_osd_map epochs [86,87], i have 86, src has [1,87]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 86 handle_osd_map epochs [87,87], i have 87, src has [1,87]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 87 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=86) [2] r=0 lpr=86 pi=[57,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.441159 2 0.000165
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 87 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=86) [2] r=0 lpr=86 pi=[57,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.441572 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 87 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=86) [2] r=0 lpr=86 pi=[57,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.441629 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 87 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=86) [2] r=0 lpr=86 pi=[57,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.13] failed. State was: unregistering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 87 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=87) [2]/[0] r=-1 lpr=87 pi=[57,87)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.13] failed. State was: unregistering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 87 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=87) [2]/[0] r=-1 lpr=87 pi=[57,87)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000076 1 0.000115
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 87 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=87) [2]/[0] r=-1 lpr=87 pi=[57,87)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 87 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=87) [2]/[0] r=-1 lpr=87 pi=[57,87)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 87 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=87) [2]/[0] r=-1 lpr=87 pi=[57,87)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 87 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=87) [2]/[0] r=-1 lpr=87 pi=[57,87)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000007 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 87 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=87) [2]/[0] r=-1 lpr=87 pi=[57,87)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.13] failed. State was: unregistering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:37.337032+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 55 sent 53 num 2 unsent 2 sending 2
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:29:06.481841+0000 osd.2 (osd.2) 54 : cluster [DBG] 10.c scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:29:06.495991+0000 osd.2 (osd.2) 55 : cluster [DBG] 10.c scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 64151552 unmapped: 802816 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 87 handle_osd_map epochs [87,88], i have 87, src has [1,88]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 88 pg[9.13( v 41'581 lc 0'0 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=87) [2]/[0] r=-1 lpr=87 pi=[57,87)/1 crt=41'581 mlcod 0'0 remapped NOTIFY m=5 mbc={}] exit Started/Stray 0.999261 6 0.000043
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 88 pg[9.13( v 41'581 lc 0'0 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=87) [2]/[0] r=-1 lpr=87 pi=[57,87)/1 crt=41'581 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 88 pg[9.13( v 41'581 lc 0'0 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=87) [2]/[0] r=-1 lpr=87 pi=[57,87)/1 crt=41'581 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.13] failed. State was: unregistering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 88 pg[9.13( v 41'581 lc 40'184 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=87/57 les/c/f=88/58/0 sis=87) [2]/[0] r=-1 lpr=87 pi=[57,87)/1 luod=0'0 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.004464 3 0.000130
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 88 pg[9.13( v 41'581 lc 40'184 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=87/57 les/c/f=88/58/0 sis=87) [2]/[0] r=-1 lpr=87 pi=[57,87)/1 luod=0'0 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 88 pg[9.13( v 41'581 lc 40'184 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=87/57 les/c/f=88/58/0 sis=87) [2]/[0] r=-1 lpr=87 pi=[57,87)/1 luod=0'0 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000081 1 0.000063
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 88 pg[9.13( v 41'581 lc 40'184 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=87/57 les/c/f=88/58/0 sis=87) [2]/[0] r=-1 lpr=87 pi=[57,87)/1 luod=0'0 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 55) v1
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:29:06.481841+0000 osd.2 (osd.2) 54 : cluster [DBG] 10.c scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:29:06.495991+0000 osd.2 (osd.2) 55 : cluster [DBG] 10.c scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 88 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=87/57 les/c/f=88/58/0 sis=87) [2]/[0] r=-1 lpr=87 pi=[57,87)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.043229 1 0.000050
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 88 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=87/57 les/c/f=88/58/0 sis=87) [2]/[0] r=-1 lpr=87 pi=[57,87)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:38.337247+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 88 heartbeat osd_stat(store_statfs(0x4fe0c5000/0x0/0x4ffc00000, data 0x81451/0x107000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 64036864 unmapped: 917504 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 88 handle_osd_map epochs [89,89], i have 88, src has [1,89]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 89 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=87/57 les/c/f=88/58/0 sis=87) [2]/[0] r=-1 lpr=87 pi=[57,87)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.996813 1 0.000079
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 89 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=87/57 les/c/f=88/58/0 sis=87) [2]/[0] r=-1 lpr=87 pi=[57,87)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.044731 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 89 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=87/57 les/c/f=88/58/0 sis=87) [2]/[0] r=-1 lpr=87 pi=[57,87)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] exit Started 2.044039 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 89 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=87/57 les/c/f=88/58/0 sis=87) [2]/[0] r=-1 lpr=87 pi=[57,87)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] enter Reset
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 89 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=87/57 les/c/f=88/58/0 sis=89) [2] r=0 lpr=89 pi=[57,89)/1 luod=0'0 crt=41'581 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 89 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=87/57 les/c/f=88/58/0 sis=89) [2] r=0 lpr=89 pi=[57,89)/1 crt=41'581 mlcod 0'0 unknown mbc={}] exit Reset 0.000177 1 0.000231
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 89 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=87/57 les/c/f=88/58/0 sis=89) [2] r=0 lpr=89 pi=[57,89)/1 crt=41'581 mlcod 0'0 unknown mbc={}] enter Started
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 89 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=87/57 les/c/f=88/58/0 sis=89) [2] r=0 lpr=89 pi=[57,89)/1 crt=41'581 mlcod 0'0 unknown mbc={}] enter Start
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 89 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=87/57 les/c/f=88/58/0 sis=89) [2] r=0 lpr=89 pi=[57,89)/1 crt=41'581 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 89 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=87/57 les/c/f=88/58/0 sis=89) [2] r=0 lpr=89 pi=[57,89)/1 crt=41'581 mlcod 0'0 unknown mbc={}] exit Start 0.000018 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 89 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=87/57 les/c/f=88/58/0 sis=89) [2] r=0 lpr=89 pi=[57,89)/1 crt=41'581 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 89 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=87/57 les/c/f=88/58/0 sis=89) [2] r=0 lpr=89 pi=[57,89)/1 crt=41'581 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 89 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=87/57 les/c/f=88/58/0 sis=89) [2] r=0 lpr=89 pi=[57,89)/1 crt=41'581 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 89 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=87/57 les/c/f=88/58/0 sis=89) [2] r=0 lpr=89 pi=[57,89)/1 crt=41'581 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000063 1 0.000100
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 89 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=87/57 les/c/f=88/58/0 sis=89) [2] r=0 lpr=89 pi=[57,89)/1 crt=41'581 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: merge_log_dups log.dups.size()=0 olog.dups.size()=11
Oct 11 04:55:07 compute-0 ceph-osd[89565]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=11
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 89 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=87/88 n=6 ec=49/34 lis/c=87/57 les/c/f=88/58/0 sis=89) [2] r=0 lpr=89 pi=[57,89)/1 crt=41'581 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001472 3 0.000079
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 89 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=87/88 n=6 ec=49/34 lis/c=87/57 les/c/f=88/58/0 sis=89) [2] r=0 lpr=89 pi=[57,89)/1 crt=41'581 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 89 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=87/88 n=6 ec=49/34 lis/c=87/57 les/c/f=88/58/0 sis=89) [2] r=0 lpr=89 pi=[57,89)/1 crt=41'581 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000010 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 89 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=87/88 n=6 ec=49/34 lis/c=87/57 les/c/f=88/58/0 sis=89) [2] r=0 lpr=89 pi=[57,89)/1 crt=41'581 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:39.337414+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 89 handle_osd_map epochs [89,90], i have 89, src has [1,90]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 89 handle_osd_map epochs [90,90], i have 90, src has [1,90]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 90 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=87/88 n=6 ec=49/34 lis/c=87/57 les/c/f=88/58/0 sis=89) [2] r=0 lpr=89 pi=[57,89)/1 crt=41'581 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.887945 2 0.000101
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 90 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=87/88 n=6 ec=49/34 lis/c=87/57 les/c/f=88/58/0 sis=89) [2] r=0 lpr=89 pi=[57,89)/1 crt=41'581 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.889624 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 90 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=87/88 n=6 ec=49/34 lis/c=87/57 les/c/f=88/58/0 sis=89) [2] r=0 lpr=89 pi=[57,89)/1 crt=41'581 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 90 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=89/90 n=6 ec=49/34 lis/c=87/57 les/c/f=88/58/0 sis=89) [2] r=0 lpr=89 pi=[57,89)/1 crt=41'581 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 90 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=89/90 n=6 ec=49/34 lis/c=87/57 les/c/f=88/58/0 sis=89) [2] r=0 lpr=89 pi=[57,89)/1 crt=41'581 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 90 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=89/90 n=6 ec=49/34 lis/c=89/57 les/c/f=90/58/0 sis=89) [2] r=0 lpr=89 pi=[57,89)/1 crt=41'581 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009494 4 0.000311
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 90 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=89/90 n=6 ec=49/34 lis/c=89/57 les/c/f=90/58/0 sis=89) [2] r=0 lpr=89 pi=[57,89)/1 crt=41'581 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 90 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=89/90 n=6 ec=49/34 lis/c=89/57 les/c/f=90/58/0 sis=89) [2] r=0 lpr=89 pi=[57,89)/1 crt=41'581 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000021 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 90 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=89/90 n=6 ec=49/34 lis/c=89/57 les/c/f=90/58/0 sis=89) [2] r=0 lpr=89 pi=[57,89)/1 crt=41'581 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 64053248 unmapped: 901120 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:40.337545+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 64061440 unmapped: 892928 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 90 heartbeat osd_stat(store_statfs(0x4fe0bd000/0x0/0x4ffc00000, data 0x84a58/0x10d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 90 handle_osd_map epochs [91,91], i have 90, src has [1,91]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 91 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=67) [2] r=0 lpr=67 crt=41'581 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 38.650301 69 0.000231
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 91 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=67) [2] r=0 lpr=67 crt=41'581 mlcod 0'0 active mbc={}] exit Started/Primary/Active 38.656045 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 91 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=67) [2] r=0 lpr=67 crt=41'581 mlcod 0'0 active mbc={}] exit Started/Primary 39.663641 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 91 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=67) [2] r=0 lpr=67 crt=41'581 mlcod 0'0 active mbc={}] exit Started 39.663686 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 91 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=67) [2] r=0 lpr=67 crt=41'581 mlcod 0'0 active mbc={}] enter Reset
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 91 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=91 pruub=9.350275040s) [0] r=-1 lpr=91 pi=[67,91)/1 crt=41'581 mlcod 0'0 active pruub 139.084411621s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 91 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=91 pruub=9.349936485s) [0] r=-1 lpr=91 pi=[67,91)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 139.084411621s@ mbc={}] exit Reset 0.000382 1 0.000485
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 91 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=91 pruub=9.349936485s) [0] r=-1 lpr=91 pi=[67,91)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 139.084411621s@ mbc={}] enter Started
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 91 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=91 pruub=9.349936485s) [0] r=-1 lpr=91 pi=[67,91)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 139.084411621s@ mbc={}] enter Start
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 91 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=91 pruub=9.349936485s) [0] r=-1 lpr=91 pi=[67,91)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 139.084411621s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 91 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=91 pruub=9.349936485s) [0] r=-1 lpr=91 pi=[67,91)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 139.084411621s@ mbc={}] exit Start 0.000163 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 91 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=91 pruub=9.349936485s) [0] r=-1 lpr=91 pi=[67,91)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 139.084411621s@ mbc={}] enter Started/Stray
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:41.337666+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 57 sent 55 num 2 unsent 2 sending 2
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:29:10.566566+0000 osd.2 (osd.2) 56 : cluster [DBG] 10.18 scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:29:10.580956+0000 osd.2 (osd.2) 57 : cluster [DBG] 10.18 scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 679401 data_alloc: 218103808 data_used: 110592
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 64069632 unmapped: 884736 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 91 handle_osd_map epochs [91,92], i have 91, src has [1,92]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 92 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=91) [0] r=-1 lpr=91 pi=[67,91)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.904436 3 0.000379
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 92 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=91) [0] r=-1 lpr=91 pi=[67,91)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.904725 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 92 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=91) [0] r=-1 lpr=91 pi=[67,91)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 92 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=92) [0]/[2] r=0 lpr=92 pi=[67,92)/1 crt=41'581 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 92 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=92) [0]/[2] r=0 lpr=92 pi=[67,92)/1 crt=41'581 mlcod 0'0 remapped mbc={}] exit Reset 0.000159 1 0.000210
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 92 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=92) [0]/[2] r=0 lpr=92 pi=[67,92)/1 crt=41'581 mlcod 0'0 remapped mbc={}] enter Started
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 92 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=92) [0]/[2] r=0 lpr=92 pi=[67,92)/1 crt=41'581 mlcod 0'0 remapped mbc={}] enter Start
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 92 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=92) [0]/[2] r=0 lpr=92 pi=[67,92)/1 crt=41'581 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 92 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=92) [0]/[2] r=0 lpr=92 pi=[67,92)/1 crt=41'581 mlcod 0'0 remapped mbc={}] exit Start 0.000017 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 92 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=92) [0]/[2] r=0 lpr=92 pi=[67,92)/1 crt=41'581 mlcod 0'0 remapped mbc={}] enter Started/Primary
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 92 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=92) [0]/[2] r=0 lpr=92 pi=[67,92)/1 crt=41'581 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 92 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=92) [0]/[2] r=0 lpr=92 pi=[67,92)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 57) v1
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:29:10.566566+0000 osd.2 (osd.2) 56 : cluster [DBG] 10.18 scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:29:10.580956+0000 osd.2 (osd.2) 57 : cluster [DBG] 10.18 scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 92 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=92) [0]/[2] r=0 lpr=92 pi=[67,92)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.004818 2 0.000093
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 92 handle_osd_map epochs [92,92], i have 92, src has [1,92]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 92 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=92) [0]/[2] r=0 lpr=92 pi=[67,92)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 92 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=92) [0]/[2] async=[0] r=0 lpr=92 pi=[67,92)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000385 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 92 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=92) [0]/[2] async=[0] r=0 lpr=92 pi=[67,92)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 92 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=92) [0]/[2] async=[0] r=0 lpr=92 pi=[67,92)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000087 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 92 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=92) [0]/[2] async=[0] r=0 lpr=92 pi=[67,92)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:42.337967+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 64077824 unmapped: 876544 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 92 handle_osd_map epochs [92,93], i have 92, src has [1,93]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 92 handle_osd_map epochs [92,93], i have 93, src has [1,93]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 93 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=92) [0]/[2] async=[0] r=0 lpr=92 pi=[67,92)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.997752 3 0.000795
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 93 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=92) [0]/[2] async=[0] r=0 lpr=92 pi=[67,92)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.003299 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 93 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=92) [0]/[2] async=[0] r=0 lpr=92 pi=[67,92)/1 crt=41'581 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 93 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=92/93 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=92) [0]/[2] async=[0] r=0 lpr=92 pi=[67,92)/1 crt=41'581 mlcod 0'0 activating+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/Activating
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 93 handle_osd_map epochs [93,93], i have 93, src has [1,93]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 93 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=92/93 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=92) [0]/[2] async=[0] r=0 lpr=92 pi=[67,92)/1 crt=41'581 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 93 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=92/93 n=6 ec=49/34 lis/c=92/67 les/c/f=93/68/0 sis=92) [0]/[2] async=[0] r=0 lpr=92 pi=[67,92)/1 crt=41'581 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/Activating 0.155077 5 0.000329
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 93 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=92/93 n=6 ec=49/34 lis/c=92/67 les/c/f=93/68/0 sis=92) [0]/[2] async=[0] r=0 lpr=92 pi=[67,92)/1 crt=41'581 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 93 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=92/93 n=6 ec=49/34 lis/c=92/67 les/c/f=93/68/0 sis=92) [0]/[2] async=[0] r=0 lpr=92 pi=[67,92)/1 crt=41'581 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000163 1 0.000336
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 93 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=92/93 n=6 ec=49/34 lis/c=92/67 les/c/f=93/68/0 sis=92) [0]/[2] async=[0] r=0 lpr=92 pi=[67,92)/1 crt=41'581 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 93 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=92/93 n=6 ec=49/34 lis/c=92/67 les/c/f=93/68/0 sis=92) [0]/[2] async=[0] r=0 lpr=92 pi=[67,92)/1 crt=41'581 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.001874 1 0.000062
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 93 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=92/93 n=6 ec=49/34 lis/c=92/67 les/c/f=93/68/0 sis=92) [0]/[2] async=[0] r=0 lpr=92 pi=[67,92)/1 crt=41'581 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/Recovering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 93 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=92/93 n=6 ec=49/34 lis/c=92/67 les/c/f=93/68/0 sis=92) [0]/[2] async=[0] r=0 lpr=92 pi=[67,92)/1 crt=41'581 mlcod 41'581 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.093304 2 0.000094
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 93 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=92/93 n=6 ec=49/34 lis/c=92/67 les/c/f=93/68/0 sis=92) [0]/[2] async=[0] r=0 lpr=92 pi=[67,92)/1 crt=41'581 mlcod 41'581 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:43.338154+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 59 sent 57 num 2 unsent 2 sending 2
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:29:12.636403+0000 osd.2 (osd.2) 58 : cluster [DBG] 10.1b scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:29:12.650492+0000 osd.2 (osd.2) 59 : cluster [DBG] 10.1b scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 10.1c scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 10.1c scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 64110592 unmapped: 843776 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 93 handle_osd_map epochs [94,94], i have 93, src has [1,94]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 93 handle_osd_map epochs [94,94], i have 94, src has [1,94]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 94 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=92/93 n=6 ec=49/34 lis/c=92/67 les/c/f=93/68/0 sis=92) [0]/[2] async=[0] r=0 lpr=92 pi=[67,92)/1 crt=41'581 mlcod 41'581 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.785018 1 0.000289
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 94 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=92/93 n=6 ec=49/34 lis/c=92/67 les/c/f=93/68/0 sis=92) [0]/[2] async=[0] r=0 lpr=92 pi=[67,92)/1 crt=41'581 mlcod 41'581 active+remapped mbc={255={}}] exit Started/Primary/Active 1.036049 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 94 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=92/93 n=6 ec=49/34 lis/c=92/67 les/c/f=93/68/0 sis=92) [0]/[2] async=[0] r=0 lpr=92 pi=[67,92)/1 crt=41'581 mlcod 41'581 active+remapped mbc={255={}}] exit Started/Primary 2.039382 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 94 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=92/93 n=6 ec=49/34 lis/c=92/67 les/c/f=93/68/0 sis=92) [0]/[2] async=[0] r=0 lpr=92 pi=[67,92)/1 crt=41'581 mlcod 41'581 active+remapped mbc={255={}}] exit Started 2.039488 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 94 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=92/93 n=6 ec=49/34 lis/c=92/67 les/c/f=93/68/0 sis=92) [0]/[2] async=[0] r=0 lpr=92 pi=[67,92)/1 crt=41'581 mlcod 41'581 active+remapped mbc={255={}}] enter Reset
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 94 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=92/93 n=6 ec=49/34 lis/c=92/67 les/c/f=93/68/0 sis=94 pruub=15.118905067s) [0] async=[0] r=-1 lpr=94 pi=[67,94)/1 crt=41'581 mlcod 41'581 active pruub 147.797927856s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 94 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=92/93 n=6 ec=49/34 lis/c=92/67 les/c/f=93/68/0 sis=94 pruub=15.118811607s) [0] r=-1 lpr=94 pi=[67,94)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 147.797927856s@ mbc={}] exit Reset 0.000142 1 0.000287
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 94 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=92/93 n=6 ec=49/34 lis/c=92/67 les/c/f=93/68/0 sis=94 pruub=15.118811607s) [0] r=-1 lpr=94 pi=[67,94)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 147.797927856s@ mbc={}] enter Started
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 94 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=92/93 n=6 ec=49/34 lis/c=92/67 les/c/f=93/68/0 sis=94 pruub=15.118811607s) [0] r=-1 lpr=94 pi=[67,94)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 147.797927856s@ mbc={}] enter Start
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 94 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=92/93 n=6 ec=49/34 lis/c=92/67 les/c/f=93/68/0 sis=94 pruub=15.118811607s) [0] r=-1 lpr=94 pi=[67,94)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 147.797927856s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 94 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=92/93 n=6 ec=49/34 lis/c=92/67 les/c/f=93/68/0 sis=94 pruub=15.118811607s) [0] r=-1 lpr=94 pi=[67,94)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 147.797927856s@ mbc={}] exit Start 0.000014 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 94 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=92/93 n=6 ec=49/34 lis/c=92/67 les/c/f=93/68/0 sis=94 pruub=15.118811607s) [0] r=-1 lpr=94 pi=[67,94)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 147.797927856s@ mbc={}] enter Started/Stray
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 59) v1
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:29:12.636403+0000 osd.2 (osd.2) 58 : cluster [DBG] 10.1b scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:29:12.650492+0000 osd.2 (osd.2) 59 : cluster [DBG] 10.1b scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:44.338368+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 61 sent 59 num 2 unsent 2 sending 2
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:29:13.594836+0000 osd.2 (osd.2) 60 : cluster [DBG] 10.1c scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:29:13.608192+0000 osd.2 (osd.2) 61 : cluster [DBG] 10.1c scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 64126976 unmapped: 827392 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 61) v1
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:29:13.594836+0000 osd.2 (osd.2) 60 : cluster [DBG] 10.1c scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:29:13.608192+0000 osd.2 (osd.2) 61 : cluster [DBG] 10.1c scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:45.338729+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 63 sent 61 num 2 unsent 2 sending 2
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:29:14.619564+0000 osd.2 (osd.2) 62 : cluster [DBG] 10.1d scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:29:14.633753+0000 osd.2 (osd.2) 63 : cluster [DBG] 10.1d scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _renew_subs
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 94 handle_osd_map epochs [95,95], i have 94, src has [1,95]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.561585426s of 10.764840126s, submitted: 44
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 95 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=92/93 n=6 ec=49/34 lis/c=92/67 les/c/f=93/68/0 sis=94) [0] r=-1 lpr=94 pi=[67,94)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.651954 6 0.000151
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 95 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=92/93 n=6 ec=49/34 lis/c=92/67 les/c/f=93/68/0 sis=94) [0] r=-1 lpr=94 pi=[67,94)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 95 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=92/93 n=6 ec=49/34 lis/c=92/67 les/c/f=93/68/0 sis=94) [0] r=-1 lpr=94 pi=[67,94)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 95 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=92/93 n=6 ec=49/34 lis/c=92/67 les/c/f=93/68/0 sis=94) [0] r=-1 lpr=94 pi=[67,94)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.001251 2 0.000042
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 95 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=92/93 n=6 ec=49/34 lis/c=92/67 les/c/f=93/68/0 sis=94) [0] r=-1 lpr=94 pi=[67,94)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: not registered w/ OSD
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 95 pg[9.16( v 41'581 (0'0,41'581] lb MIN local-lis/les=92/93 n=6 ec=49/34 lis/c=92/67 les/c/f=93/68/0 sis=94) [0] r=-1 lpr=94 DELETING pi=[67,94)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.052404 2 0.000178
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 95 pg[9.16( v 41'581 (0'0,41'581] lb MIN local-lis/les=92/93 n=6 ec=49/34 lis/c=92/67 les/c/f=93/68/0 sis=94) [0] r=-1 lpr=94 pi=[67,94)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.053766 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 95 pg[9.16( v 41'581 (0'0,41'581] lb MIN local-lis/les=92/93 n=6 ec=49/34 lis/c=92/67 les/c/f=93/68/0 sis=94) [0] r=-1 lpr=94 pi=[67,94)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.705821 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: not registered w/ OSD
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 95 heartbeat osd_stat(store_statfs(0x4fcf13000/0x0/0x4ffc00000, data 0x8b50c/0x119000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 688128 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 63) v1
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:29:14.619564+0000 osd.2 (osd.2) 62 : cluster [DBG] 10.1d scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:29:14.633753+0000 osd.2 (osd.2) 63 : cluster [DBG] 10.1d scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:46.338969+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 685573 data_alloc: 218103808 data_used: 110592
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 688128 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:47.339132+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 10.1f scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 10.1f scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 688128 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 95 handle_osd_map epochs [95,96], i have 95, src has [1,96]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:48.339370+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 65 sent 63 num 2 unsent 2 sending 2
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:29:17.616040+0000 osd.2 (osd.2) 64 : cluster [DBG] 10.1f scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:29:17.633721+0000 osd.2 (osd.2) 65 : cluster [DBG] 10.1f scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 679936 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 65) v1
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:29:17.616040+0000 osd.2 (osd.2) 64 : cluster [DBG] 10.1f scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:29:17.633721+0000 osd.2 (osd.2) 65 : cluster [DBG] 10.1f scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:49.339612+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 679936 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 96 handle_osd_map epochs [96,97], i have 96, src has [1,97]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:50.339805+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 67 sent 65 num 2 unsent 2 sending 2
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:29:19.614440+0000 osd.2 (osd.2) 66 : cluster [DBG] 4.18 scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:29:19.628532+0000 osd.2 (osd.2) 67 : cluster [DBG] 4.18 scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 4.13 deep-scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 4.13 deep-scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 671744 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 97 heartbeat osd_stat(store_statfs(0x4fcf0c000/0x0/0x4ffc00000, data 0x905cb/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 67) v1
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:29:19.614440+0000 osd.2 (osd.2) 66 : cluster [DBG] 4.18 scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:29:19.628532+0000 osd.2 (osd.2) 67 : cluster [DBG] 4.18 scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:51.340034+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 69 sent 67 num 2 unsent 2 sending 2
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:29:20.657774+0000 osd.2 (osd.2) 68 : cluster [DBG] 4.13 deep-scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:29:20.671842+0000 osd.2 (osd.2) 69 : cluster [DBG] 4.13 deep-scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695406 data_alloc: 218103808 data_used: 126976
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 1843200 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 69) v1
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:29:20.657774+0000 osd.2 (osd.2) 68 : cluster [DBG] 4.13 deep-scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:29:20.671842+0000 osd.2 (osd.2) 69 : cluster [DBG] 4.13 deep-scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:52.340315+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 71 sent 69 num 2 unsent 2 sending 2
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:29:21.643563+0000 osd.2 (osd.2) 70 : cluster [DBG] 4.11 scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:29:21.657700+0000 osd.2 (osd.2) 71 : cluster [DBG] 4.11 scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 1843200 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 97 handle_osd_map epochs [98,98], i have 97, src has [1,98]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 98 pg[9.19(unlocked)] enter Initial
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 98 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=98) [2] r=0 lpr=0 pi=[57,98)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000075 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 98 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=98) [2] r=0 lpr=0 pi=[57,98)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 98 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=98) [2] r=0 lpr=98 pi=[57,98)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000020 1 0.000042
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 98 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=98) [2] r=0 lpr=98 pi=[57,98)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 98 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=98) [2] r=0 lpr=98 pi=[57,98)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 98 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=98) [2] r=0 lpr=98 pi=[57,98)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 98 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=98) [2] r=0 lpr=98 pi=[57,98)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000010 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 98 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=98) [2] r=0 lpr=98 pi=[57,98)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 98 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=98) [2] r=0 lpr=98 pi=[57,98)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 98 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=98) [2] r=0 lpr=98 pi=[57,98)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 98 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=98) [2] r=0 lpr=98 pi=[57,98)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000181 1 0.000099
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 98 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=98) [2] r=0 lpr=98 pi=[57,98)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 98 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=98) [2] r=0 lpr=98 pi=[57,98)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000046 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 98 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=98) [2] r=0 lpr=98 pi=[57,98)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000263 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 98 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=98) [2] r=0 lpr=98 pi=[57,98)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 71) v1
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:29:21.643563+0000 osd.2 (osd.2) 70 : cluster [DBG] 4.11 scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:29:21.657700+0000 osd.2 (osd.2) 71 : cluster [DBG] 4.11 scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 98 handle_osd_map epochs [98,99], i have 98, src has [1,99]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 98 handle_osd_map epochs [99,99], i have 99, src has [1,99]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 99 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=98) [2] r=0 lpr=98 pi=[57,98)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.118285 2 0.000107
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 99 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=98) [2] r=0 lpr=98 pi=[57,98)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.118597 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 99 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=98) [2] r=0 lpr=98 pi=[57,98)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.118653 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 99 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=98) [2] r=0 lpr=98 pi=[57,98)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.19] failed. State was: unregistering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 99 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=99) [2]/[0] r=-1 lpr=99 pi=[57,99)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.19] failed. State was: unregistering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 99 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=99) [2]/[0] r=-1 lpr=99 pi=[57,99)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000094 1 0.000146
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 99 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=99) [2]/[0] r=-1 lpr=99 pi=[57,99)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 99 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=99) [2]/[0] r=-1 lpr=99 pi=[57,99)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 99 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=99) [2]/[0] r=-1 lpr=99 pi=[57,99)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 99 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=99) [2]/[0] r=-1 lpr=99 pi=[57,99)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000010 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 99 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=99) [2]/[0] r=-1 lpr=99 pi=[57,99)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.19] failed. State was: unregistering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:53.340562+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 1826816 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:54.340790+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _renew_subs
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 99 handle_osd_map epochs [100,100], i have 99, src has [1,100]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 100 pg[9.19( v 41'581 lc 0'0 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=99) [2]/[0] r=-1 lpr=99 pi=[57,99)/1 crt=41'581 mlcod 0'0 remapped NOTIFY m=11 mbc={}] exit Started/Stray 1.370897 5 0.000145
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 100 pg[9.19( v 41'581 lc 0'0 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=99) [2]/[0] r=-1 lpr=99 pi=[57,99)/1 crt=41'581 mlcod 0'0 remapped NOTIFY m=11 mbc={}] enter Started/ReplicaActive
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 100 pg[9.19( v 41'581 lc 0'0 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=99) [2]/[0] r=-1 lpr=99 pi=[57,99)/1 crt=41'581 mlcod 0'0 remapped NOTIFY m=11 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.19] failed. State was: unregistering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 100 pg[9.19( v 41'581 lc 40'100 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=99/57 les/c/f=100/58/0 sis=99) [2]/[0] r=-1 lpr=99 pi=[57,99)/1 luod=0'0 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped m=11 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.006377 4 0.000251
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 100 pg[9.19( v 41'581 lc 40'100 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=99/57 les/c/f=100/58/0 sis=99) [2]/[0] r=-1 lpr=99 pi=[57,99)/1 luod=0'0 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped m=11 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 100 pg[9.19( v 41'581 lc 40'100 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=99/57 les/c/f=100/58/0 sis=99) [2]/[0] r=-1 lpr=99 pi=[57,99)/1 luod=0'0 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped m=11 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000136 1 0.000133
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 100 pg[9.19( v 41'581 lc 40'100 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=99/57 les/c/f=100/58/0 sis=99) [2]/[0] r=-1 lpr=99 pi=[57,99)/1 luod=0'0 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped m=11 mbc={}] enter Started/ReplicaActive/RepRecovering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 100 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=99/57 les/c/f=100/58/0 sis=99) [2]/[0] r=-1 lpr=99 pi=[57,99)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.094141 1 0.000078
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 100 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=99/57 les/c/f=100/58/0 sis=99) [2]/[0] r=-1 lpr=99 pi=[57,99)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 100 handle_osd_map epochs [100,101], i have 100, src has [1,101]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 101 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=99/57 les/c/f=100/58/0 sis=99) [2]/[0] r=-1 lpr=99 pi=[57,99)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.130299 1 0.000230
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 101 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=99/57 les/c/f=100/58/0 sis=99) [2]/[0] r=-1 lpr=99 pi=[57,99)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 0.231162 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 101 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=99/57 les/c/f=100/58/0 sis=99) [2]/[0] r=-1 lpr=99 pi=[57,99)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] exit Started 1.602121 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 101 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=99/57 les/c/f=100/58/0 sis=99) [2]/[0] r=-1 lpr=99 pi=[57,99)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] enter Reset
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 101 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=99/57 les/c/f=100/58/0 sis=101) [2] r=0 lpr=101 pi=[57,101)/1 luod=0'0 crt=41'581 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 101 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=99/57 les/c/f=100/58/0 sis=101) [2] r=0 lpr=101 pi=[57,101)/1 crt=41'581 mlcod 0'0 unknown mbc={}] exit Reset 0.000599 1 0.000665
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 101 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=99/57 les/c/f=100/58/0 sis=101) [2] r=0 lpr=101 pi=[57,101)/1 crt=41'581 mlcod 0'0 unknown mbc={}] enter Started
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 101 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=99/57 les/c/f=100/58/0 sis=101) [2] r=0 lpr=101 pi=[57,101)/1 crt=41'581 mlcod 0'0 unknown mbc={}] enter Start
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 101 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=99/57 les/c/f=100/58/0 sis=101) [2] r=0 lpr=101 pi=[57,101)/1 crt=41'581 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 101 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=99/57 les/c/f=100/58/0 sis=101) [2] r=0 lpr=101 pi=[57,101)/1 crt=41'581 mlcod 0'0 unknown mbc={}] exit Start 0.000022 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 101 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=99/57 les/c/f=100/58/0 sis=101) [2] r=0 lpr=101 pi=[57,101)/1 crt=41'581 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 101 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=99/57 les/c/f=100/58/0 sis=101) [2] r=0 lpr=101 pi=[57,101)/1 crt=41'581 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 101 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=99/57 les/c/f=100/58/0 sis=101) [2] r=0 lpr=101 pi=[57,101)/1 crt=41'581 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 101 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=99/57 les/c/f=100/58/0 sis=101) [2] r=0 lpr=101 pi=[57,101)/1 crt=41'581 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.002722 2 0.000083
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 101 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=99/57 les/c/f=100/58/0 sis=101) [2] r=0 lpr=101 pi=[57,101)/1 crt=41'581 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 101 handle_osd_map epochs [101,101], i have 101, src has [1,101]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: merge_log_dups log.dups.size()=0olog.dups.size()=39
Oct 11 04:55:07 compute-0 ceph-osd[89565]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=39
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 101 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=99/100 n=6 ec=49/34 lis/c=99/57 les/c/f=100/58/0 sis=101) [2] r=0 lpr=101 pi=[57,101)/1 crt=41'581 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000521 2 0.000100
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 101 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=99/100 n=6 ec=49/34 lis/c=99/57 les/c/f=100/58/0 sis=101) [2] r=0 lpr=101 pi=[57,101)/1 crt=41'581 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 101 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=99/100 n=6 ec=49/34 lis/c=99/57 les/c/f=100/58/0 sis=101) [2] r=0 lpr=101 pi=[57,101)/1 crt=41'581 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000016 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 101 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=99/100 n=6 ec=49/34 lis/c=99/57 les/c/f=100/58/0 sis=101) [2] r=0 lpr=101 pi=[57,101)/1 crt=41'581 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 1818624 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:55.341047+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 101 heartbeat osd_stat(store_statfs(0x4fcefc000/0x0/0x4ffc00000, data 0x97201/0x12e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 101 handle_osd_map epochs [101,102], i have 101, src has [1,102]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.985859871s of 10.188960075s, submitted: 95
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 101 handle_osd_map epochs [102,102], i have 102, src has [1,102]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 102 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=99/100 n=6 ec=49/34 lis/c=99/57 les/c/f=100/58/0 sis=101) [2] r=0 lpr=101 pi=[57,101)/1 crt=41'581 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.005444 2 0.000150
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 102 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=99/100 n=6 ec=49/34 lis/c=99/57 les/c/f=100/58/0 sis=101) [2] r=0 lpr=101 pi=[57,101)/1 crt=41'581 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.008818 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 102 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=99/100 n=6 ec=49/34 lis/c=99/57 les/c/f=100/58/0 sis=101) [2] r=0 lpr=101 pi=[57,101)/1 crt=41'581 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 102 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=101/102 n=6 ec=49/34 lis/c=99/57 les/c/f=100/58/0 sis=101) [2] r=0 lpr=101 pi=[57,101)/1 crt=41'581 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 102 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=101/102 n=6 ec=49/34 lis/c=99/57 les/c/f=100/58/0 sis=101) [2] r=0 lpr=101 pi=[57,101)/1 crt=41'581 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 102 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=101/102 n=6 ec=49/34 lis/c=101/57 les/c/f=102/58/0 sis=101) [2] r=0 lpr=101 pi=[57,101)/1 crt=41'581 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005550 4 0.000182
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 102 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=101/102 n=6 ec=49/34 lis/c=101/57 les/c/f=102/58/0 sis=101) [2] r=0 lpr=101 pi=[57,101)/1 crt=41'581 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 102 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=101/102 n=6 ec=49/34 lis/c=101/57 les/c/f=102/58/0 sis=101) [2] r=0 lpr=101 pi=[57,101)/1 crt=41'581 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000022 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 102 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=101/102 n=6 ec=49/34 lis/c=101/57 les/c/f=102/58/0 sis=101) [2] r=0 lpr=101 pi=[57,101)/1 crt=41'581 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 64151552 unmapped: 1851392 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 102 heartbeat osd_stat(store_statfs(0x4fcefc000/0x0/0x4ffc00000, data 0x97201/0x12e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:56.341426+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 73 sent 71 num 2 unsent 2 sending 2
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:29:25.565325+0000 osd.2 (osd.2) 72 : cluster [DBG] 4.1b scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:29:25.579390+0000 osd.2 (osd.2) 73 : cluster [DBG] 4.1b scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 4.a scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 4.a scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 730890 data_alloc: 218103808 data_used: 135168
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 73) v1
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:29:25.565325+0000 osd.2 (osd.2) 72 : cluster [DBG] 4.1b scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:29:25.579390+0000 osd.2 (osd.2) 73 : cluster [DBG] 4.1b scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 1826816 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:57.341644+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 75 sent 73 num 2 unsent 2 sending 2
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:29:26.582605+0000 osd.2 (osd.2) 74 : cluster [DBG] 4.a scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:29:26.596707+0000 osd.2 (osd.2) 75 : cluster [DBG] 4.a scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 4.1c deep-scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 4.1c deep-scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 102 handle_osd_map epochs [103,103], i have 102, src has [1,103]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 75) v1
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:29:26.582605+0000 osd.2 (osd.2) 74 : cluster [DBG] 4.a scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:29:26.596707+0000 osd.2 (osd.2) 75 : cluster [DBG] 4.a scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 64200704 unmapped: 1802240 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:58.341834+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 77 sent 75 num 2 unsent 2 sending 2
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:29:27.574047+0000 osd.2 (osd.2) 76 : cluster [DBG] 4.1c deep-scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:29:27.588151+0000 osd.2 (osd.2) 77 : cluster [DBG] 4.1c deep-scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 77) v1
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:29:27.574047+0000 osd.2 (osd.2) 76 : cluster [DBG] 4.1c deep-scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:29:27.588151+0000 osd.2 (osd.2) 77 : cluster [DBG] 4.1c deep-scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 64208896 unmapped: 1794048 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:59.342064+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 103 heartbeat osd_stat(store_statfs(0x4fcef8000/0x0/0x4ffc00000, data 0x9a7b1/0x134000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 103 handle_osd_map epochs [104,104], i have 103, src has [1,104]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 104 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=79/80 n=6 ec=49/34 lis/c=79/79 les/c/f=80/80/0 sis=79) [2] r=0 lpr=79 crt=41'581 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 35.032614 75 0.000217
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 104 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=79/80 n=6 ec=49/34 lis/c=79/79 les/c/f=80/80/0 sis=79) [2] r=0 lpr=79 crt=41'581 mlcod 0'0 active mbc={}] exit Started/Primary/Active 35.040504 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 104 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=79/80 n=6 ec=49/34 lis/c=79/79 les/c/f=80/80/0 sis=79) [2] r=0 lpr=79 crt=41'581 mlcod 0'0 active mbc={}] exit Started/Primary 36.058537 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 104 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=79/80 n=6 ec=49/34 lis/c=79/79 les/c/f=80/80/0 sis=79) [2] r=0 lpr=79 crt=41'581 mlcod 0'0 active mbc={}] exit Started 36.058570 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 104 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=79/80 n=6 ec=49/34 lis/c=79/79 les/c/f=80/80/0 sis=79) [2] r=0 lpr=79 crt=41'581 mlcod 0'0 active mbc={}] enter Reset
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 104 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=79/80 n=6 ec=49/34 lis/c=79/79 les/c/f=80/80/0 sis=104 pruub=12.967244148s) [0] r=-1 lpr=104 pi=[79,104)/1 crt=41'581 mlcod 0'0 active pruub 161.408264160s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 104 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=79/80 n=6 ec=49/34 lis/c=79/79 les/c/f=80/80/0 sis=104 pruub=12.966794014s) [0] r=-1 lpr=104 pi=[79,104)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 161.408264160s@ mbc={}] exit Reset 0.000535 1 0.000812
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 104 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=79/80 n=6 ec=49/34 lis/c=79/79 les/c/f=80/80/0 sis=104 pruub=12.966794014s) [0] r=-1 lpr=104 pi=[79,104)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 161.408264160s@ mbc={}] enter Started
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 104 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=79/80 n=6 ec=49/34 lis/c=79/79 les/c/f=80/80/0 sis=104 pruub=12.966794014s) [0] r=-1 lpr=104 pi=[79,104)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 161.408264160s@ mbc={}] enter Start
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 104 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=79/80 n=6 ec=49/34 lis/c=79/79 les/c/f=80/80/0 sis=104 pruub=12.966794014s) [0] r=-1 lpr=104 pi=[79,104)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 161.408264160s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 104 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=79/80 n=6 ec=49/34 lis/c=79/79 les/c/f=80/80/0 sis=104 pruub=12.966794014s) [0] r=-1 lpr=104 pi=[79,104)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 161.408264160s@ mbc={}] exit Start 0.000199 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 104 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=79/80 n=6 ec=49/34 lis/c=79/79 les/c/f=80/80/0 sis=104 pruub=12.966794014s) [0] r=-1 lpr=104 pi=[79,104)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 161.408264160s@ mbc={}] enter Started/Stray
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 104 handle_osd_map epochs [104,104], i have 104, src has [1,104]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 104 handle_osd_map epochs [105,105], i have 104, src has [1,105]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 105 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=79/80 n=6 ec=49/34 lis/c=79/79 les/c/f=80/80/0 sis=104) [0] r=-1 lpr=104 pi=[79,104)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.089269 3 0.000411
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 105 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=79/80 n=6 ec=49/34 lis/c=79/79 les/c/f=80/80/0 sis=104) [0] r=-1 lpr=104 pi=[79,104)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.089577 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 105 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=79/80 n=6 ec=49/34 lis/c=79/79 les/c/f=80/80/0 sis=104) [0] r=-1 lpr=104 pi=[79,104)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 105 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=79/80 n=6 ec=49/34 lis/c=79/79 les/c/f=80/80/0 sis=105) [0]/[2] r=0 lpr=105 pi=[79,105)/1 crt=41'581 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 105 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=79/80 n=6 ec=49/34 lis/c=79/79 les/c/f=80/80/0 sis=105) [0]/[2] r=0 lpr=105 pi=[79,105)/1 crt=41'581 mlcod 0'0 remapped mbc={}] exit Reset 0.000097 1 0.000140
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 105 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=79/80 n=6 ec=49/34 lis/c=79/79 les/c/f=80/80/0 sis=105) [0]/[2] r=0 lpr=105 pi=[79,105)/1 crt=41'581 mlcod 0'0 remapped mbc={}] enter Started
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 105 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=79/80 n=6 ec=49/34 lis/c=79/79 les/c/f=80/80/0 sis=105) [0]/[2] r=0 lpr=105 pi=[79,105)/1 crt=41'581 mlcod 0'0 remapped mbc={}] enter Start
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 105 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=79/80 n=6 ec=49/34 lis/c=79/79 les/c/f=80/80/0 sis=105) [0]/[2] r=0 lpr=105 pi=[79,105)/1 crt=41'581 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 105 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=79/80 n=6 ec=49/34 lis/c=79/79 les/c/f=80/80/0 sis=105) [0]/[2] r=0 lpr=105 pi=[79,105)/1 crt=41'581 mlcod 0'0 remapped mbc={}] exit Start 0.000012 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 105 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=79/80 n=6 ec=49/34 lis/c=79/79 les/c/f=80/80/0 sis=105) [0]/[2] r=0 lpr=105 pi=[79,105)/1 crt=41'581 mlcod 0'0 remapped mbc={}] enter Started/Primary
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 105 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=79/80 n=6 ec=49/34 lis/c=79/79 les/c/f=80/80/0 sis=105) [0]/[2] r=0 lpr=105 pi=[79,105)/1 crt=41'581 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 105 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=79/80 n=6 ec=49/34 lis/c=79/79 les/c/f=80/80/0 sis=105) [0]/[2] r=0 lpr=105 pi=[79,105)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 105 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=79/80 n=6 ec=49/34 lis/c=79/79 les/c/f=80/80/0 sis=105) [0]/[2] r=0 lpr=105 pi=[79,105)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000045 1 0.000066
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 105 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=79/80 n=6 ec=49/34 lis/c=79/79 les/c/f=80/80/0 sis=105) [0]/[2] r=0 lpr=105 pi=[79,105)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 105 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=79/80 n=6 ec=49/34 lis/c=79/79 les/c/f=80/80/0 sis=105) [0]/[2] async=[0] r=0 lpr=105 pi=[79,105)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000035 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 105 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=79/80 n=6 ec=49/34 lis/c=79/79 les/c/f=80/80/0 sis=105) [0]/[2] async=[0] r=0 lpr=105 pi=[79,105)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 105 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=79/80 n=6 ec=49/34 lis/c=79/79 les/c/f=80/80/0 sis=105) [0]/[2] async=[0] r=0 lpr=105 pi=[79,105)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000012 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 105 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=79/80 n=6 ec=49/34 lis/c=79/79 les/c/f=80/80/0 sis=105) [0]/[2] async=[0] r=0 lpr=105 pi=[79,105)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 64241664 unmapped: 1761280 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 105 heartbeat osd_stat(store_statfs(0x4fcef8000/0x0/0x4ffc00000, data 0x9a7b1/0x134000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:00.342263+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 79 sent 77 num 2 unsent 2 sending 2
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:29:29.557221+0000 osd.2 (osd.2) 78 : cluster [DBG] 4.1a scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:29:29.571251+0000 osd.2 (osd.2) 79 : cluster [DBG] 4.1a scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 4.e scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 4.e scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 105 handle_osd_map epochs [105,106], i have 105, src has [1,106]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 105 handle_osd_map epochs [106,106], i have 106, src has [1,106]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 106 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=79/80 n=6 ec=49/34 lis/c=79/79 les/c/f=80/80/0 sis=105) [0]/[2] async=[0] r=0 lpr=105 pi=[79,105)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.994472 4 0.000139
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 106 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=79/80 n=6 ec=49/34 lis/c=79/79 les/c/f=80/80/0 sis=105) [0]/[2] async=[0] r=0 lpr=105 pi=[79,105)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.994671 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 106 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=79/80 n=6 ec=49/34 lis/c=79/79 les/c/f=80/80/0 sis=105) [0]/[2] async=[0] r=0 lpr=105 pi=[79,105)/1 crt=41'581 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 106 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=105/106 n=6 ec=49/34 lis/c=79/79 les/c/f=80/80/0 sis=105) [0]/[2] async=[0] r=0 lpr=105 pi=[79,105)/1 crt=41'581 mlcod 0'0 activating+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Activating
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 106 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=105/106 n=6 ec=49/34 lis/c=79/79 les/c/f=80/80/0 sis=105) [0]/[2] async=[0] r=0 lpr=105 pi=[79,105)/1 crt=41'581 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 106 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=105/106 n=6 ec=49/34 lis/c=105/79 les/c/f=106/80/0 sis=105) [0]/[2] async=[0] r=0 lpr=105 pi=[79,105)/1 crt=41'581 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/Activating 0.005883 5 0.000370
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 106 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=105/106 n=6 ec=49/34 lis/c=105/79 les/c/f=106/80/0 sis=105) [0]/[2] async=[0] r=0 lpr=105 pi=[79,105)/1 crt=41'581 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 106 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=105/106 n=6 ec=49/34 lis/c=105/79 les/c/f=106/80/0 sis=105) [0]/[2] async=[0] r=0 lpr=105 pi=[79,105)/1 crt=41'581 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000214 1 0.000203
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 106 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=105/106 n=6 ec=49/34 lis/c=105/79 les/c/f=106/80/0 sis=105) [0]/[2] async=[0] r=0 lpr=105 pi=[79,105)/1 crt=41'581 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 106 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=105/106 n=6 ec=49/34 lis/c=105/79 les/c/f=106/80/0 sis=105) [0]/[2] async=[0] r=0 lpr=105 pi=[79,105)/1 crt=41'581 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000576 1 0.000061
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 106 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=105/106 n=6 ec=49/34 lis/c=105/79 les/c/f=106/80/0 sis=105) [0]/[2] async=[0] r=0 lpr=105 pi=[79,105)/1 crt=41'581 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Recovering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 79) v1
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:29:29.557221+0000 osd.2 (osd.2) 78 : cluster [DBG] 4.1a scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:29:29.571251+0000 osd.2 (osd.2) 79 : cluster [DBG] 4.1a scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 106 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=105/106 n=6 ec=49/34 lis/c=105/79 les/c/f=106/80/0 sis=105) [0]/[2] async=[0] r=0 lpr=105 pi=[79,105)/1 crt=41'581 mlcod 41'581 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.063790 2 0.000081
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 106 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=105/106 n=6 ec=49/34 lis/c=105/79 les/c/f=106/80/0 sis=105) [0]/[2] async=[0] r=0 lpr=105 pi=[79,105)/1 crt=41'581 mlcod 41'581 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65314816 unmapped: 688128 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:01.342582+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 81 sent 79 num 2 unsent 2 sending 2
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:29:30.599793+0000 osd.2 (osd.2) 80 : cluster [DBG] 4.e scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:29:30.613914+0000 osd.2 (osd.2) 81 : cluster [DBG] 4.e scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 7.1a deep-scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 7.1a deep-scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 747297 data_alloc: 218103808 data_used: 155648
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 106 handle_osd_map epochs [107,107], i have 106, src has [1,107]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 106 handle_osd_map epochs [107,107], i have 107, src has [1,107]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 107 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=105/106 n=6 ec=49/34 lis/c=105/79 les/c/f=106/80/0 sis=105) [0]/[2] async=[0] r=0 lpr=105 pi=[79,105)/1 crt=41'581 mlcod 41'581 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.993540 1 0.000156
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 107 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=105/106 n=6 ec=49/34 lis/c=105/79 les/c/f=106/80/0 sis=105) [0]/[2] async=[0] r=0 lpr=105 pi=[79,105)/1 crt=41'581 mlcod 41'581 active+remapped mbc={255={}}] exit Started/Primary/Active 1.064372 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 107 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=105/106 n=6 ec=49/34 lis/c=105/79 les/c/f=106/80/0 sis=105) [0]/[2] async=[0] r=0 lpr=105 pi=[79,105)/1 crt=41'581 mlcod 41'581 active+remapped mbc={255={}}] exit Started/Primary 2.059081 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 107 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=105/106 n=6 ec=49/34 lis/c=105/79 les/c/f=106/80/0 sis=105) [0]/[2] async=[0] r=0 lpr=105 pi=[79,105)/1 crt=41'581 mlcod 41'581 active+remapped mbc={255={}}] exit Started 2.059143 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 107 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=105/106 n=6 ec=49/34 lis/c=105/79 les/c/f=106/80/0 sis=105) [0]/[2] async=[0] r=0 lpr=105 pi=[79,105)/1 crt=41'581 mlcod 41'581 active+remapped mbc={255={}}] enter Reset
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 107 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=105/106 n=6 ec=49/34 lis/c=105/79 les/c/f=106/80/0 sis=107 pruub=14.941355705s) [0] async=[0] r=-1 lpr=107 pi=[79,107)/1 crt=41'581 mlcod 41'581 active pruub 165.531799316s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 107 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=105/106 n=6 ec=49/34 lis/c=105/79 les/c/f=106/80/0 sis=107 pruub=14.941193581s) [0] r=-1 lpr=107 pi=[79,107)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 165.531799316s@ mbc={}] exit Reset 0.000218 1 0.000335
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 107 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=105/106 n=6 ec=49/34 lis/c=105/79 les/c/f=106/80/0 sis=107 pruub=14.941193581s) [0] r=-1 lpr=107 pi=[79,107)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 165.531799316s@ mbc={}] enter Started
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 107 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=105/106 n=6 ec=49/34 lis/c=105/79 les/c/f=106/80/0 sis=107 pruub=14.941193581s) [0] r=-1 lpr=107 pi=[79,107)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 165.531799316s@ mbc={}] enter Start
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 107 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=105/106 n=6 ec=49/34 lis/c=105/79 les/c/f=106/80/0 sis=107 pruub=14.941193581s) [0] r=-1 lpr=107 pi=[79,107)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 165.531799316s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 107 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=105/106 n=6 ec=49/34 lis/c=105/79 les/c/f=106/80/0 sis=107 pruub=14.941193581s) [0] r=-1 lpr=107 pi=[79,107)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 165.531799316s@ mbc={}] exit Start 0.000049 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 107 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=105/106 n=6 ec=49/34 lis/c=105/79 les/c/f=106/80/0 sis=107 pruub=14.941193581s) [0] r=-1 lpr=107 pi=[79,107)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 165.531799316s@ mbc={}] enter Started/Stray
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 81) v1
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:29:30.599793+0000 osd.2 (osd.2) 80 : cluster [DBG] 4.e scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:29:30.613914+0000 osd.2 (osd.2) 81 : cluster [DBG] 4.e scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65339392 unmapped: 663552 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:02.342869+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 83 sent 81 num 2 unsent 2 sending 2
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:29:31.587503+0000 osd.2 (osd.2) 82 : cluster [DBG] 7.1a deep-scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:29:31.601493+0000 osd.2 (osd.2) 83 : cluster [DBG] 7.1a deep-scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 107 handle_osd_map epochs [107,108], i have 107, src has [1,108]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 108 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=67) [2] r=0 lpr=67 crt=41'581 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 60.504325 123 0.000322
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 108 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=67) [2] r=0 lpr=67 crt=41'581 mlcod 0'0 active mbc={}] exit Started/Primary/Active 60.513125 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 108 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=67) [2] r=0 lpr=67 crt=41'581 mlcod 0'0 active mbc={}] exit Started/Primary 61.524677 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 108 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=67) [2] r=0 lpr=67 crt=41'581 mlcod 0'0 active mbc={}] exit Started 61.524729 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 108 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=67) [2] r=0 lpr=67 crt=41'581 mlcod 0'0 active mbc={}] enter Reset
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 108 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=108 pruub=11.496529579s) [0] r=-1 lpr=108 pi=[67,108)/1 crt=41'581 mlcod 0'0 active pruub 163.090164185s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 108 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=108 pruub=11.496438026s) [0] r=-1 lpr=108 pi=[67,108)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 163.090164185s@ mbc={}] exit Reset 0.000161 1 0.000235
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 108 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=108 pruub=11.496438026s) [0] r=-1 lpr=108 pi=[67,108)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 163.090164185s@ mbc={}] enter Started
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 108 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=108 pruub=11.496438026s) [0] r=-1 lpr=108 pi=[67,108)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 163.090164185s@ mbc={}] enter Start
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 108 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=108 pruub=11.496438026s) [0] r=-1 lpr=108 pi=[67,108)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 163.090164185s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 108 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=108 pruub=11.496438026s) [0] r=-1 lpr=108 pi=[67,108)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 163.090164185s@ mbc={}] exit Start 0.000020 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 108 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=108 pruub=11.496438026s) [0] r=-1 lpr=108 pi=[67,108)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 163.090164185s@ mbc={}] enter Started/Stray
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 108 handle_osd_map epochs [108,108], i have 108, src has [1,108]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 83) v1
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:29:31.587503+0000 osd.2 (osd.2) 82 : cluster [DBG] 7.1a deep-scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:29:31.601493+0000 osd.2 (osd.2) 83 : cluster [DBG] 7.1a deep-scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65388544 unmapped: 614400 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 108 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=105/106 n=6 ec=49/34 lis/c=105/79 les/c/f=106/80/0 sis=107) [0] r=-1 lpr=107 pi=[79,107)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.031074 7 0.000216
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 108 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=105/106 n=6 ec=49/34 lis/c=105/79 les/c/f=106/80/0 sis=107) [0] r=-1 lpr=107 pi=[79,107)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 108 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=105/106 n=6 ec=49/34 lis/c=105/79 les/c/f=106/80/0 sis=107) [0] r=-1 lpr=107 pi=[79,107)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 108 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=105/106 n=6 ec=49/34 lis/c=105/79 les/c/f=106/80/0 sis=107) [0] r=-1 lpr=107 pi=[79,107)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000133 1 0.000251
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 108 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=105/106 n=6 ec=49/34 lis/c=105/79 les/c/f=106/80/0 sis=107) [0] r=-1 lpr=107 pi=[79,107)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 108 pg[9.1c( v 41'581 (0'0,41'581] lb MIN local-lis/les=105/106 n=6 ec=49/34 lis/c=105/79 les/c/f=106/80/0 sis=107) [0] r=-1 lpr=107 DELETING pi=[79,107)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.055804 2 0.000305
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 108 pg[9.1c( v 41'581 (0'0,41'581] lb MIN local-lis/les=105/106 n=6 ec=49/34 lis/c=105/79 les/c/f=106/80/0 sis=107) [0] r=-1 lpr=107 pi=[79,107)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.056129 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 108 pg[9.1c( v 41'581 (0'0,41'581] lb MIN local-lis/les=105/106 n=6 ec=49/34 lis/c=105/79 les/c/f=106/80/0 sis=107) [0] r=-1 lpr=107 pi=[79,107)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.087382 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:03.343077+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 108 handle_osd_map epochs [109,109], i have 108, src has [1,109]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 109 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=108) [0] r=-1 lpr=108 pi=[67,108)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.026356 3 0.000108
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 109 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=108) [0] r=-1 lpr=108 pi=[67,108)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.026431 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 109 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=108) [0] r=-1 lpr=108 pi=[67,108)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 109 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=109) [0]/[2] r=0 lpr=109 pi=[67,109)/1 crt=41'581 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 109 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=109) [0]/[2] r=0 lpr=109 pi=[67,109)/1 crt=41'581 mlcod 0'0 remapped mbc={}] exit Reset 0.000093 1 0.000135
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 109 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=109) [0]/[2] r=0 lpr=109 pi=[67,109)/1 crt=41'581 mlcod 0'0 remapped mbc={}] enter Started
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 109 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=109) [0]/[2] r=0 lpr=109 pi=[67,109)/1 crt=41'581 mlcod 0'0 remapped mbc={}] enter Start
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 109 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=109) [0]/[2] r=0 lpr=109 pi=[67,109)/1 crt=41'581 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 109 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=109) [0]/[2] r=0 lpr=109 pi=[67,109)/1 crt=41'581 mlcod 0'0 remapped mbc={}] exit Start 0.000012 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 109 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=109) [0]/[2] r=0 lpr=109 pi=[67,109)/1 crt=41'581 mlcod 0'0 remapped mbc={}] enter Started/Primary
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 109 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=109) [0]/[2] r=0 lpr=109 pi=[67,109)/1 crt=41'581 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 109 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=109) [0]/[2] r=0 lpr=109 pi=[67,109)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 109 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=109) [0]/[2] r=0 lpr=109 pi=[67,109)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000081 1 0.000106
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 109 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=109) [0]/[2] r=0 lpr=109 pi=[67,109)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 109 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=109) [0]/[2] async=[0] r=0 lpr=109 pi=[67,109)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000039 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 109 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=109) [0]/[2] async=[0] r=0 lpr=109 pi=[67,109)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 109 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=109) [0]/[2] async=[0] r=0 lpr=109 pi=[67,109)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000009 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 109 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=109) [0]/[2] async=[0] r=0 lpr=109 pi=[67,109)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65404928 unmapped: 598016 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:04.343207+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 109 handle_osd_map epochs [109,110], i have 109, src has [1,110]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 110 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=67) [2] r=0 lpr=67 crt=41'581 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 62.534857 130 0.000364
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 110 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=67) [2] r=0 lpr=67 crt=41'581 mlcod 0'0 active mbc={}] exit Started/Primary/Active 62.544167 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 110 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=67) [2] r=0 lpr=67 crt=41'581 mlcod 0'0 active mbc={}] exit Started/Primary 63.552711 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 110 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=67) [2] r=0 lpr=67 crt=41'581 mlcod 0'0 active mbc={}] exit Started 63.552762 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 110 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=67) [2] r=0 lpr=67 crt=41'581 mlcod 0'0 active mbc={}] enter Reset
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 110 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=109) [0]/[2] async=[0] r=0 lpr=109 pi=[67,109)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.003394 4 0.000117
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 110 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=110 pruub=9.466191292s) [1] r=-1 lpr=110 pi=[67,110)/1 crt=41'581 mlcod 0'0 active pruub 163.090164185s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 110 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=110 pruub=9.466117859s) [1] r=-1 lpr=110 pi=[67,110)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 163.090164185s@ mbc={}] exit Reset 0.000119 1 0.000205
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 110 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=110 pruub=9.466117859s) [1] r=-1 lpr=110 pi=[67,110)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 163.090164185s@ mbc={}] enter Started
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 110 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=110 pruub=9.466117859s) [1] r=-1 lpr=110 pi=[67,110)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 163.090164185s@ mbc={}] enter Start
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 110 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=110 pruub=9.466117859s) [1] r=-1 lpr=110 pi=[67,110)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 163.090164185s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 110 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=110 pruub=9.466117859s) [1] r=-1 lpr=110 pi=[67,110)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 163.090164185s@ mbc={}] exit Start 0.000007 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 110 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=110 pruub=9.466117859s) [1] r=-1 lpr=110 pi=[67,110)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 163.090164185s@ mbc={}] enter Started/Stray
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 110 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=109) [0]/[2] async=[0] r=0 lpr=109 pi=[67,109)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.003681 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 110 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=109) [0]/[2] async=[0] r=0 lpr=109 pi=[67,109)/1 crt=41'581 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 110 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=109/110 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=109) [0]/[2] async=[0] r=0 lpr=109 pi=[67,109)/1 crt=41'581 mlcod 0'0 activating+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/Activating
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65404928 unmapped: 598016 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 110 handle_osd_map epochs [108,110], i have 110, src has [1,110]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 110 handle_osd_map epochs [110,110], i have 110, src has [1,110]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 110 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=109/110 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=109) [0]/[2] async=[0] r=0 lpr=109 pi=[67,109)/1 crt=41'581 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 110 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=109/110 n=6 ec=49/34 lis/c=109/67 les/c/f=110/68/0 sis=109) [0]/[2] async=[0] r=0 lpr=109 pi=[67,109)/1 crt=41'581 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/Activating 0.273988 5 0.000800
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 110 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=109/110 n=6 ec=49/34 lis/c=109/67 les/c/f=110/68/0 sis=109) [0]/[2] async=[0] r=0 lpr=109 pi=[67,109)/1 crt=41'581 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 110 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=109/110 n=6 ec=49/34 lis/c=109/67 les/c/f=110/68/0 sis=109) [0]/[2] async=[0] r=0 lpr=109 pi=[67,109)/1 crt=41'581 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000107 1 0.000077
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 110 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=109/110 n=6 ec=49/34 lis/c=109/67 les/c/f=110/68/0 sis=109) [0]/[2] async=[0] r=0 lpr=109 pi=[67,109)/1 crt=41'581 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 110 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=109/110 n=6 ec=49/34 lis/c=109/67 les/c/f=110/68/0 sis=109) [0]/[2] async=[0] r=0 lpr=109 pi=[67,109)/1 crt=41'581 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000491 1 0.000062
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 110 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=109/110 n=6 ec=49/34 lis/c=109/67 les/c/f=110/68/0 sis=109) [0]/[2] async=[0] r=0 lpr=109 pi=[67,109)/1 crt=41'581 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/Recovering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 110 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=109/110 n=6 ec=49/34 lis/c=109/67 les/c/f=110/68/0 sis=109) [0]/[2] async=[0] r=0 lpr=109 pi=[67,109)/1 crt=41'581 mlcod 41'581 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.054644 2 0.000040
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 110 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=109/110 n=6 ec=49/34 lis/c=109/67 les/c/f=110/68/0 sis=109) [0]/[2] async=[0] r=0 lpr=109 pi=[67,109)/1 crt=41'581 mlcod 41'581 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:05.343375+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fcee6000/0x0/0x4ffc00000, data 0xa48f5/0x145000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65413120 unmapped: 589824 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 110 handle_osd_map epochs [110,111], i have 110, src has [1,111]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.918292999s of 10.118635178s, submitted: 50
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 110 handle_osd_map epochs [111,111], i have 111, src has [1,111]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 111 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=110) [1] r=-1 lpr=110 pi=[67,110)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.026572 3 0.000063
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 111 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=110) [1] r=-1 lpr=110 pi=[67,110)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.026611 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 111 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=110) [1] r=-1 lpr=110 pi=[67,110)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 111 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=111) [1]/[2] r=0 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 111 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=111) [1]/[2] r=0 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 0'0 remapped mbc={}] exit Reset 0.000078 1 0.000109
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 111 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=111) [1]/[2] r=0 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 0'0 remapped mbc={}] enter Started
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 111 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=111) [1]/[2] r=0 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 0'0 remapped mbc={}] enter Start
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 111 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=111) [1]/[2] r=0 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 111 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=111) [1]/[2] r=0 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 0'0 remapped mbc={}] exit Start 0.000008 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 111 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=111) [1]/[2] r=0 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 0'0 remapped mbc={}] enter Started/Primary
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 111 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=111) [1]/[2] r=0 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 111 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=111) [1]/[2] r=0 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 111 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=109/110 n=6 ec=49/34 lis/c=109/67 les/c/f=110/68/0 sis=109) [0]/[2] async=[0] r=0 lpr=109 pi=[67,109)/1 crt=41'581 mlcod 41'581 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.698042 1 0.000125
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 111 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=109/110 n=6 ec=49/34 lis/c=109/67 les/c/f=110/68/0 sis=109) [0]/[2] async=[0] r=0 lpr=109 pi=[67,109)/1 crt=41'581 mlcod 41'581 active+remapped mbc={255={}}] exit Started/Primary/Active 1.027699 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 111 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=109/110 n=6 ec=49/34 lis/c=109/67 les/c/f=110/68/0 sis=109) [0]/[2] async=[0] r=0 lpr=109 pi=[67,109)/1 crt=41'581 mlcod 41'581 active+remapped mbc={255={}}] exit Started/Primary 2.031614 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 111 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=109/110 n=6 ec=49/34 lis/c=109/67 les/c/f=110/68/0 sis=109) [0]/[2] async=[0] r=0 lpr=109 pi=[67,109)/1 crt=41'581 mlcod 41'581 active+remapped mbc={255={}}] exit Started 2.031682 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 111 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=109/110 n=6 ec=49/34 lis/c=109/67 les/c/f=110/68/0 sis=109) [0]/[2] async=[0] r=0 lpr=109 pi=[67,109)/1 crt=41'581 mlcod 41'581 active+remapped mbc={255={}}] enter Reset
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 111 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=109/110 n=6 ec=49/34 lis/c=109/67 les/c/f=110/68/0 sis=111 pruub=15.246043205s) [0] async=[0] r=-1 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 41'581 active pruub 169.898193359s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 111 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=109/110 n=6 ec=49/34 lis/c=109/67 les/c/f=110/68/0 sis=111 pruub=15.245329857s) [0] r=-1 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 169.898193359s@ mbc={}] exit Reset 0.000856 1 0.001021
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 111 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=109/110 n=6 ec=49/34 lis/c=109/67 les/c/f=110/68/0 sis=111 pruub=15.245329857s) [0] r=-1 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 169.898193359s@ mbc={}] enter Started
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 111 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=109/110 n=6 ec=49/34 lis/c=109/67 les/c/f=110/68/0 sis=111 pruub=15.245329857s) [0] r=-1 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 169.898193359s@ mbc={}] enter Start
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 111 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=109/110 n=6 ec=49/34 lis/c=109/67 les/c/f=110/68/0 sis=111 pruub=15.245329857s) [0] r=-1 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 169.898193359s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 111 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=109/110 n=6 ec=49/34 lis/c=109/67 les/c/f=110/68/0 sis=111 pruub=15.245329857s) [0] r=-1 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 169.898193359s@ mbc={}] exit Start 0.000105 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 111 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=109/110 n=6 ec=49/34 lis/c=109/67 les/c/f=110/68/0 sis=111 pruub=15.245329857s) [0] r=-1 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 169.898193359s@ mbc={}] enter Started/Stray
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 111 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=111) [1]/[2] r=0 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.003006 2 0.000048
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 111 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=111) [1]/[2] r=0 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 111 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=111) [1]/[2] async=[1] r=0 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000034 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 111 handle_osd_map epochs [111,111], i have 111, src has [1,111]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 111 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=111) [1]/[2] async=[1] r=0 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 111 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=111) [1]/[2] async=[1] r=0 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000007 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 111 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=111) [1]/[2] async=[1] r=0 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 111 handle_osd_map epochs [111,111], i have 111, src has [1,111]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:06.343737+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 85 sent 83 num 2 unsent 2 sending 2
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:29:35.619066+0000 osd.2 (osd.2) 84 : cluster [DBG] 3.1d scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:29:35.633160+0000 osd.2 (osd.2) 85 : cluster [DBG] 3.1d scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 754074 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65462272 unmapped: 540672 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 111 handle_osd_map epochs [112,112], i have 111, src has [1,112]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 111 handle_osd_map epochs [112,112], i have 112, src has [1,112]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 112 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=111) [1]/[2] async=[1] r=0 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.998705 3 0.000102
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 112 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=111) [1]/[2] async=[1] r=0 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.001848 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 112 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=111) [1]/[2] async=[1] r=0 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 112 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=111/112 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=111) [1]/[2] async=[1] r=0 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 0'0 activating+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/Activating
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 85) v1
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:29:35.619066+0000 osd.2 (osd.2) 84 : cluster [DBG] 3.1d scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:29:35.633160+0000 osd.2 (osd.2) 85 : cluster [DBG] 3.1d scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 112 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=111/112 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=111) [1]/[2] async=[1] r=0 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 112 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=111/112 n=6 ec=49/34 lis/c=111/67 les/c/f=112/68/0 sis=111) [1]/[2] async=[1] r=0 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/Activating 0.007614 5 0.000424
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 112 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=111/112 n=6 ec=49/34 lis/c=111/67 les/c/f=112/68/0 sis=111) [1]/[2] async=[1] r=0 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 112 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=111/112 n=6 ec=49/34 lis/c=111/67 les/c/f=112/68/0 sis=111) [1]/[2] async=[1] r=0 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000125 1 0.000113
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 112 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=111/112 n=6 ec=49/34 lis/c=111/67 les/c/f=112/68/0 sis=111) [1]/[2] async=[1] r=0 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 112 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=111/112 n=6 ec=49/34 lis/c=111/67 les/c/f=112/68/0 sis=111) [1]/[2] async=[1] r=0 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000583 1 0.000070
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 112 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=111/112 n=6 ec=49/34 lis/c=111/67 les/c/f=112/68/0 sis=111) [1]/[2] async=[1] r=0 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/Recovering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 112 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=109/110 n=6 ec=49/34 lis/c=109/67 les/c/f=110/68/0 sis=111) [0] r=-1 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.021917 7 0.000380
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 112 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=109/110 n=6 ec=49/34 lis/c=109/67 les/c/f=110/68/0 sis=111) [0] r=-1 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 112 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=109/110 n=6 ec=49/34 lis/c=109/67 les/c/f=110/68/0 sis=111) [0] r=-1 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 112 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=111/112 n=6 ec=49/34 lis/c=111/67 les/c/f=112/68/0 sis=111) [1]/[2] async=[1] r=0 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 41'581 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.087342 2 0.000098
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 112 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=111/112 n=6 ec=49/34 lis/c=111/67 les/c/f=112/68/0 sis=111) [1]/[2] async=[1] r=0 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 41'581 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 112 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=109/110 n=6 ec=49/34 lis/c=109/67 les/c/f=110/68/0 sis=111) [0] r=-1 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.073577 1 0.000071
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 112 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=109/110 n=6 ec=49/34 lis/c=109/67 les/c/f=110/68/0 sis=111) [0] r=-1 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: not registered w/ OSD
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 112 pg[9.1e( v 41'581 (0'0,41'581] lb MIN local-lis/les=109/110 n=6 ec=49/34 lis/c=109/67 les/c/f=110/68/0 sis=111) [0] r=-1 lpr=111 DELETING pi=[67,111)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.052015 2 0.000338
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 112 pg[9.1e( v 41'581 (0'0,41'581] lb MIN local-lis/les=109/110 n=6 ec=49/34 lis/c=109/67 les/c/f=110/68/0 sis=111) [0] r=-1 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.125733 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 112 pg[9.1e( v 41'581 (0'0,41'581] lb MIN local-lis/les=109/110 n=6 ec=49/34 lis/c=109/67 les/c/f=110/68/0 sis=111) [0] r=-1 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.147869 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: not registered w/ OSD
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:07.344040+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 87 sent 85 num 2 unsent 2 sending 2
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:29:36.591088+0000 osd.2 (osd.2) 86 : cluster [DBG] 3.1e scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:29:36.605301+0000 osd.2 (osd.2) 87 : cluster [DBG] 3.1e scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 8.15 deep-scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 8.15 deep-scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65519616 unmapped: 483328 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 112 handle_osd_map epochs [112,113], i have 112, src has [1,113]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 112 handle_osd_map epochs [113,113], i have 113, src has [1,113]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 113 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=111/112 n=6 ec=49/34 lis/c=111/67 les/c/f=112/68/0 sis=111) [1]/[2] async=[1] r=0 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 41'581 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.937553 1 0.000114
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 113 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=111/112 n=6 ec=49/34 lis/c=111/67 les/c/f=112/68/0 sis=111) [1]/[2] async=[1] r=0 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 41'581 active+remapped mbc={255={}}] exit Started/Primary/Active 1.033595 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 113 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=111/112 n=6 ec=49/34 lis/c=111/67 les/c/f=112/68/0 sis=111) [1]/[2] async=[1] r=0 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 41'581 active+remapped mbc={255={}}] exit Started/Primary 2.035501 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 113 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=111/112 n=6 ec=49/34 lis/c=111/67 les/c/f=112/68/0 sis=111) [1]/[2] async=[1] r=0 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 41'581 active+remapped mbc={255={}}] exit Started 2.035545 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 113 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=111/112 n=6 ec=49/34 lis/c=111/67 les/c/f=112/68/0 sis=111) [1]/[2] async=[1] r=0 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 41'581 active+remapped mbc={255={}}] enter Reset
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 113 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=111/112 n=6 ec=49/34 lis/c=111/67 les/c/f=112/68/0 sis=113 pruub=14.973883629s) [1] async=[1] r=-1 lpr=113 pi=[67,113)/1 crt=41'581 mlcod 41'581 active pruub 171.660278320s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 113 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=111/112 n=6 ec=49/34 lis/c=111/67 les/c/f=112/68/0 sis=113 pruub=14.973669052s) [1] r=-1 lpr=113 pi=[67,113)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 171.660278320s@ mbc={}] exit Reset 0.000297 1 0.000426
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 113 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=111/112 n=6 ec=49/34 lis/c=111/67 les/c/f=112/68/0 sis=113 pruub=14.973669052s) [1] r=-1 lpr=113 pi=[67,113)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 171.660278320s@ mbc={}] enter Started
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 113 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=111/112 n=6 ec=49/34 lis/c=111/67 les/c/f=112/68/0 sis=113 pruub=14.973669052s) [1] r=-1 lpr=113 pi=[67,113)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 171.660278320s@ mbc={}] enter Start
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 113 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=111/112 n=6 ec=49/34 lis/c=111/67 les/c/f=112/68/0 sis=113 pruub=14.973669052s) [1] r=-1 lpr=113 pi=[67,113)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 171.660278320s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 87) v1
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:29:36.591088+0000 osd.2 (osd.2) 86 : cluster [DBG] 3.1e scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:29:36.605301+0000 osd.2 (osd.2) 87 : cluster [DBG] 3.1e scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 113 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=111/112 n=6 ec=49/34 lis/c=111/67 les/c/f=112/68/0 sis=113 pruub=14.973669052s) [1] r=-1 lpr=113 pi=[67,113)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 171.660278320s@ mbc={}] exit Start 0.000104 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 113 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=111/112 n=6 ec=49/34 lis/c=111/67 les/c/f=112/68/0 sis=113 pruub=14.973669052s) [1] r=-1 lpr=113 pi=[67,113)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 171.660278320s@ mbc={}] enter Started/Stray
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 113 handle_osd_map epochs [113,113], i have 113, src has [1,113]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:08.344585+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 89 sent 87 num 2 unsent 2 sending 2
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:29:37.577050+0000 osd.2 (osd.2) 88 : cluster [DBG] 8.15 deep-scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:29:37.591282+0000 osd.2 (osd.2) 89 : cluster [DBG] 8.15 deep-scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65527808 unmapped: 475136 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 89) v1
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:29:37.577050+0000 osd.2 (osd.2) 88 : cluster [DBG] 8.15 deep-scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:29:37.591282+0000 osd.2 (osd.2) 89 : cluster [DBG] 8.15 deep-scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:09.345210+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _renew_subs
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 113 handle_osd_map epochs [114,114], i have 113, src has [1,114]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 114 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=111/112 n=6 ec=49/34 lis/c=111/67 les/c/f=112/68/0 sis=113) [1] r=-1 lpr=113 pi=[67,113)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.559062 6 0.000318
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 114 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=111/112 n=6 ec=49/34 lis/c=111/67 les/c/f=112/68/0 sis=113) [1] r=-1 lpr=113 pi=[67,113)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 114 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=111/112 n=6 ec=49/34 lis/c=111/67 les/c/f=112/68/0 sis=113) [1] r=-1 lpr=113 pi=[67,113)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 114 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=111/112 n=6 ec=49/34 lis/c=111/67 les/c/f=112/68/0 sis=113) [1] r=-1 lpr=113 pi=[67,113)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000777 1 0.000077
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 114 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=111/112 n=6 ec=49/34 lis/c=111/67 les/c/f=112/68/0 sis=113) [1] r=-1 lpr=113 pi=[67,113)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 114 pg[9.1f( v 41'581 (0'0,41'581] lb MIN local-lis/les=111/112 n=6 ec=49/34 lis/c=111/67 les/c/f=112/68/0 sis=113) [1] r=-1 lpr=113 DELETING pi=[67,113)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.050146 3 0.000203
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 114 pg[9.1f( v 41'581 (0'0,41'581] lb MIN local-lis/les=111/112 n=6 ec=49/34 lis/c=111/67 les/c/f=112/68/0 sis=113) [1] r=-1 lpr=113 pi=[67,113)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.051003 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 114 pg[9.1f( v 41'581 (0'0,41'581] lb MIN local-lis/les=111/112 n=6 ec=49/34 lis/c=111/67 les/c/f=112/68/0 sis=113) [1] r=-1 lpr=113 pi=[67,113)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.610282 0 0.000000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcedb000/0x0/0x4ffc00000, data 0xab3b1/0x151000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65568768 unmapped: 434176 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:10.345698+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65609728 unmapped: 393216 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:11.346122+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 11.15 scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 11.15 scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 742780 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65601536 unmapped: 401408 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:12.346365+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 91 sent 89 num 2 unsent 2 sending 2
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:29:41.474610+0000 osd.2 (osd.2) 90 : cluster [DBG] 11.15 scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:29:41.488706+0000 osd.2 (osd.2) 91 : cluster [DBG] 11.15 scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65609728 unmapped: 393216 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 91) v1
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:29:41.474610+0000 osd.2 (osd.2) 90 : cluster [DBG] 11.15 scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:29:41.488706+0000 osd.2 (osd.2) 91 : cluster [DBG] 11.15 scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcedb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:13.346688+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65544192 unmapped: 458752 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcedb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:14.347100+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 8.11 scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 8.11 scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65552384 unmapped: 450560 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:15.347447+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 93 sent 91 num 2 unsent 2 sending 2
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:29:44.508751+0000 osd.2 (osd.2) 92 : cluster [DBG] 8.11 scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:29:44.526476+0000 osd.2 (osd.2) 93 : cluster [DBG] 8.11 scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65560576 unmapped: 442368 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 93) v1
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:29:44.508751+0000 osd.2 (osd.2) 92 : cluster [DBG] 8.11 scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:29:44.526476+0000 osd.2 (osd.2) 93 : cluster [DBG] 8.11 scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:16.347692+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 95 sent 93 num 2 unsent 2 sending 2
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:29:45.513163+0000 osd.2 (osd.2) 94 : cluster [DBG] 11.11 scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:29:45.527365+0000 osd.2 (osd.2) 95 : cluster [DBG] 11.11 scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 8.12 scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.405490875s of 10.618983269s, submitted: 38
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 8.12 scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746225 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65576960 unmapped: 425984 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 95) v1
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:29:45.513163+0000 osd.2 (osd.2) 94 : cluster [DBG] 11.11 scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:29:45.527365+0000 osd.2 (osd.2) 95 : cluster [DBG] 11.11 scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:17.348004+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 97 sent 95 num 2 unsent 2 sending 2
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:29:46.552664+0000 osd.2 (osd.2) 96 : cluster [DBG] 8.12 scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:29:46.566831+0000 osd.2 (osd.2) 97 : cluster [DBG] 8.12 scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65576960 unmapped: 425984 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 97) v1
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:29:46.552664+0000 osd.2 (osd.2) 96 : cluster [DBG] 8.12 scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:29:46.566831+0000 osd.2 (osd.2) 97 : cluster [DBG] 8.12 scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:18.348266+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcedb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65585152 unmapped: 417792 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:19.348522+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 99 sent 97 num 2 unsent 2 sending 2
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:29:48.505433+0000 osd.2 (osd.2) 98 : cluster [DBG] 3.18 scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:29:48.518766+0000 osd.2 (osd.2) 99 : cluster [DBG] 3.18 scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcedb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65593344 unmapped: 409600 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 99) v1
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:29:48.505433+0000 osd.2 (osd.2) 98 : cluster [DBG] 3.18 scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:29:48.518766+0000 osd.2 (osd.2) 99 : cluster [DBG] 3.18 scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:20.348774+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 101 sent 99 num 2 unsent 2 sending 2
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:29:49.465899+0000 osd.2 (osd.2) 100 : cluster [DBG] 7.1c scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:29:49.480226+0000 osd.2 (osd.2) 101 : cluster [DBG] 7.1c scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65593344 unmapped: 409600 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 101) v1
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:29:49.465899+0000 osd.2 (osd.2) 100 : cluster [DBG] 7.1c scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:29:49.480226+0000 osd.2 (osd.2) 101 : cluster [DBG] 7.1c scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:21.349029+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 748521 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65593344 unmapped: 409600 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:22.349197+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65601536 unmapped: 401408 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:23.349534+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 103 sent 101 num 2 unsent 2 sending 2
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:29:52.476720+0000 osd.2 (osd.2) 102 : cluster [DBG] 7.2 scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:29:52.490850+0000 osd.2 (osd.2) 103 : cluster [DBG] 7.2 scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65601536 unmapped: 401408 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 103) v1
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:29:52.476720+0000 osd.2 (osd.2) 102 : cluster [DBG] 7.2 scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:29:52.490850+0000 osd.2 (osd.2) 103 : cluster [DBG] 7.2 scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:24.349730+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65609728 unmapped: 393216 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcedb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:25.349904+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65609728 unmapped: 393216 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:26.350066+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 105 sent 103 num 2 unsent 2 sending 2
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:29:55.498397+0000 osd.2 (osd.2) 104 : cluster [DBG] 7.1 scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:29:55.512534+0000 osd.2 (osd.2) 105 : cluster [DBG] 7.1 scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 750815 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65617920 unmapped: 385024 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 105) v1
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:29:55.498397+0000 osd.2 (osd.2) 104 : cluster [DBG] 7.1 scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:29:55.512534+0000 osd.2 (osd.2) 105 : cluster [DBG] 7.1 scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:27.350245+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 11.b scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.958892822s of 10.998286247s, submitted: 10
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 11.b scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65650688 unmapped: 352256 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:28.350399+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 107 sent 105 num 2 unsent 2 sending 2
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:29:57.551158+0000 osd.2 (osd.2) 106 : cluster [DBG] 11.b scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:29:57.565258+0000 osd.2 (osd.2) 107 : cluster [DBG] 11.b scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65650688 unmapped: 352256 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:29.350663+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  log_queue is 4 last_log 109 sent 107 num 4 unsent 2 sending 2
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:29:58.543885+0000 osd.2 (osd.2) 108 : cluster [DBG] 7.5 scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:29:58.557977+0000 osd.2 (osd.2) 109 : cluster [DBG] 7.5 scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 107) v1
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:29:57.551158+0000 osd.2 (osd.2) 106 : cluster [DBG] 11.b scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:29:57.565258+0000 osd.2 (osd.2) 107 : cluster [DBG] 11.b scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 109) v1
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:29:58.543885+0000 osd.2 (osd.2) 108 : cluster [DBG] 7.5 scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:29:58.557977+0000 osd.2 (osd.2) 109 : cluster [DBG] 7.5 scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65658880 unmapped: 344064 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:30.350909+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65658880 unmapped: 344064 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:31.351069+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 753110 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65667072 unmapped: 335872 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:32.351317+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65675264 unmapped: 327680 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:33.351562+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65675264 unmapped: 327680 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:34.351706+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 11.d scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 11.d scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65683456 unmapped: 319488 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:35.351962+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 111 sent 109 num 2 unsent 2 sending 2
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:30:04.481501+0000 osd.2 (osd.2) 110 : cluster [DBG] 11.d scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:30:04.495622+0000 osd.2 (osd.2) 111 : cluster [DBG] 11.d scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 111) v1
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:30:04.481501+0000 osd.2 (osd.2) 110 : cluster [DBG] 11.d scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:30:04.495622+0000 osd.2 (osd.2) 111 : cluster [DBG] 11.d scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65683456 unmapped: 319488 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:36.352160+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 754258 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65691648 unmapped: 311296 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:37.352378+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65699840 unmapped: 303104 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:38.352605+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65708032 unmapped: 294912 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:39.352764+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65708032 unmapped: 294912 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:40.352853+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.896081924s of 12.917864799s, submitted: 6
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65716224 unmapped: 286720 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:41.352971+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 113 sent 111 num 2 unsent 2 sending 2
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:30:10.469120+0000 osd.2 (osd.2) 112 : cluster [DBG] 11.9 scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:30:10.486821+0000 osd.2 (osd.2) 113 : cluster [DBG] 11.9 scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 7.e scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 7.e scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 113) v1
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:30:10.469120+0000 osd.2 (osd.2) 112 : cluster [DBG] 11.9 scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:30:10.486821+0000 osd.2 (osd.2) 113 : cluster [DBG] 11.9 scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 756553 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65716224 unmapped: 286720 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:42.353171+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 115 sent 113 num 2 unsent 2 sending 2
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:30:11.426990+0000 osd.2 (osd.2) 114 : cluster [DBG] 7.e scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:30:11.441118+0000 osd.2 (osd.2) 115 : cluster [DBG] 7.e scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 7.c scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 115) v1
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:30:11.426990+0000 osd.2 (osd.2) 114 : cluster [DBG] 7.e scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:30:11.441118+0000 osd.2 (osd.2) 115 : cluster [DBG] 7.e scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 7.c scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65724416 unmapped: 278528 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:43.353383+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 117 sent 115 num 2 unsent 2 sending 2
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:30:12.456565+0000 osd.2 (osd.2) 116 : cluster [DBG] 7.c scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:30:12.470558+0000 osd.2 (osd.2) 117 : cluster [DBG] 7.c scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 117) v1
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:30:12.456565+0000 osd.2 (osd.2) 116 : cluster [DBG] 7.c scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:30:12.470558+0000 osd.2 (osd.2) 117 : cluster [DBG] 7.c scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65732608 unmapped: 270336 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:44.353535+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65732608 unmapped: 270336 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:45.353825+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65732608 unmapped: 270336 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:46.353947+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 757700 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65732608 unmapped: 270336 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:47.354076+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65757184 unmapped: 245760 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:48.354213+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65757184 unmapped: 245760 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:49.354386+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 11.3 scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 11.3 scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65773568 unmapped: 229376 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:50.354529+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 119 sent 117 num 2 unsent 2 sending 2
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:30:19.469498+0000 osd.2 (osd.2) 118 : cluster [DBG] 11.3 scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:30:19.483592+0000 osd.2 (osd.2) 119 : cluster [DBG] 11.3 scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.956985474s of 10.002155304s, submitted: 10
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 119) v1
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:30:19.469498+0000 osd.2 (osd.2) 118 : cluster [DBG] 11.3 scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:30:19.483592+0000 osd.2 (osd.2) 119 : cluster [DBG] 11.3 scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65773568 unmapped: 229376 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:51.355015+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 121 sent 119 num 2 unsent 2 sending 2
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:30:20.456889+0000 osd.2 (osd.2) 120 : cluster [DBG] 11.2 scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:30:20.470978+0000 osd.2 (osd.2) 121 : cluster [DBG] 11.2 scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 11.8 scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 11.8 scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 761144 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 121) v1
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:30:20.456889+0000 osd.2 (osd.2) 120 : cluster [DBG] 11.2 scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:30:20.470978+0000 osd.2 (osd.2) 121 : cluster [DBG] 11.2 scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65781760 unmapped: 221184 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:52.355218+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 123 sent 121 num 2 unsent 2 sending 2
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:30:21.504797+0000 osd.2 (osd.2) 122 : cluster [DBG] 11.8 scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:30:21.518897+0000 osd.2 (osd.2) 123 : cluster [DBG] 11.8 scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 8.2 scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 8.2 scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 123) v1
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:30:21.504797+0000 osd.2 (osd.2) 122 : cluster [DBG] 11.8 scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:30:21.518897+0000 osd.2 (osd.2) 123 : cluster [DBG] 11.8 scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65781760 unmapped: 221184 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:53.355442+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 125 sent 123 num 2 unsent 2 sending 2
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:30:22.533667+0000 osd.2 (osd.2) 124 : cluster [DBG] 8.2 scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:30:22.547784+0000 osd.2 (osd.2) 125 : cluster [DBG] 8.2 scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 125) v1
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:30:22.533667+0000 osd.2 (osd.2) 124 : cluster [DBG] 8.2 scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:30:22.547784+0000 osd.2 (osd.2) 125 : cluster [DBG] 8.2 scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65789952 unmapped: 212992 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:54.356141+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65789952 unmapped: 212992 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:55.356280+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65789952 unmapped: 212992 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:56.356432+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 763438 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66838528 unmapped: 212992 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:57.356559+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 127 sent 125 num 2 unsent 2 sending 2
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:30:26.608859+0000 osd.2 (osd.2) 126 : cluster [DBG] 7.8 scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:30:26.622950+0000 osd.2 (osd.2) 127 : cluster [DBG] 7.8 scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 127) v1
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:30:26.608859+0000 osd.2 (osd.2) 126 : cluster [DBG] 7.8 scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:30:26.622950+0000 osd.2 (osd.2) 127 : cluster [DBG] 7.8 scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66846720 unmapped: 204800 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:58.356745+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 8.d scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 8.d scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65798144 unmapped: 1253376 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:59.356864+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 129 sent 127 num 2 unsent 2 sending 2
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:30:28.669354+0000 osd.2 (osd.2) 128 : cluster [DBG] 8.d scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:30:28.683345+0000 osd.2 (osd.2) 129 : cluster [DBG] 8.d scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 129) v1
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:30:28.669354+0000 osd.2 (osd.2) 128 : cluster [DBG] 8.d scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:30:28.683345+0000 osd.2 (osd.2) 129 : cluster [DBG] 8.d scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65806336 unmapped: 1245184 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:00.357046+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65806336 unmapped: 1245184 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:01.357177+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 3.e scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.152532578s of 11.190190315s, submitted: 8
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 3.e scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765732 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65814528 unmapped: 1236992 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:02.357317+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 131 sent 129 num 2 unsent 2 sending 2
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:30:31.661526+0000 osd.2 (osd.2) 130 : cluster [DBG] 3.e scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:30:31.675674+0000 osd.2 (osd.2) 131 : cluster [DBG] 3.e scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 131) v1
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:30:31.661526+0000 osd.2 (osd.2) 130 : cluster [DBG] 3.e scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:30:31.675674+0000 osd.2 (osd.2) 131 : cluster [DBG] 3.e scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65814528 unmapped: 1236992 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:03.357569+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65814528 unmapped: 1236992 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:04.357690+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:05.357859+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65822720 unmapped: 1228800 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:06.357982+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65822720 unmapped: 1228800 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 7.a deep-scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 7.a deep-scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766879 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:07.358151+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 133 sent 131 num 2 unsent 2 sending 2
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:30:36.669792+0000 osd.2 (osd.2) 132 : cluster [DBG] 7.a deep-scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:30:36.683896+0000 osd.2 (osd.2) 133 : cluster [DBG] 7.a deep-scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65847296 unmapped: 1204224 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 133) v1
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:30:36.669792+0000 osd.2 (osd.2) 132 : cluster [DBG] 7.a deep-scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:30:36.683896+0000 osd.2 (osd.2) 133 : cluster [DBG] 7.a deep-scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:08.358379+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65863680 unmapped: 1187840 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:09.358575+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65880064 unmapped: 1171456 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:10.358737+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65880064 unmapped: 1171456 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 8.4 scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 8.4 scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:11.358965+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 135 sent 133 num 2 unsent 2 sending 2
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:30:40.749254+0000 osd.2 (osd.2) 134 : cluster [DBG] 8.4 scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:30:40.763599+0000 osd.2 (osd.2) 135 : cluster [DBG] 8.4 scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65880064 unmapped: 1171456 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768026 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:12.359231+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 135) v1
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:30:40.749254+0000 osd.2 (osd.2) 134 : cluster [DBG] 8.4 scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:30:40.763599+0000 osd.2 (osd.2) 135 : cluster [DBG] 8.4 scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65888256 unmapped: 1163264 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:13.359406+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65896448 unmapped: 1155072 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:14.359560+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65896448 unmapped: 1155072 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:15.359750+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65904640 unmapped: 1146880 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.002936363s of 14.024413109s, submitted: 6
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:16.360118+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 137 sent 135 num 2 unsent 2 sending 2
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:30:45.685970+0000 osd.2 (osd.2) 136 : cluster [DBG] 8.1b scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:30:45.700065+0000 osd.2 (osd.2) 137 : cluster [DBG] 8.1b scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65904640 unmapped: 1146880 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 137) v1
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:30:45.685970+0000 osd.2 (osd.2) 136 : cluster [DBG] 8.1b scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:30:45.700065+0000 osd.2 (osd.2) 137 : cluster [DBG] 8.1b scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 769174 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:17.360375+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65912832 unmapped: 1138688 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:18.360550+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65912832 unmapped: 1138688 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:19.360683+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65912832 unmapped: 1138688 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:20.360782+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65921024 unmapped: 1130496 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:21.360943+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65921024 unmapped: 1130496 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 11.1a scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 11.1a scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 770323 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:22.361056+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 139 sent 137 num 2 unsent 2 sending 2
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:30:51.627977+0000 osd.2 (osd.2) 138 : cluster [DBG] 11.1a scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:30:51.642068+0000 osd.2 (osd.2) 139 : cluster [DBG] 11.1a scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65937408 unmapped: 1114112 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 139) v1
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:30:51.627977+0000 osd.2 (osd.2) 138 : cluster [DBG] 11.1a scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:30:51.642068+0000 osd.2 (osd.2) 139 : cluster [DBG] 11.1a scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:23.361229+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65937408 unmapped: 1114112 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:24.361438+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65937408 unmapped: 1114112 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:25.361613+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 141 sent 139 num 2 unsent 2 sending 2
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:30:54.568704+0000 osd.2 (osd.2) 140 : cluster [DBG] 3.11 scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:30:54.582800+0000 osd.2 (osd.2) 141 : cluster [DBG] 3.11 scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65945600 unmapped: 1105920 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 141) v1
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:30:54.568704+0000 osd.2 (osd.2) 140 : cluster [DBG] 3.11 scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:30:54.582800+0000 osd.2 (osd.2) 141 : cluster [DBG] 3.11 scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 11.1c scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 11.1c scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:26.361811+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 143 sent 141 num 2 unsent 2 sending 2
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:30:55.596724+0000 osd.2 (osd.2) 142 : cluster [DBG] 11.1c scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:30:55.610844+0000 osd.2 (osd.2) 143 : cluster [DBG] 11.1c scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65961984 unmapped: 1089536 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 143) v1
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:30:55.596724+0000 osd.2 (osd.2) 142 : cluster [DBG] 11.1c scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:30:55.610844+0000 osd.2 (osd.2) 143 : cluster [DBG] 11.1c scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 772620 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:27.362090+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65970176 unmapped: 1081344 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:28.362234+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65986560 unmapped: 1064960 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 11.1b scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.886725426s of 12.919322014s, submitted: 8
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 11.1b scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:29.362398+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 145 sent 143 num 2 unsent 2 sending 2
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:30:58.605284+0000 osd.2 (osd.2) 144 : cluster [DBG] 11.1b scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:30:58.619378+0000 osd.2 (osd.2) 145 : cluster [DBG] 11.1b scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65986560 unmapped: 1064960 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 145) v1
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:30:58.605284+0000 osd.2 (osd.2) 144 : cluster [DBG] 11.1b scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:30:58.619378+0000 osd.2 (osd.2) 145 : cluster [DBG] 11.1b scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:30.362561+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65994752 unmapped: 1056768 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 11.18 deep-scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 11.18 deep-scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:31.362766+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 147 sent 145 num 2 unsent 2 sending 2
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:31:00.531863+0000 osd.2 (osd.2) 146 : cluster [DBG] 11.18 deep-scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:31:00.546030+0000 osd.2 (osd.2) 147 : cluster [DBG] 11.18 deep-scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65994752 unmapped: 1056768 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 147) v1
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:31:00.531863+0000 osd.2 (osd.2) 146 : cluster [DBG] 11.18 deep-scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:31:00.546030+0000 osd.2 (osd.2) 147 : cluster [DBG] 11.18 deep-scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 776067 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:32.362914+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 149 sent 147 num 2 unsent 2 sending 2
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:31:01.482562+0000 osd.2 (osd.2) 148 : cluster [DBG] 11.1e scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:31:01.496678+0000 osd.2 (osd.2) 149 : cluster [DBG] 11.1e scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66019328 unmapped: 1032192 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 149) v1
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:31:01.482562+0000 osd.2 (osd.2) 148 : cluster [DBG] 11.1e scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:31:01.496678+0000 osd.2 (osd.2) 149 : cluster [DBG] 11.1e scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:33.363072+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66019328 unmapped: 1032192 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:34.363194+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 1024000 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:35.363411+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66035712 unmapped: 1015808 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:36.363571+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66035712 unmapped: 1015808 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 776067 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:37.363746+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66043904 unmapped: 1007616 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:38.363912+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66043904 unmapped: 1007616 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:39.364072+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66052096 unmapped: 999424 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:40.364225+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66052096 unmapped: 999424 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.050096512s of 12.069946289s, submitted: 6
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:41.364387+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 151 sent 149 num 2 unsent 2 sending 2
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:31:10.675349+0000 osd.2 (osd.2) 150 : cluster [DBG] 7.11 scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:31:10.689393+0000 osd.2 (osd.2) 151 : cluster [DBG] 7.11 scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66052096 unmapped: 999424 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 151) v1
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:31:10.675349+0000 osd.2 (osd.2) 150 : cluster [DBG] 7.11 scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:31:10.689393+0000 osd.2 (osd.2) 151 : cluster [DBG] 7.11 scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 777215 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:42.364595+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66060288 unmapped: 991232 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:43.364731+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66060288 unmapped: 991232 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 3.16 deep-scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 3.16 deep-scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:44.365261+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 153 sent 151 num 2 unsent 2 sending 2
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:31:13.702429+0000 osd.2 (osd.2) 152 : cluster [DBG] 3.16 deep-scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:31:13.716500+0000 osd.2 (osd.2) 153 : cluster [DBG] 3.16 deep-scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66068480 unmapped: 983040 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 153) v1
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:31:13.702429+0000 osd.2 (osd.2) 152 : cluster [DBG] 3.16 deep-scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:31:13.716500+0000 osd.2 (osd.2) 153 : cluster [DBG] 3.16 deep-scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:45.366053+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66076672 unmapped: 974848 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 8.1c scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 8.1c scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:46.366236+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 155 sent 153 num 2 unsent 2 sending 2
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:31:15.766211+0000 osd.2 (osd.2) 154 : cluster [DBG] 8.1c scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:31:15.780355+0000 osd.2 (osd.2) 155 : cluster [DBG] 8.1c scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66076672 unmapped: 974848 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 155) v1
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:31:15.766211+0000 osd.2 (osd.2) 154 : cluster [DBG] 8.1c scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:31:15.780355+0000 osd.2 (osd.2) 155 : cluster [DBG] 8.1c scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 779511 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:47.366497+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66084864 unmapped: 966656 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:48.366778+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 157 sent 155 num 2 unsent 2 sending 2
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:31:17.716397+0000 osd.2 (osd.2) 156 : cluster [DBG] 11.12 scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:31:17.730507+0000 osd.2 (osd.2) 157 : cluster [DBG] 11.12 scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66109440 unmapped: 942080 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 157) v1
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:31:17.716397+0000 osd.2 (osd.2) 156 : cluster [DBG] 11.12 scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:31:17.730507+0000 osd.2 (osd.2) 157 : cluster [DBG] 11.12 scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:49.367077+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66117632 unmapped: 933888 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:50.367410+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66125824 unmapped: 925696 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:51.367595+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66125824 unmapped: 925696 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 780660 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:52.367755+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66134016 unmapped: 917504 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:53.367897+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66134016 unmapped: 917504 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.865617752s of 12.977084160s, submitted: 8
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:54.368058+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 159 sent 157 num 2 unsent 2 sending 2
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:31:23.652423+0000 osd.2 (osd.2) 158 : cluster [DBG] 7.15 scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:31:23.666539+0000 osd.2 (osd.2) 159 : cluster [DBG] 7.15 scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66134016 unmapped: 917504 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 159) v1
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:31:23.652423+0000 osd.2 (osd.2) 158 : cluster [DBG] 7.15 scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:31:23.666539+0000 osd.2 (osd.2) 159 : cluster [DBG] 7.15 scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:55.368251+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66142208 unmapped: 909312 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 11.1f scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 11.1f scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:56.368422+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 161 sent 159 num 2 unsent 2 sending 2
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:31:25.696892+0000 osd.2 (osd.2) 160 : cluster [DBG] 11.1f scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:31:25.710973+0000 osd.2 (osd.2) 161 : cluster [DBG] 11.1f scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66142208 unmapped: 909312 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 782957 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 161) v1
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:31:25.696892+0000 osd.2 (osd.2) 160 : cluster [DBG] 11.1f scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:31:25.710973+0000 osd.2 (osd.2) 161 : cluster [DBG] 11.1f scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:57.368805+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66150400 unmapped: 901120 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 6.8 deep-scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 6.8 deep-scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:58.369050+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 163 sent 161 num 2 unsent 2 sending 2
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:31:27.668736+0000 osd.2 (osd.2) 162 : cluster [DBG] 6.8 deep-scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:31:27.682807+0000 osd.2 (osd.2) 163 : cluster [DBG] 6.8 deep-scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66158592 unmapped: 892928 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 163) v1
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:31:27.668736+0000 osd.2 (osd.2) 162 : cluster [DBG] 6.8 deep-scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:31:27.682807+0000 osd.2 (osd.2) 163 : cluster [DBG] 6.8 deep-scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:59.369240+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66158592 unmapped: 892928 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:00.369444+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66166784 unmapped: 884736 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:01.369588+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66166784 unmapped: 884736 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 9.e deep-scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 9.e deep-scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 785251 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:02.369740+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 165 sent 163 num 2 unsent 2 sending 2
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:31:31.693506+0000 osd.2 (osd.2) 164 : cluster [DBG] 9.e deep-scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:31:31.728965+0000 osd.2 (osd.2) 165 : cluster [DBG] 9.e deep-scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66191360 unmapped: 860160 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 165) v1
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:31:31.693506+0000 osd.2 (osd.2) 164 : cluster [DBG] 9.e deep-scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:31:31.728965+0000 osd.2 (osd.2) 165 : cluster [DBG] 9.e deep-scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:03.369926+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66191360 unmapped: 860160 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.987515450s of 10.023759842s, submitted: 8
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:04.370036+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 167 sent 165 num 2 unsent 2 sending 2
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:31:33.676091+0000 osd.2 (osd.2) 166 : cluster [DBG] 9.6 scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:31:33.711451+0000 osd.2 (osd.2) 167 : cluster [DBG] 9.6 scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 167) v1
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:31:33.676091+0000 osd.2 (osd.2) 166 : cluster [DBG] 9.6 scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:31:33.711451+0000 osd.2 (osd.2) 167 : cluster [DBG] 9.6 scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66199552 unmapped: 851968 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:05.370262+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66199552 unmapped: 851968 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:06.370404+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66199552 unmapped: 851968 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 786398 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:07.370563+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66207744 unmapped: 843776 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 9.17 scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 9.17 scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:08.370708+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 169 sent 167 num 2 unsent 2 sending 2
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:31:37.689925+0000 osd.2 (osd.2) 168 : cluster [DBG] 9.17 scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:31:37.718182+0000 osd.2 (osd.2) 169 : cluster [DBG] 9.17 scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 169) v1
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:31:37.689925+0000 osd.2 (osd.2) 168 : cluster [DBG] 9.17 scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:31:37.718182+0000 osd.2 (osd.2) 169 : cluster [DBG] 9.17 scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66224128 unmapped: 827392 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:09.370922+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66224128 unmapped: 827392 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 9.f scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 9.f scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:10.371256+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 171 sent 169 num 2 unsent 2 sending 2
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:31:39.739060+0000 osd.2 (osd.2) 170 : cluster [DBG] 9.f scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:31:39.777826+0000 osd.2 (osd.2) 171 : cluster [DBG] 9.f scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 171) v1
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:31:39.739060+0000 osd.2 (osd.2) 170 : cluster [DBG] 9.f scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:31:39.777826+0000 osd.2 (osd.2) 171 : cluster [DBG] 9.f scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66232320 unmapped: 819200 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:11.371445+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66232320 unmapped: 819200 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 9.7 deep-scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 9.7 deep-scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 789840 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:12.371566+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 173 sent 171 num 2 unsent 2 sending 2
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:31:41.773942+0000 osd.2 (osd.2) 172 : cluster [DBG] 9.7 deep-scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:31:41.809549+0000 osd.2 (osd.2) 173 : cluster [DBG] 9.7 deep-scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 173) v1
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:31:41.773942+0000 osd.2 (osd.2) 172 : cluster [DBG] 9.7 deep-scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:31:41.809549+0000 osd.2 (osd.2) 173 : cluster [DBG] 9.7 deep-scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66240512 unmapped: 811008 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:13.371824+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66240512 unmapped: 811008 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:14.372039+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66248704 unmapped: 802816 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 9.18 scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.110040665s of 11.147427559s, submitted: 8
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 9.18 scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:15.372214+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 175 sent 173 num 2 unsent 2 sending 2
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:31:44.823587+0000 osd.2 (osd.2) 174 : cluster [DBG] 9.18 scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:31:44.855387+0000 osd.2 (osd.2) 175 : cluster [DBG] 9.18 scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66265088 unmapped: 786432 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 9.8 scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 9.8 scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 175) v1
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:31:44.823587+0000 osd.2 (osd.2) 174 : cluster [DBG] 9.18 scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:31:44.855387+0000 osd.2 (osd.2) 175 : cluster [DBG] 9.18 scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:16.372373+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 177 sent 175 num 2 unsent 2 sending 2
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:31:45.865396+0000 osd.2 (osd.2) 176 : cluster [DBG] 9.8 scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:31:45.911367+0000 osd.2 (osd.2) 177 : cluster [DBG] 9.8 scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66273280 unmapped: 778240 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 177) v1
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:31:45.865396+0000 osd.2 (osd.2) 176 : cluster [DBG] 9.8 scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:31:45.911367+0000 osd.2 (osd.2) 177 : cluster [DBG] 9.8 scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 792135 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:17.372517+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66273280 unmapped: 778240 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 9.c scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 9.c scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:18.372673+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 179 sent 177 num 2 unsent 2 sending 2
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:31:47.918770+0000 osd.2 (osd.2) 178 : cluster [DBG] 9.c scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:31:47.950555+0000 osd.2 (osd.2) 179 : cluster [DBG] 9.c scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66281472 unmapped: 770048 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 179) v1
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:31:47.918770+0000 osd.2 (osd.2) 178 : cluster [DBG] 9.c scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:31:47.950555+0000 osd.2 (osd.2) 179 : cluster [DBG] 9.c scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:19.373005+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66289664 unmapped: 761856 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 6.f scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 6.f scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:20.373376+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 181 sent 179 num 2 unsent 2 sending 2
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:31:49.946942+0000 osd.2 (osd.2) 180 : cluster [DBG] 6.f scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:31:49.971567+0000 osd.2 (osd.2) 181 : cluster [DBG] 6.f scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66297856 unmapped: 753664 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 181) v1
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:31:49.946942+0000 osd.2 (osd.2) 180 : cluster [DBG] 6.f scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:31:49.971567+0000 osd.2 (osd.2) 181 : cluster [DBG] 6.f scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:21.373512+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66306048 unmapped: 745472 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 794429 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:22.373695+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66306048 unmapped: 745472 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:23.373911+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66314240 unmapped: 737280 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:24.374024+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66314240 unmapped: 737280 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:25.374202+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66314240 unmapped: 737280 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 9.13 scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.128871918s of 11.158221245s, submitted: 8
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 9.13 scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:26.374371+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 183 sent 181 num 2 unsent 2 sending 2
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:31:55.981931+0000 osd.2 (osd.2) 182 : cluster [DBG] 9.13 scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:31:56.013747+0000 osd.2 (osd.2) 183 : cluster [DBG] 9.13 scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66322432 unmapped: 729088 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 183) v1
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:31:55.981931+0000 osd.2 (osd.2) 182 : cluster [DBG] 9.13 scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:31:56.013747+0000 osd.2 (osd.2) 183 : cluster [DBG] 9.13 scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:27.374561+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 795577 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66330624 unmapped: 720896 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:28.374799+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66330624 unmapped: 720896 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 9.19 scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 9.19 scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:29.374913+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 185 sent 183 num 2 unsent 2 sending 2
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:31:59.004520+0000 osd.2 (osd.2) 184 : cluster [DBG] 9.19 scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:31:59.057445+0000 osd.2 (osd.2) 185 : cluster [DBG] 9.19 scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66338816 unmapped: 712704 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 185) v1
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:31:59.004520+0000 osd.2 (osd.2) 184 : cluster [DBG] 9.19 scrub starts
Oct 11 04:55:07 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:31:59.057445+0000 osd.2 (osd.2) 185 : cluster [DBG] 9.19 scrub ok
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:30.375058+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66338816 unmapped: 712704 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:31.375178+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66347008 unmapped: 704512 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:32.375311+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66347008 unmapped: 704512 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:33.375424+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66347008 unmapped: 704512 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:34.375600+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66355200 unmapped: 696320 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:35.375861+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66355200 unmapped: 696320 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:36.376004+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66355200 unmapped: 696320 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:37.376186+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66363392 unmapped: 688128 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:38.376406+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66363392 unmapped: 688128 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:39.376615+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 671744 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:40.376781+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 671744 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:41.376924+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 663552 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:42.377075+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66396160 unmapped: 655360 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:43.377315+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66396160 unmapped: 655360 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:44.377538+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66404352 unmapped: 647168 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:45.377779+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66404352 unmapped: 647168 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:46.377934+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66404352 unmapped: 647168 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:47.378115+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66404352 unmapped: 647168 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:48.378298+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66412544 unmapped: 638976 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:49.378601+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66412544 unmapped: 638976 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:50.378892+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66420736 unmapped: 630784 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:51.379124+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66420736 unmapped: 630784 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:52.379302+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66428928 unmapped: 622592 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:53.379469+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66428928 unmapped: 622592 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:54.379622+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66428928 unmapped: 622592 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:55.379821+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66437120 unmapped: 614400 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:56.379946+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66437120 unmapped: 614400 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:57.380165+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66445312 unmapped: 606208 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:58.380424+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66445312 unmapped: 606208 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:59.380583+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66445312 unmapped: 606208 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:00.380796+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66453504 unmapped: 598016 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:01.380983+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66453504 unmapped: 598016 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:02.381157+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66461696 unmapped: 589824 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:03.381310+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66461696 unmapped: 589824 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:04.381460+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66461696 unmapped: 589824 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:05.381755+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66469888 unmapped: 581632 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:06.381882+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66469888 unmapped: 581632 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:07.382034+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66469888 unmapped: 581632 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:08.382182+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66486272 unmapped: 565248 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:09.382309+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66486272 unmapped: 565248 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:10.382433+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66494464 unmapped: 557056 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:11.382578+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66494464 unmapped: 557056 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:12.382765+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66502656 unmapped: 548864 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:13.383028+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66502656 unmapped: 548864 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:14.383129+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66502656 unmapped: 548864 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:15.383311+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66502656 unmapped: 548864 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:16.383460+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66510848 unmapped: 540672 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:17.383628+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66510848 unmapped: 540672 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:18.383747+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66510848 unmapped: 540672 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:19.383895+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66519040 unmapped: 532480 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:20.384033+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66519040 unmapped: 532480 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:21.384216+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66527232 unmapped: 524288 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:22.384447+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66527232 unmapped: 524288 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:23.384621+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66527232 unmapped: 524288 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:24.384756+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66535424 unmapped: 516096 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:25.384955+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66535424 unmapped: 516096 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:26.385127+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66535424 unmapped: 516096 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:27.385305+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66535424 unmapped: 516096 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:28.385488+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66535424 unmapped: 516096 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:29.385651+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66543616 unmapped: 507904 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:30.385804+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66543616 unmapped: 507904 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:31.386011+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66551808 unmapped: 499712 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:32.386203+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66551808 unmapped: 499712 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:33.386431+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66551808 unmapped: 499712 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:34.386824+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66560000 unmapped: 491520 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:35.387053+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66560000 unmapped: 491520 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:36.387231+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66568192 unmapped: 483328 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:37.387434+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66568192 unmapped: 483328 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:38.387677+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66568192 unmapped: 483328 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:39.387834+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66576384 unmapped: 475136 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:40.388092+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66576384 unmapped: 475136 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:41.388284+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66576384 unmapped: 475136 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:42.388419+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66584576 unmapped: 466944 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:43.388598+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66584576 unmapped: 466944 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:44.388771+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66592768 unmapped: 458752 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:45.388962+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66592768 unmapped: 458752 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:46.389085+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66592768 unmapped: 458752 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:47.389267+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66600960 unmapped: 450560 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:48.389459+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66600960 unmapped: 450560 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:49.389604+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66600960 unmapped: 450560 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:50.389783+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66609152 unmapped: 442368 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:51.389945+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66609152 unmapped: 442368 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:52.390123+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66609152 unmapped: 442368 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:53.390266+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66617344 unmapped: 434176 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:54.390393+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66617344 unmapped: 434176 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:55.390609+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66625536 unmapped: 425984 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:56.390809+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66625536 unmapped: 425984 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:57.390967+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66633728 unmapped: 417792 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:58.391087+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66633728 unmapped: 417792 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:59.391221+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66633728 unmapped: 417792 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:00.391390+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66641920 unmapped: 409600 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:01.391540+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66641920 unmapped: 409600 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:02.391830+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66641920 unmapped: 409600 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:03.392018+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66650112 unmapped: 401408 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:04.392207+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66650112 unmapped: 401408 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:05.392398+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66658304 unmapped: 393216 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:06.392626+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66658304 unmapped: 393216 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:07.392807+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66658304 unmapped: 393216 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:08.392965+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66666496 unmapped: 385024 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:09.393164+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66666496 unmapped: 385024 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:10.393378+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66674688 unmapped: 376832 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:11.393519+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66674688 unmapped: 376832 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:12.393640+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66674688 unmapped: 376832 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:13.393807+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 368640 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:14.394039+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 368640 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:15.394249+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66691072 unmapped: 360448 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:16.394422+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66691072 unmapped: 360448 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:17.394560+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66691072 unmapped: 360448 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:18.394677+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66699264 unmapped: 352256 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:19.394817+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66699264 unmapped: 352256 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:20.394945+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66699264 unmapped: 352256 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:21.395078+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66707456 unmapped: 344064 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:22.395240+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66707456 unmapped: 344064 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:23.395428+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66715648 unmapped: 335872 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:24.395592+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66715648 unmapped: 335872 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:25.395790+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66715648 unmapped: 335872 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:26.395955+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66723840 unmapped: 327680 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:27.396111+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66723840 unmapped: 327680 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:28.396234+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66723840 unmapped: 327680 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:29.396404+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66732032 unmapped: 319488 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:30.396525+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66732032 unmapped: 319488 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:31.396656+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66732032 unmapped: 319488 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:32.396843+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66732032 unmapped: 319488 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:33.397023+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66740224 unmapped: 311296 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:34.397279+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66740224 unmapped: 311296 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:35.397517+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66748416 unmapped: 303104 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:36.397658+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66748416 unmapped: 303104 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:37.397799+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66756608 unmapped: 294912 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:38.397983+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66756608 unmapped: 294912 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:39.398116+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66756608 unmapped: 294912 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:40.398245+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66764800 unmapped: 286720 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:41.398399+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66764800 unmapped: 286720 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:42.398582+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66764800 unmapped: 286720 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:43.398781+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66772992 unmapped: 278528 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:44.398906+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66772992 unmapped: 278528 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:45.399118+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66781184 unmapped: 270336 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:46.399274+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66781184 unmapped: 270336 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:47.399409+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66789376 unmapped: 262144 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:48.399551+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66797568 unmapped: 253952 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:49.399674+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66797568 unmapped: 253952 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:50.399859+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66797568 unmapped: 253952 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:51.400008+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66805760 unmapped: 245760 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:52.400151+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66805760 unmapped: 245760 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:53.400388+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66813952 unmapped: 237568 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:54.400530+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66813952 unmapped: 237568 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:55.400758+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66813952 unmapped: 237568 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:56.401681+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66822144 unmapped: 229376 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:57.401880+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66838528 unmapped: 212992 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:58.402117+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66838528 unmapped: 212992 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:59.402308+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66846720 unmapped: 204800 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:00.402543+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66846720 unmapped: 204800 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:01.402745+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66854912 unmapped: 196608 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:02.402898+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66854912 unmapped: 196608 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:03.403066+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66854912 unmapped: 196608 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:04.403272+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66863104 unmapped: 188416 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:05.403517+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66863104 unmapped: 188416 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:06.403691+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66863104 unmapped: 188416 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:07.403950+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66871296 unmapped: 180224 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:08.404135+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66871296 unmapped: 180224 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:09.404273+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66879488 unmapped: 172032 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:10.405465+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66879488 unmapped: 172032 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:11.405625+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66887680 unmapped: 163840 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:12.405789+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66887680 unmapped: 163840 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:13.405991+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66887680 unmapped: 163840 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:14.406131+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66895872 unmapped: 155648 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:15.406324+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66895872 unmapped: 155648 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:16.406529+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66895872 unmapped: 155648 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:17.406706+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66904064 unmapped: 147456 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:18.406856+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66904064 unmapped: 147456 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:19.407055+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66912256 unmapped: 139264 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:20.407320+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66912256 unmapped: 139264 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:21.407521+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66887680 unmapped: 163840 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:22.407633+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66887680 unmapped: 163840 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:23.407785+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66895872 unmapped: 155648 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:24.407931+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66895872 unmapped: 155648 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:25.408185+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66895872 unmapped: 155648 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:26.408391+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66904064 unmapped: 147456 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:27.408513+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66904064 unmapped: 147456 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:28.408635+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66912256 unmapped: 139264 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:29.408787+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66912256 unmapped: 139264 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:30.408963+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66912256 unmapped: 139264 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:31.409105+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66920448 unmapped: 131072 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:32.409262+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66920448 unmapped: 131072 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:33.409390+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66920448 unmapped: 131072 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:34.409561+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66928640 unmapped: 122880 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:35.409708+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66928640 unmapped: 122880 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:36.409958+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66936832 unmapped: 114688 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:37.410156+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66936832 unmapped: 114688 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:38.410301+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66936832 unmapped: 114688 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:39.410396+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66945024 unmapped: 106496 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:40.410542+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66945024 unmapped: 106496 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:41.410665+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66945024 unmapped: 106496 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:42.410807+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66953216 unmapped: 98304 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:43.411021+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66953216 unmapped: 98304 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:44.411164+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66961408 unmapped: 90112 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:45.411371+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66961408 unmapped: 90112 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:46.411580+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66961408 unmapped: 90112 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:47.411733+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66961408 unmapped: 90112 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:48.411873+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66961408 unmapped: 90112 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:49.411985+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66969600 unmapped: 81920 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:50.412134+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66969600 unmapped: 81920 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:51.412297+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66969600 unmapped: 81920 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:52.412481+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66977792 unmapped: 73728 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:53.412627+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66977792 unmapped: 73728 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:54.412735+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66977792 unmapped: 73728 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:55.412918+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66985984 unmapped: 65536 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:56.413070+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66985984 unmapped: 65536 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:57.413322+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66994176 unmapped: 57344 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:58.413511+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66994176 unmapped: 57344 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:59.413667+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66994176 unmapped: 57344 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:00.413808+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67002368 unmapped: 49152 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:01.414052+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67002368 unmapped: 49152 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:02.414162+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67010560 unmapped: 40960 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:03.414284+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67018752 unmapped: 32768 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:04.414421+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67018752 unmapped: 32768 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:05.414592+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67026944 unmapped: 24576 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:06.414732+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67026944 unmapped: 24576 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:07.414921+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67026944 unmapped: 24576 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:08.415106+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:09.415253+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67035136 unmapped: 16384 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:10.415455+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67035136 unmapped: 16384 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:11.415597+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67035136 unmapped: 16384 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:12.415741+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67043328 unmapped: 8192 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:13.415886+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67035136 unmapped: 16384 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:14.416028+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67035136 unmapped: 16384 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:15.416296+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67043328 unmapped: 8192 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:16.416508+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67043328 unmapped: 8192 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:17.416624+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67051520 unmapped: 0 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:18.416740+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67051520 unmapped: 0 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:19.416901+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67051520 unmapped: 0 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:20.417012+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67059712 unmapped: 1040384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:21.417162+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67059712 unmapped: 1040384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:22.417313+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67067904 unmapped: 1032192 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:23.417518+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67067904 unmapped: 1032192 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:24.417675+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67067904 unmapped: 1032192 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:25.417891+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67076096 unmapped: 1024000 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:26.418054+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67084288 unmapped: 1015808 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:27.418190+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67084288 unmapped: 1015808 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:28.418418+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67084288 unmapped: 1015808 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:29.418582+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67084288 unmapped: 1015808 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:30.418714+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67092480 unmapped: 1007616 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:31.418834+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67092480 unmapped: 1007616 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:32.418982+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67100672 unmapped: 999424 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:33.419189+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67100672 unmapped: 999424 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:34.419372+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67100672 unmapped: 999424 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:35.419514+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67108864 unmapped: 991232 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:36.419631+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67108864 unmapped: 991232 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:37.419759+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67108864 unmapped: 991232 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:38.419897+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67117056 unmapped: 983040 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:39.420064+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67117056 unmapped: 983040 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:40.420229+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67125248 unmapped: 974848 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:41.420394+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67125248 unmapped: 974848 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:42.420567+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67125248 unmapped: 974848 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:43.420726+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 966656 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:44.420872+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 966656 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:45.421083+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 958464 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:46.422699+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 958464 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:47.422830+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 958464 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:48.422938+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 950272 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:49.423086+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 950272 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:50.423211+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 950272 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:51.423369+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67158016 unmapped: 942080 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:52.423488+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67166208 unmapped: 933888 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:53.423637+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67182592 unmapped: 917504 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:54.423862+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67182592 unmapped: 917504 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:55.423998+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67182592 unmapped: 917504 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:56.424148+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 909312 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:57.424313+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 901120 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:58.424577+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 892928 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:59.424745+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 892928 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:00.424894+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 892928 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:01.425031+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 884736 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:02.425198+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 884736 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:03.425368+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 884736 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:04.425527+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67223552 unmapped: 876544 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:05.425731+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67223552 unmapped: 876544 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:06.425885+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67223552 unmapped: 876544 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:07.426078+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67239936 unmapped: 860160 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:08.426223+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67239936 unmapped: 860160 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:09.426388+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67239936 unmapped: 860160 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:10.426535+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67248128 unmapped: 851968 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:11.426687+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67248128 unmapped: 851968 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:12.426800+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67248128 unmapped: 851968 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:13.426942+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67264512 unmapped: 835584 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:14.427121+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67264512 unmapped: 835584 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:15.427380+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67272704 unmapped: 827392 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:16.427502+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67272704 unmapped: 827392 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:17.427685+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 819200 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:18.427844+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 819200 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:19.427968+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 819200 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:20.428142+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 811008 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:21.428385+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 811008 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:22.428540+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 811008 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:23.428709+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 802816 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:24.428859+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 802816 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:25.429053+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67305472 unmapped: 794624 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:26.429201+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67313664 unmapped: 786432 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:27.429347+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67313664 unmapped: 786432 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:28.429468+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67313664 unmapped: 786432 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:29.429582+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67313664 unmapped: 786432 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:30.429692+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67321856 unmapped: 778240 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 5376 writes, 23K keys, 5376 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 5376 writes, 765 syncs, 7.03 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5376 writes, 23K keys, 5376 commit groups, 1.0 writes per commit group, ingest: 18.24 MB, 0.03 MB/s
                                           Interval WAL: 5376 writes, 765 syncs, 7.03 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56328d19f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56328d19f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56328d19f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56328d19f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56328d19f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56328d19f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56328d19f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56328d19f090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56328d19f090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56328d19f090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.009       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.009       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.009       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56328d19f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56328d19f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:31.429848+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67379200 unmapped: 720896 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:32.430024+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67387392 unmapped: 712704 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:33.430201+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67387392 unmapped: 712704 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:34.430456+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67387392 unmapped: 712704 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:35.430724+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67387392 unmapped: 712704 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:36.430916+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67403776 unmapped: 696320 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:37.431046+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67403776 unmapped: 696320 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:38.431185+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67403776 unmapped: 696320 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:39.431382+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67411968 unmapped: 688128 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:40.431534+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67411968 unmapped: 688128 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:41.431815+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67420160 unmapped: 679936 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:42.432053+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67420160 unmapped: 679936 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:43.432288+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67420160 unmapped: 679936 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:44.432570+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67428352 unmapped: 671744 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:45.432926+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67428352 unmapped: 671744 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:46.433150+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67436544 unmapped: 663552 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:47.433426+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67436544 unmapped: 663552 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:48.433622+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67436544 unmapped: 663552 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:49.433848+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67444736 unmapped: 655360 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:50.434071+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67444736 unmapped: 655360 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:51.434453+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67452928 unmapped: 647168 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:52.434709+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67452928 unmapped: 647168 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:53.434961+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67461120 unmapped: 638976 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:54.435200+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67461120 unmapped: 638976 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:55.435416+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67461120 unmapped: 638976 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:56.435663+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67461120 unmapped: 638976 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:57.435923+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67469312 unmapped: 630784 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:58.436219+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67469312 unmapped: 630784 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:59.436501+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67469312 unmapped: 630784 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:00.436791+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67477504 unmapped: 622592 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:01.437074+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67469312 unmapped: 630784 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:02.437260+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67477504 unmapped: 622592 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:03.438713+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67477504 unmapped: 622592 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:04.438952+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67477504 unmapped: 622592 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:05.439267+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67485696 unmapped: 614400 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:06.439524+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 606208 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:07.439712+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 606208 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:08.439961+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 598016 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:09.440138+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 598016 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:10.440311+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67510272 unmapped: 589824 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:11.440493+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67510272 unmapped: 589824 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:12.440656+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67510272 unmapped: 589824 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:13.440793+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67518464 unmapped: 581632 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:14.441055+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67518464 unmapped: 581632 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:15.441222+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 573440 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:16.441412+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 573440 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:17.441727+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 573440 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:18.441875+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 565248 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:19.442107+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 565248 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:20.442255+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 565248 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:21.442407+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 557056 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:22.442609+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 557056 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:23.442784+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67559424 unmapped: 540672 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:24.442905+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67559424 unmapped: 540672 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:25.443121+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67559424 unmapped: 540672 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 360.115478516s of 360.131134033s, submitted: 4
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:26.443250+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67747840 unmapped: 352256 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:27.443406+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67846144 unmapped: 253952 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:28.443574+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67846144 unmapped: 253952 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:29.443724+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67846144 unmapped: 253952 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:30.443891+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67846144 unmapped: 253952 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:31.444045+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67846144 unmapped: 253952 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:32.444222+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67846144 unmapped: 253952 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:33.444381+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67846144 unmapped: 253952 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:34.444529+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67846144 unmapped: 253952 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:35.444736+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67846144 unmapped: 253952 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:36.444926+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67846144 unmapped: 253952 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:37.445161+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67854336 unmapped: 245760 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:38.445383+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67854336 unmapped: 245760 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:39.445514+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67854336 unmapped: 245760 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:40.445649+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67862528 unmapped: 237568 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:41.445834+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67862528 unmapped: 237568 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:42.445990+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67870720 unmapped: 229376 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:43.446159+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67870720 unmapped: 229376 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:44.446318+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67878912 unmapped: 221184 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:45.446528+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67878912 unmapped: 221184 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:46.446661+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67878912 unmapped: 221184 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:47.446784+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67887104 unmapped: 212992 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:48.446885+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67878912 unmapped: 221184 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:49.446991+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67878912 unmapped: 221184 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:50.447104+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67887104 unmapped: 212992 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:51.447212+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67887104 unmapped: 212992 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:52.447349+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67895296 unmapped: 204800 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:53.447489+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67895296 unmapped: 204800 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:54.447670+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67895296 unmapped: 204800 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:55.447903+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67903488 unmapped: 196608 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:56.448118+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67903488 unmapped: 196608 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:57.448301+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67903488 unmapped: 196608 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:58.448421+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67919872 unmapped: 180224 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:59.448548+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67919872 unmapped: 180224 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:00.448704+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67928064 unmapped: 172032 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:01.448873+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67928064 unmapped: 172032 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:02.449036+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 163840 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:03.449223+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 163840 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:04.449371+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67944448 unmapped: 155648 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:05.449521+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67944448 unmapped: 155648 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:06.449650+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67944448 unmapped: 155648 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:07.449816+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 147456 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:08.449981+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 147456 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:09.450186+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67960832 unmapped: 139264 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:10.450380+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67960832 unmapped: 139264 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:11.450556+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67960832 unmapped: 139264 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:12.450730+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67969024 unmapped: 131072 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:13.450898+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67969024 unmapped: 131072 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:14.451057+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67977216 unmapped: 122880 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:15.451260+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67977216 unmapped: 122880 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:16.451411+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67985408 unmapped: 114688 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:17.451543+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67985408 unmapped: 114688 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:18.451686+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67985408 unmapped: 114688 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:19.451816+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67993600 unmapped: 106496 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:20.451953+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67993600 unmapped: 106496 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:21.452106+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68009984 unmapped: 90112 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:22.452272+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68009984 unmapped: 90112 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:23.452419+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:24.452654+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:25.452885+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68026368 unmapped: 73728 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:26.453052+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 65536 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:27.453168+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 65536 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:28.453282+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:29.453424+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:30.453574+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:31.453709+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:32.453820+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:33.453939+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:34.454054+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:35.454213+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:36.454367+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:37.454558+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:38.454685+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:39.454843+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:40.455013+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:41.455196+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:42.455411+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:43.455563+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:44.455728+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:45.455928+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:46.456097+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:47.456292+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:48.456495+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:49.456669+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:50.456872+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:51.457099+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:52.457268+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:53.457412+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:54.457632+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:55.457848+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:56.457988+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:57.458157+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:58.458410+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:59.458621+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:00.458774+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:01.458933+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:02.459096+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:03.459239+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:04.459400+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:05.459561+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:06.460152+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:07.460289+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:08.460461+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:09.460667+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68059136 unmapped: 40960 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:10.460853+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68059136 unmapped: 40960 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:11.461016+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 32768 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:12.461145+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 32768 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:13.461354+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 32768 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:14.461597+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 32768 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:15.461745+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 32768 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:16.461881+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 32768 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:17.462399+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 32768 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:18.462613+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 32768 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:19.462734+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 32768 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:20.462899+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 32768 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:21.463035+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 32768 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:22.463170+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 32768 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:23.463351+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 32768 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:24.463461+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 32768 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:25.463635+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 16384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:26.463769+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 16384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:27.464056+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 16384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:28.464494+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 16384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:29.464795+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 16384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:30.464987+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 16384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:31.465162+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 16384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:32.465374+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 16384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:33.465526+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 16384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:34.465852+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:35.466045+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 16384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:36.466199+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 16384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:37.466466+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 16384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:38.466616+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 16384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:39.466788+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 16384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:40.466919+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 16384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:41.467072+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 16384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:42.467256+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 16384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:43.467409+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 16384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:44.467584+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 16384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:45.467759+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 16384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:46.467949+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 16384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:47.468107+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 16384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:48.468233+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 16384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:49.468539+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 16384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:50.468985+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 16384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:51.469166+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 16384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:52.469372+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 16384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:53.469520+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 16384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:54.469816+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 16384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:55.469988+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 16384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:56.470205+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68091904 unmapped: 8192 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:57.470406+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68091904 unmapped: 8192 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:58.470641+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68091904 unmapped: 8192 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:59.470854+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68091904 unmapped: 8192 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:00.471091+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68091904 unmapped: 8192 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:01.471244+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68091904 unmapped: 8192 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:02.471411+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68091904 unmapped: 8192 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:03.471560+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68091904 unmapped: 8192 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:04.471666+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68091904 unmapped: 8192 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:05.471823+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68091904 unmapped: 8192 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:06.471986+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68091904 unmapped: 8192 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:07.473399+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68091904 unmapped: 8192 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:08.473521+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68091904 unmapped: 8192 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:09.473659+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68091904 unmapped: 8192 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:10.473792+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68091904 unmapped: 8192 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:11.473952+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68091904 unmapped: 8192 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:12.474125+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 0 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:13.474303+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 0 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:14.474486+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 0 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:15.474631+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 0 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:16.474763+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 0 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:17.474969+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 0 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:18.475089+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 0 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:19.475311+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 0 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:20.475564+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 0 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:21.475692+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 0 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:22.475888+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 0 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:23.476039+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 0 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:24.476194+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 0 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:25.476369+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 0 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:26.476509+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 0 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:27.476637+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 0 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:28.476727+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 0 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:29.476850+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 0 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:30.476965+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 0 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:31.477080+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 1040384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:32.477204+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:33.477398+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:34.477521+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:35.477668+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:36.477805+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:37.477997+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:38.478122+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:39.478272+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:40.478392+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:41.478515+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:42.478685+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:43.478858+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 1040384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:44.479076+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 1040384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:45.479312+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 1040384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:46.479449+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:47.479605+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:48.479751+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:49.479916+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:50.480087+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:51.480279+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:52.480408+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:53.480570+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:54.480833+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:55.481401+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:56.485024+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:57.485267+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:58.485415+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:59.485546+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:00.485678+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:01.485838+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:02.486089+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:03.486270+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:04.486418+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:05.486608+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:06.486730+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:07.486834+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:08.486948+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:09.487182+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:10.487308+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:11.487399+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:12.487548+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:13.487689+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:14.487801+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:15.487957+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:16.488104+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:17.488392+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:18.488584+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:19.488745+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:20.488911+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:21.489096+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:22.489235+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:23.489371+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:24.489477+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:25.489630+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:26.489776+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:27.489930+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:28.490086+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:29.490242+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:30.490422+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:31.490565+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:32.490707+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:33.490837+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:34.490990+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:35.491205+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:36.491386+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:37.491517+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:38.491601+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:39.491781+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:40.491921+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:41.492035+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:42.492173+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:43.492309+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:44.492524+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:45.492715+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:46.492856+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:47.493180+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:48.493503+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:49.493651+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:50.493806+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:51.493940+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:52.494120+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:53.494286+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:54.494441+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:55.494626+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:56.494739+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:57.494862+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:58.494990+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:59.495135+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:00.495408+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:01.495783+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:02.496004+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:03.496191+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:04.496372+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:05.496579+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:06.496828+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:07.497042+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:08.497248+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:09.497472+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:10.497722+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:11.497844+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:12.497983+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:13.498164+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:14.498399+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:15.498621+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:16.498757+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:17.498864+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:18.498963+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:19.499096+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:20.499231+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:21.499357+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:22.499515+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:23.499654+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:24.499821+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:25.500030+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:26.500172+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:27.500426+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:28.500586+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:29.500775+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:30.500936+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:31.501098+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:32.501252+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:33.501413+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:34.501605+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:35.501808+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:36.501998+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:37.502144+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:38.502320+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:39.502483+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:40.502613+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:41.502695+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:42.502826+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:43.502915+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:44.503134+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:45.503304+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:46.503385+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:47.503534+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:48.503679+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:49.503806+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:50.503950+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:51.504112+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:52.504251+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:53.504425+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:54.504563+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:55.504759+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:56.504953+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:57.505089+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:58.505235+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:59.505387+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:00.505540+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:01.505702+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:02.505817+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:03.505969+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:04.506151+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:05.506380+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:06.506527+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:07.506676+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:08.506809+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:09.506985+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:10.507126+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:11.507262+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:12.507414+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:13.507545+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:14.507659+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:15.507825+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:16.507967+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68182016 unmapped: 966656 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:17.508115+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68182016 unmapped: 966656 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:18.508263+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68182016 unmapped: 966656 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:19.508439+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68182016 unmapped: 966656 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:20.508579+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68182016 unmapped: 966656 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:21.508733+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68182016 unmapped: 966656 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:22.508874+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68182016 unmapped: 966656 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:23.509050+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68182016 unmapped: 966656 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:24.509201+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68182016 unmapped: 966656 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:25.509356+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68182016 unmapped: 966656 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:26.509470+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68182016 unmapped: 966656 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:27.509610+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68182016 unmapped: 966656 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:28.509724+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68182016 unmapped: 966656 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:29.509871+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68182016 unmapped: 966656 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:30.510038+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68182016 unmapped: 966656 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:31.510162+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68182016 unmapped: 966656 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:32.510280+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68182016 unmapped: 966656 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:33.510404+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68182016 unmapped: 966656 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:34.510553+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68182016 unmapped: 966656 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:35.510708+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68182016 unmapped: 966656 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:36.510844+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68182016 unmapped: 966656 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:37.510975+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68182016 unmapped: 966656 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:38.511188+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68182016 unmapped: 966656 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:39.511342+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68182016 unmapped: 966656 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:40.511473+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68182016 unmapped: 966656 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:41.511611+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68182016 unmapped: 966656 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:42.511774+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:43.511930+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:44.512079+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:45.512172+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:46.512253+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:47.512383+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:48.512564+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:49.512684+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:50.512843+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:51.513225+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:52.513400+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 950272 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:53.513609+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 950272 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:54.513764+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 950272 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:55.513948+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 950272 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:56.514465+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 950272 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:57.514604+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:58.514764+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:59.514922+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:00.515080+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:01.515228+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:02.515377+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:03.515511+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:04.515653+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:05.515848+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:06.516023+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:07.516168+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:08.516280+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:09.516417+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:10.516548+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:11.516670+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:12.516825+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:13.516935+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:14.517141+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:15.517439+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:16.517557+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:17.517676+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:18.517798+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:19.517915+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:20.518100+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:21.518252+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:22.518385+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:23.518506+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:24.518694+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:25.518864+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:26.519042+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:27.519168+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:28.519300+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:29.519444+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:30.519578+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:31.519728+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:32.519899+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:33.520031+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:34.520146+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:35.520296+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:36.520482+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:37.521144+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:38.521366+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:39.521493+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:40.521652+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:41.521792+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:42.521913+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:43.522034+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:44.522181+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:45.522419+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:46.522568+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:47.522702+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:48.522820+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:49.522933+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:50.523093+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:51.523248+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:52.523435+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:53.523778+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:54.523943+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:55.524177+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:56.524371+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:57.524498+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:58.524671+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:59.524794+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:00.524971+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:01.525140+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:02.525291+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:03.525386+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:04.525542+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:05.525723+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:06.525883+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:07.526038+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:08.526175+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:09.526325+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:10.526495+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:11.526657+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:12.530474+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:13.530634+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:14.530755+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:15.530926+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:16.531090+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:17.531213+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:18.531349+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:19.531458+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:20.531571+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:21.531705+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:22.531853+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:23.532410+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:24.532675+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:25.532882+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:26.533056+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:27.533777+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:28.534008+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:29.534429+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:30.534622+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:31.534837+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:32.535020+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:33.535231+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:34.535413+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:35.535697+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:36.536086+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:37.536284+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:38.536477+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:39.536600+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:40.536785+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:41.536969+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:42.537178+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:43.537468+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:44.537972+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:45.538593+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:46.538724+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:47.538906+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:48.539594+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:49.539874+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:50.540230+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:51.540745+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:52.541099+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:53.541324+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:54.541599+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:55.541912+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:56.542579+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:57.543292+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:58.543727+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:59.544179+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:00.544662+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:01.545081+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:02.545282+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:03.545675+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:04.546539+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:05.547455+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:06.547684+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:07.548028+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:08.548179+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:09.548369+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:10.548525+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:11.548646+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:12.549118+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:13.549399+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68231168 unmapped: 917504 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:14.549584+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68231168 unmapped: 917504 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:15.549843+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68231168 unmapped: 917504 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:16.550030+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68231168 unmapped: 917504 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:17.551414+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68231168 unmapped: 917504 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:18.551594+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68231168 unmapped: 917504 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:19.552395+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68231168 unmapped: 917504 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:20.552533+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68231168 unmapped: 917504 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:21.552937+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68231168 unmapped: 917504 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:22.554280+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68231168 unmapped: 917504 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:23.554482+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68231168 unmapped: 917504 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:24.555466+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68231168 unmapped: 917504 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:25.555703+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68231168 unmapped: 917504 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:26.555859+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68239360 unmapped: 909312 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:27.556625+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68239360 unmapped: 909312 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:28.556820+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68239360 unmapped: 909312 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:29.557079+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68239360 unmapped: 909312 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:30.557422+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68239360 unmapped: 909312 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 5556 writes, 23K keys, 5556 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5556 writes, 855 syncs, 6.50 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
                                           Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56328d19f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56328d19f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56328d19f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56328d19f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56328d19f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56328d19f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56328d19f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56328d19f090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56328d19f090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56328d19f090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.009       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.009       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.009       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56328d19f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56328d19f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:31.557688+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 876544 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:32.557885+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 876544 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:33.558027+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 876544 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:34.558204+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 876544 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:35.558567+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 876544 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:36.558876+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:37.559096+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:38.559414+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:39.559690+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:40.560046+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:41.560237+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:42.560416+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:43.560592+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:44.560750+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:45.560948+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:46.561098+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:47.561281+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:48.561549+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:49.561846+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:50.562033+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:51.562169+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:52.562360+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:53.562492+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:54.562622+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:55.562803+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:56.562925+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:57.563110+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:58.563296+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:59.563432+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:00.563649+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:01.563813+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:02.563986+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:03.564147+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:04.564425+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:05.564659+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:06.564888+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:07.565121+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:08.565486+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 rsyslogd[1004]: imjournal from <np0005480869:ceph-osd>: begin to drop messages due to rate-limiting
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:09.565863+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:10.566141+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:11.566459+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:12.566801+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:13.566978+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:14.567226+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:15.567542+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:16.567912+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:17.568220+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:18.568544+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:19.568889+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:20.569145+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:21.569544+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:22.569682+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:23.569871+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:24.570068+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:25.570434+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 599.770324707s of 600.150451660s, submitted: 90
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:26.570624+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 679936 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:27.570786+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 1654784 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:28.570939+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 1654784 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:29.571097+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 1654784 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:30.571535+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 1654784 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:31.571823+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 1654784 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:32.572009+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 1654784 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:33.572200+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 1654784 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:34.572388+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 1654784 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:35.572621+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 1654784 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:36.572772+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 1654784 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:37.573041+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 1654784 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:38.573440+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 1654784 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:39.573708+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 1654784 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:40.573983+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 1654784 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:41.574118+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 1654784 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:42.574420+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 1654784 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:43.574561+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 1654784 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:44.574708+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 1654784 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:45.574905+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 1654784 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:46.575123+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 1646592 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:47.575249+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 1646592 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:48.575866+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 1646592 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:49.576023+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 1646592 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:50.576224+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 1646592 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:51.576596+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 1646592 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:52.577025+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 1646592 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:53.578675+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 1646592 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:54.579793+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 1646592 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:55.580996+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 1646592 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:56.581145+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 1646592 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:57.583042+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 1646592 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:58.583218+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 1646592 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:59.583443+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 1646592 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:00.584018+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 1646592 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:01.584177+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 1646592 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:02.584357+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 1646592 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:03.584508+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 1646592 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:04.584641+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 1646592 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:05.585088+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 1646592 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:06.585426+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 1646592 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:07.585743+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 1646592 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:08.585983+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 1646592 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:09.586234+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 1646592 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:10.586425+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 1646592 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:11.586604+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 1646592 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:12.586789+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 1646592 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:13.586978+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 1646592 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:14.587308+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 1646592 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:15.587834+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 1646592 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:16.588112+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 1646592 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:17.588415+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68558848 unmapped: 1638400 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:18.588668+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68558848 unmapped: 1638400 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:19.588888+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68558848 unmapped: 1638400 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:20.632587+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68558848 unmapped: 1638400 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:21.632809+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68558848 unmapped: 1638400 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:22.632993+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68558848 unmapped: 1638400 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:23.633225+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68558848 unmapped: 1638400 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:24.633424+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68558848 unmapped: 1638400 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:25.633565+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 1630208 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:26.633714+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 1630208 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:27.633848+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 1630208 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:28.634002+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:29.634179+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 1630208 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:30.634370+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 1630208 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:31.634507+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 1630208 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:32.634686+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 1630208 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:33.634883+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 1630208 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:34.635053+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 1630208 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:35.635214+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 1630208 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:36.635452+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 1630208 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:37.635656+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 1630208 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:38.635798+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 1630208 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:39.635902+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 1630208 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:40.636064+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 1630208 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:41.636205+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 1630208 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:42.636373+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 1630208 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:43.636514+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 1630208 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:44.636676+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 1630208 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:45.636928+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 1630208 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:46.637137+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 1630208 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:47.637308+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 1630208 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:48.637477+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:49.637659+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:50.637841+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:51.637989+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:52.638156+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:53.638321+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:54.638547+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:55.638710+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:56.638884+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:57.642158+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:58.643420+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:59.643723+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:00.644616+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:01.645202+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:02.645887+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:03.646386+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:04.646572+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:05.646810+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:06.647027+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:07.647161+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:08.647285+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:09.647396+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:10.647651+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:11.647813+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:12.648034+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:13.648166+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:14.648304+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:15.648594+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:16.648766+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:17.648947+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:18.649150+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:19.649450+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:20.649662+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:21.649847+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:22.650014+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:23.650226+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:24.650427+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:25.650802+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:26.650975+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:27.651223+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:28.651418+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:29.651631+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:30.651816+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:31.651965+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:32.652148+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:33.652368+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:34.652536+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:35.652719+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:36.652988+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:37.653150+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:38.653374+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:39.653549+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:40.653718+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:41.653898+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:42.654081+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:43.654308+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:44.654474+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:45.654654+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:46.654810+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:47.654967+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:48.655110+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:49.655288+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:50.655473+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:51.655635+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:52.655790+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:53.656029+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:54.656214+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:55.656418+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:56.656586+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:57.656835+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:58.657007+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:59.657253+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:00.657422+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:01.657601+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:02.657780+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:03.659459+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:04.659661+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:05.660543+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:06.660692+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:07.661659+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:08.661846+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:09.662630+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:10.662833+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:11.663090+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:12.663259+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:13.663511+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:14.663660+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:15.663911+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:16.664604+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:17.664725+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:18.664846+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:19.664994+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:20.665183+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:21.665313+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:22.665610+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:23.665744+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:24.666058+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:25.666368+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:26.666610+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:27.666776+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:28.667013+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:29.667256+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:30.667533+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:31.667723+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:32.667966+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:33.668145+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:34.668369+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:35.668622+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:36.668804+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:37.668987+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:38.669165+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:39.669388+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:40.669581+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:41.669754+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:42.669923+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:43.670047+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:44.670178+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:45.670444+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:46.670584+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:47.670756+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:48.670948+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:49.671122+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:50.671299+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:51.671440+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:52.671601+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:53.671784+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:54.671954+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:55.672106+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:56.672250+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:57.672446+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:58.672620+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:59.672841+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:00.673024+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:01.673238+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:02.673447+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:03.673666+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:04.673903+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:05.674176+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:06.674403+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:07.674597+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:08.674769+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:09.674930+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:10.675077+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:11.675248+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:12.675409+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:13.675552+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:14.675746+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:15.675975+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:16.676165+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:17.676391+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:18.676607+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:19.676823+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:20.677038+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:21.677195+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:22.677389+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68632576 unmapped: 1564672 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:23.677563+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68632576 unmapped: 1564672 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:24.677682+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68632576 unmapped: 1564672 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:25.678133+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 114 handle_osd_map epochs [114,115], i have 114, src has [1,115]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 239.392028809s of 239.705917358s, submitted: 90
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:26.678281+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 800899 data_alloc: 218103808 data_used: 167936
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 115 handle_osd_map epochs [115,116], i have 115, src has [1,116]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x56328f51e400
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 69746688 unmapped: 17235968 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:27.678384+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 116 heartbeat osd_stat(store_statfs(0x4fcac3000/0x0/0x4ffc00000, data 0xb04c4/0x159000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 116 ms_handle_reset con 0x56328f51e400 session 0x56328eb29860
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 69754880 unmapped: 17227776 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x56328e23cc00
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _renew_subs
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 116 handle_osd_map epochs [117,117], i have 116, src has [1,117]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:28.678502+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 69910528 unmapped: 17072128 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:29.678671+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _renew_subs
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 117 handle_osd_map epochs [118,118], i have 117, src has [1,118]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 118 ms_handle_reset con 0x56328e23cc00 session 0x56328eb29a40
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 69918720 unmapped: 17063936 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:30.678880+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fb64d000/0x0/0x4ffc00000, data 0x1523bf6/0x15cf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fb64d000/0x0/0x4ffc00000, data 0x1523bf6/0x15cf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 69935104 unmapped: 17047552 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:31.679052+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 954436 data_alloc: 218103808 data_used: 176128
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 69935104 unmapped: 17047552 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:32.679244+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 69935104 unmapped: 17047552 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:33.679433+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fb64d000/0x0/0x4ffc00000, data 0x1523bf6/0x15cf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 69935104 unmapped: 17047552 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:34.679624+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 69935104 unmapped: 17047552 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:35.680078+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 69935104 unmapped: 17047552 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:36.680250+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 954436 data_alloc: 218103808 data_used: 176128
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 118 handle_osd_map epochs [119,119], i have 118, src has [1,119]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.992850304s of 11.152420044s, submitted: 32
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 69951488 unmapped: 17031168 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:37.680445+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 119 heartbeat osd_stat(store_statfs(0x4fb64b000/0x0/0x4ffc00000, data 0x1525659/0x15d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 69951488 unmapped: 17031168 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:38.680624+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 119 heartbeat osd_stat(store_statfs(0x4fb64b000/0x0/0x4ffc00000, data 0x1525659/0x15d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 69951488 unmapped: 17031168 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:39.680758+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 69951488 unmapped: 17031168 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:40.680948+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 69951488 unmapped: 17031168 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 119 heartbeat osd_stat(store_statfs(0x4fb64b000/0x0/0x4ffc00000, data 0x1525659/0x15d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:41.681165+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956738 data_alloc: 218103808 data_used: 176128
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 69951488 unmapped: 17031168 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:42.681275+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 69951488 unmapped: 17031168 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:43.681431+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 69951488 unmapped: 17031168 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:44.681590+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 69951488 unmapped: 17031168 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:45.681818+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 119 heartbeat osd_stat(store_statfs(0x4fb64b000/0x0/0x4ffc00000, data 0x1525659/0x15d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 69951488 unmapped: 17031168 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:46.682015+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956738 data_alloc: 218103808 data_used: 176128
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 69951488 unmapped: 17031168 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:47.682137+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 69951488 unmapped: 17031168 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:48.682305+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 69951488 unmapped: 17031168 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:49.682534+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 69951488 unmapped: 17031168 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:50.682666+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 119 heartbeat osd_stat(store_statfs(0x4fb64b000/0x0/0x4ffc00000, data 0x1525659/0x15d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 69951488 unmapped: 17031168 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 119 heartbeat osd_stat(store_statfs(0x4fb64b000/0x0/0x4ffc00000, data 0x1525659/0x15d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:51.682827+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956738 data_alloc: 218103808 data_used: 176128
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 69951488 unmapped: 17031168 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:52.682943+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 69951488 unmapped: 17031168 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:53.683061+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 69951488 unmapped: 17031168 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:54.683213+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 119 heartbeat osd_stat(store_statfs(0x4fb64b000/0x0/0x4ffc00000, data 0x1525659/0x15d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 69951488 unmapped: 17031168 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:55.683391+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 69951488 unmapped: 17031168 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:56.683538+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 119 heartbeat osd_stat(store_statfs(0x4fb64b000/0x0/0x4ffc00000, data 0x1525659/0x15d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956738 data_alloc: 218103808 data_used: 176128
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 69951488 unmapped: 17031168 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:57.683707+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 69951488 unmapped: 17031168 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:58.683866+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 69951488 unmapped: 17031168 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 119 heartbeat osd_stat(store_statfs(0x4fb64b000/0x0/0x4ffc00000, data 0x1525659/0x15d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:59.684034+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 119 heartbeat osd_stat(store_statfs(0x4fb64b000/0x0/0x4ffc00000, data 0x1525659/0x15d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 69951488 unmapped: 17031168 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:00.684169+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 119 heartbeat osd_stat(store_statfs(0x4fb64b000/0x0/0x4ffc00000, data 0x1525659/0x15d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 69951488 unmapped: 17031168 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:01.684366+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 119 heartbeat osd_stat(store_statfs(0x4fb64b000/0x0/0x4ffc00000, data 0x1525659/0x15d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956738 data_alloc: 218103808 data_used: 176128
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 119 heartbeat osd_stat(store_statfs(0x4fb64b000/0x0/0x4ffc00000, data 0x1525659/0x15d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 69959680 unmapped: 17022976 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:02.684527+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 119 heartbeat osd_stat(store_statfs(0x4fb64b000/0x0/0x4ffc00000, data 0x1525659/0x15d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 119 heartbeat osd_stat(store_statfs(0x4fb64b000/0x0/0x4ffc00000, data 0x1525659/0x15d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 69959680 unmapped: 17022976 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:03.684707+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x56328ea89400
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 27.131505966s of 27.142993927s, submitted: 13
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 119 handle_osd_map epochs [119,120], i have 119, src has [1,120]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 120 ms_handle_reset con 0x56328ea89400 session 0x56328ec57680
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 69984256 unmapped: 16998400 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:04.684862+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x56328e23cc00
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _renew_subs
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 120 handle_osd_map epochs [121,121], i have 120, src has [1,121]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 70098944 unmapped: 16883712 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 121 ms_handle_reset con 0x56328e23cc00 session 0x5632912e41e0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:05.685033+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x56328f51e400
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 121 handle_osd_map epochs [121,122], i have 121, src has [1,122]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 71155712 unmapped: 15826944 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 122 ms_handle_reset con 0x56328f51e400 session 0x5632912e5a40
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:06.685204+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977948 data_alloc: 218103808 data_used: 184320
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x56328f51f000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 122 handle_osd_map epochs [122,123], i have 122, src has [1,123]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 71204864 unmapped: 15777792 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 123 ms_handle_reset con 0x56328f51f000 session 0x5632912e5e00
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:07.685414+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 123 heartbeat osd_stat(store_statfs(0x4fb639000/0x0/0x4ffc00000, data 0x152ccd4/0x15e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x56328f807800
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _renew_subs
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 123 handle_osd_map epochs [124,124], i have 123, src has [1,124]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 71213056 unmapped: 15769600 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 124 ms_handle_reset con 0x56328f807800 session 0x56328f2a14a0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:08.685585+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x56328ea89000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 71213056 unmapped: 15769600 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _renew_subs
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 124 handle_osd_map epochs [125,125], i have 124, src has [1,125]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 125 ms_handle_reset con 0x56328ea89000 session 0x56329129f4a0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:09.685804+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 71049216 unmapped: 15933440 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:10.685957+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 71049216 unmapped: 15933440 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:11.686285+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993088 data_alloc: 218103808 data_used: 188416
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fb62f000/0x0/0x4ffc00000, data 0x153098d/0x15ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 71049216 unmapped: 15933440 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:12.686487+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x56328e23cc00
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x56328f51e400
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 71057408 unmapped: 15925248 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:13.686732+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 125 handle_osd_map epochs [125,126], i have 125, src has [1,126]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 126 ms_handle_reset con 0x56328e23cc00 session 0x56328ec5c960
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 126 ms_handle_reset con 0x56328f51e400 session 0x56328eb285a0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 126 heartbeat osd_stat(store_statfs(0x4fb62d000/0x0/0x4ffc00000, data 0x153253b/0x15f0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 71090176 unmapped: 15892480 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x56328f51f000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:14.686872+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.163709641s of 10.455937386s, submitted: 71
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _renew_subs
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 126 handle_osd_map epochs [127,127], i have 126, src has [1,127]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fb630000/0x0/0x4ffc00000, data 0x1531f9f/0x15ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 127 ms_handle_reset con 0x56328f51f000 session 0x56328f8c83c0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 71114752 unmapped: 15867904 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x56328f807800
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:15.687057+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x563290946000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fb630000/0x0/0x4ffc00000, data 0x1531f9f/0x15ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 127 handle_osd_map epochs [127,128], i have 127, src has [1,128]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 128 ms_handle_reset con 0x56328f807800 session 0x56328e6cd4a0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 128 ms_handle_reset con 0x563290946000 session 0x5632912e5a40
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 71163904 unmapped: 15818752 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:16.687284+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x56328e23cc00
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x56328f51f000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006227 data_alloc: 218103808 data_used: 212992
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 128 handle_osd_map epochs [129,129], i have 128, src has [1,129]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x56328f807800
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 129 ms_handle_reset con 0x56328f807800 session 0x56328fad45a0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 129 ms_handle_reset con 0x56328e23cc00 session 0x56328ea812c0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 71163904 unmapped: 24215552 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:17.687448+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x56328f327000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _renew_subs
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 129 handle_osd_map epochs [130,130], i have 129, src has [1,130]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x5632912d1400
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 21643264 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 130 ms_handle_reset con 0x56328f327000 session 0x56328eb29a40
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:18.687619+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x5632912d0000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _renew_subs
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 130 handle_osd_map epochs [131,131], i have 130, src has [1,131]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 131 ms_handle_reset con 0x5632912d1400 session 0x56328f9b0780
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 131 ms_handle_reset con 0x56328f51f000 session 0x56328f2a0d20
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x563291393c00
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 73932800 unmapped: 21446656 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 131 ms_handle_reset con 0x563291393c00 session 0x56328f579a40
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 131 ms_handle_reset con 0x5632912d0000 session 0x56328ea7b4a0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x563291393800
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 131 ms_handle_reset con 0x563291393800 session 0x5632912e5a40
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:19.687786+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 131 heartbeat osd_stat(store_statfs(0x4f8e23000/0x0/0x4ffc00000, data 0x3d3a673/0x3dfa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x56328e23cc00
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _renew_subs
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 131 handle_osd_map epochs [132,132], i have 131, src has [1,132]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x56328f327000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 73973760 unmapped: 21405696 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:20.687954+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _renew_subs
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 132 handle_osd_map epochs [133,133], i have 132, src has [1,133]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 133 ms_handle_reset con 0x56328f327000 session 0x5632908f01e0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 20455424 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 133 ms_handle_reset con 0x56328e23cc00 session 0x5632913a8000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:21.688115+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x56328f327000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1023189 data_alloc: 218103808 data_used: 233472
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x5632912d0000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 133 handle_osd_map epochs [133,134], i have 133, src has [1,134]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _renew_subs
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 134 handle_osd_map epochs [134,134], i have 134, src has [1,134]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 134 ms_handle_reset con 0x56328f327000 session 0x56328ec650e0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 74989568 unmapped: 20389888 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:22.698765+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _renew_subs
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 20381696 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 135 ms_handle_reset con 0x5632912d0000 session 0x5632908f01e0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:23.698932+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 20381696 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:24.699088+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 135 handle_osd_map epochs [135,136], i have 135, src has [1,136]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.757702827s of 10.382859230s, submitted: 305
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 75005952 unmapped: 20373504 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:25.699293+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb618000/0x0/0x4ffc00000, data 0x1542b8e/0x1605000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x563291393800
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _renew_subs
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 75079680 unmapped: 20299776 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 137 ms_handle_reset con 0x563291393800 session 0x56328e72ba40
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:26.699568+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1043277 data_alloc: 218103808 data_used: 262144
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 75087872 unmapped: 20291584 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:27.699788+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 75087872 unmapped: 20291584 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:28.700030+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fb611000/0x0/0x4ffc00000, data 0x1544bad/0x160b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x563291393c00
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 75120640 unmapped: 20258816 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:29.700227+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _renew_subs
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 137 handle_osd_map epochs [138,138], i have 137, src has [1,138]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 138 ms_handle_reset con 0x563291393c00 session 0x563290a54d20
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb60e000/0x0/0x4ffc00000, data 0x1546796/0x160f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 75137024 unmapped: 20242432 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x56328f807800
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:30.700368+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _renew_subs
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 140 ms_handle_reset con 0x56328f807800 session 0x56328f5785a0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 75128832 unmapped: 20250624 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x56328f327000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 140 ms_handle_reset con 0x56328f327000 session 0x56328f9b0b40
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:31.700497+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x5632912d0000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x563291393800
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053869 data_alloc: 218103808 data_used: 290816
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 140 handle_osd_map epochs [140,141], i have 140, src has [1,141]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 75145216 unmapped: 20234240 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:32.700646+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fb609000/0x0/0x4ffc00000, data 0x1549e3a/0x1614000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _renew_subs
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 141 handle_osd_map epochs [142,142], i have 141, src has [1,142]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 142 ms_handle_reset con 0x563291393800 session 0x5632912e5a40
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 142 ms_handle_reset con 0x5632912d0000 session 0x56328f8c9c20
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x563291393c00
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 142 ms_handle_reset con 0x563291393c00 session 0x5632908e8d20
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 19136512 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:33.700796+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fb601000/0x0/0x4ffc00000, data 0x154d5c0/0x161a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 19095552 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:34.700987+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 19095552 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:35.701170+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 19095552 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:36.701368+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1062658 data_alloc: 218103808 data_used: 290816
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 19095552 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:37.701542+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fb601000/0x0/0x4ffc00000, data 0x154d5c0/0x161a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 19095552 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:38.701662+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 19095552 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:39.701790+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fb601000/0x0/0x4ffc00000, data 0x154d5c0/0x161a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 142 handle_osd_map epochs [143,143], i have 142, src has [1,143]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.699498177s of 15.214076996s, submitted: 135
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 19079168 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:40.701951+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fb600000/0x0/0x4ffc00000, data 0x154f05b/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 19079168 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:41.702125+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1064288 data_alloc: 218103808 data_used: 290816
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 19079168 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:42.702271+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x5632909ea400
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 19079168 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:43.702401+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 143 handle_osd_map epochs [143,144], i have 143, src has [1,144]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _renew_subs
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 143 handle_osd_map epochs [144,144], i have 144, src has [1,144]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fb600000/0x0/0x4ffc00000, data 0x154f05b/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76308480 unmapped: 19070976 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:44.702564+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _renew_subs
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 144 handle_osd_map epochs [145,145], i have 144, src has [1,145]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 145 ms_handle_reset con 0x5632909ea400 session 0x56328f8a5a40
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x56328f327000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76316672 unmapped: 19062784 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x5632912d0000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:45.702721+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76316672 unmapped: 19062784 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:46.702852+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074589 data_alloc: 218103808 data_used: 294912
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fb5f8000/0x0/0x4ffc00000, data 0x15527bc/0x1624000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76316672 unmapped: 19062784 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:47.702968+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fb5f8000/0x0/0x4ffc00000, data 0x15527bc/0x1624000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76316672 unmapped: 19062784 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:48.703156+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x563291393800
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 19152896 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:49.703317+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 145 handle_osd_map epochs [145,146], i have 145, src has [1,146]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x563291393c00
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 146 ms_handle_reset con 0x563291393800 session 0x5632908a2f00
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 19144704 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:50.703490+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 146 handle_osd_map epochs [146,147], i have 146, src has [1,147]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.668182373s of 10.810811996s, submitted: 66
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 147 ms_handle_reset con 0x563291393c00 session 0x5632908c8f00
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 19111936 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:51.703604+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1086092 data_alloc: 218103808 data_used: 303104
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x5632912ae000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 147 ms_handle_reset con 0x5632912ae000 session 0x56328ec57e00
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77373440 unmapped: 18006016 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:52.703752+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x5632912ae400
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 147 heartbeat osd_stat(store_statfs(0x4fb5f1000/0x0/0x4ffc00000, data 0x1555f3a/0x162d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 147 ms_handle_reset con 0x5632912ae400 session 0x5632913a8d20
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x5632912ae800
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76316672 unmapped: 19062784 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:53.703897+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 147 handle_osd_map epochs [147,148], i have 147, src has [1,148]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 148 ms_handle_reset con 0x5632912ae800 session 0x5632913a83c0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76333056 unmapped: 19046400 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:54.704033+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x5632912ae000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76333056 unmapped: 19046400 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:55.704216+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fb5ef000/0x0/0x4ffc00000, data 0x1557aed/0x162f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _renew_subs
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 148 handle_osd_map epochs [149,149], i have 148, src has [1,149]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 149 ms_handle_reset con 0x5632912ae000 session 0x563290a545a0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 19038208 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:56.704419+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x5632912ae400
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1091315 data_alloc: 218103808 data_used: 323584
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 19038208 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:57.704694+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 149 ms_handle_reset con 0x5632912ae400 session 0x56328f2a0d20
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 19038208 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:58.704861+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 149 handle_osd_map epochs [150,150], i have 149, src has [1,150]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 19038208 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:59.705008+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 150 handle_osd_map epochs [150,151], i have 150, src has [1,151]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fb5e8000/0x0/0x4ffc00000, data 0x155b270/0x1634000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76349440 unmapped: 19030016 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:00.705476+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 19005440 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:01.705589+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x563291393800
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 151 ms_handle_reset con 0x563291393800 session 0x56329124b4a0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x563291393c00
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 151 ms_handle_reset con 0x563291393c00 session 0x56329124ab40
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x5632912afc00
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 151 ms_handle_reset con 0x5632912afc00 session 0x56328f9b10e0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097759 data_alloc: 218103808 data_used: 323584
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x5632912afc00
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 151 ms_handle_reset con 0x5632912afc00 session 0x56328ea7b4a0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x5632912ae000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x5632912ae400
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.902799606s of 11.212671280s, submitted: 101
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 151 ms_handle_reset con 0x5632912ae400 session 0x56328eb294a0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 151 ms_handle_reset con 0x5632912ae000 session 0x56328ec5cf00
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x5632912d0400
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 151 ms_handle_reset con 0x5632912d0400 session 0x56328ea7da40
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x563291393400
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 151 ms_handle_reset con 0x563291393400 session 0x5632913ac960
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x5632912d0400
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 151 ms_handle_reset con 0x5632912d0400 session 0x5632913ad4a0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x5632912d1400
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 151 ms_handle_reset con 0x5632912d1400 session 0x5632913ac3c0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76390400 unmapped: 18989056 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:02.705773+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:03.705918+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76390400 unmapped: 18989056 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x5632912d0c00
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 151 ms_handle_reset con 0x5632912d0c00 session 0x5632913ac1e0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:04.706056+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76423168 unmapped: 18956288 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x563291393400
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x563291393800
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fb5e5000/0x0/0x4ffc00000, data 0x155cd5c/0x1639000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:05.706313+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 18948096 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:06.706548+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 18948096 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1101583 data_alloc: 218103808 data_used: 335872
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:07.706656+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 18948096 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x563291393c00
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 151 ms_handle_reset con 0x563291393c00 session 0x5632912e5a40
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x56329054ec00
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 151 handle_osd_map epochs [151,152], i have 151, src has [1,152]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 152 ms_handle_reset con 0x56329054ec00 session 0x56328f5785a0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x5632912d0400
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x5632912d0c00
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 152 ms_handle_reset con 0x5632912d0c00 session 0x56328f8c83c0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 152 ms_handle_reset con 0x5632912d0400 session 0x5632908f1680
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:08.706757+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x5632912d1400
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 18948096 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 152 ms_handle_reset con 0x5632912d1400 session 0x56328ec650e0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x563291393c00
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _renew_subs
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 152 handle_osd_map epochs [153,153], i have 152, src has [1,153]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 153 ms_handle_reset con 0x563291393c00 session 0x5632908f01e0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:09.706892+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76439552 unmapped: 18939904 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 153 handle_osd_map epochs [153,154], i have 153, src has [1,154]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 154 heartbeat osd_stat(store_statfs(0x4fb5dc000/0x0/0x4ffc00000, data 0x15608aa/0x1640000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:10.707029+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76472320 unmapped: 18907136 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:11.707160+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76472320 unmapped: 18907136 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115013 data_alloc: 218103808 data_used: 348160
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 154 heartbeat osd_stat(store_statfs(0x4fb5d8000/0x0/0x4ffc00000, data 0x1562443/0x1643000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x56329054e000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:12.707320+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 18890752 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.489326477s of 10.598018646s, submitted: 33
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 154 ms_handle_reset con 0x56329054e000 session 0x5632913ad4a0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x56329054e000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:13.707560+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 18866176 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _renew_subs
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 154 handle_osd_map epochs [155,155], i have 154, src has [1,155]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 155 ms_handle_reset con 0x56329054e000 session 0x56328ec5cf00
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:14.707685+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 18857984 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 155 ms_handle_reset con 0x563291393400 session 0x56328f8c9e00
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 155 ms_handle_reset con 0x563291393800 session 0x5632906bfc20
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x5632912d0400
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _renew_subs
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 155 handle_osd_map epochs [156,156], i have 155, src has [1,156]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:15.707924+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 18857984 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _renew_subs
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 156 handle_osd_map epochs [157,157], i have 156, src has [1,157]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 157 heartbeat osd_stat(store_statfs(0x4fb5d6000/0x0/0x4ffc00000, data 0x1565691/0x1646000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [1])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 157 ms_handle_reset con 0x5632912d0400 session 0x56328f2a0d20
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:16.708454+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 18825216 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1118681 data_alloc: 218103808 data_used: 344064
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:17.708610+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 18825216 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:18.709321+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 18825216 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:19.709863+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 18825216 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 157 handle_osd_map epochs [157,158], i have 157, src has [1,158]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 158 ms_handle_reset con 0x56328f327000 session 0x56328ec5cb40
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 158 ms_handle_reset con 0x5632912d0000 session 0x56328f9b0780
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 158 heartbeat osd_stat(store_statfs(0x4fb5d5000/0x0/0x4ffc00000, data 0x1567220/0x1648000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x56328f327000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:20.710008+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 158 ms_handle_reset con 0x56328f327000 session 0x56328f8c7a40
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76587008 unmapped: 18792448 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:21.710520+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76587008 unmapped: 18792448 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1120799 data_alloc: 218103808 data_used: 344064
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x56329054e000
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 158 ms_handle_reset con 0x56329054e000 session 0x56329129e1e0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x5632912d0400
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:22.710687+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 18767872 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:23.711298+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _renew_subs
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 158 handle_osd_map epochs [159,159], i have 158, src has [1,159]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 18767872 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.580225945s of 10.899756432s, submitted: 155
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 159 ms_handle_reset con 0x5632912d0400 session 0x56329129e3c0
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:24.711611+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 18759680 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:25.711865+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 159 heartbeat osd_stat(store_statfs(0x4fb5d0000/0x0/0x4ffc00000, data 0x156a831/0x164d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 18759680 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:26.712066+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 18759680 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1122643 data_alloc: 218103808 data_used: 344064
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:27.712257+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 18759680 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:28.712613+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 18759680 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:29.712905+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 18759680 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:30.713060+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 18759680 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:31.713302+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 159 heartbeat osd_stat(store_statfs(0x4fb5d0000/0x0/0x4ffc00000, data 0x156a831/0x164d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 159 handle_osd_map epochs [160,160], i have 159, src has [1,160]
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125441 data_alloc: 218103808 data_used: 344064
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:32.713512+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:33.713703+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:34.713982+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:35.714262+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:36.714500+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125441 data_alloc: 218103808 data_used: 344064
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:37.714729+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:38.714923+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:39.715116+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:40.715261+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:41.715377+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125441 data_alloc: 218103808 data_used: 344064
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:42.715517+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:43.715651+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:44.715797+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:45.715996+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:46.716215+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125441 data_alloc: 218103808 data_used: 344064
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:47.716386+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:48.716534+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:49.716649+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:50.716804+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:51.716965+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125441 data_alloc: 218103808 data_used: 344064
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:52.717118+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:53.717263+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:54.717412+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:55.717605+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:56.717787+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125441 data_alloc: 218103808 data_used: 344064
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:57.717901+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:58.718064+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:59.718204+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:00.718403+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:01.718563+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125441 data_alloc: 218103808 data_used: 344064
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:02.718702+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:03.718844+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:04.719022+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:05.719217+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:06.719392+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125441 data_alloc: 218103808 data_used: 344064
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:07.719546+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:08.719768+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:09.719949+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:10.720123+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:11.720287+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125441 data_alloc: 218103808 data_used: 344064
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:12.720465+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:13.720612+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:14.720800+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:15.720997+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:16.721164+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125441 data_alloc: 218103808 data_used: 344064
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:17.721381+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:18.721559+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:19.721733+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:20.721960+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:21.722436+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125441 data_alloc: 218103808 data_used: 344064
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:22.722682+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:23.722996+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:24.723189+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:25.723462+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:26.723674+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77684736 unmapped: 17694720 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125441 data_alloc: 218103808 data_used: 344064
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:27.723851+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77684736 unmapped: 17694720 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:28.724028+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77684736 unmapped: 17694720 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:29.724249+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77684736 unmapped: 17694720 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:30.724422+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77684736 unmapped: 17694720 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:31.724563+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77684736 unmapped: 17694720 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:07 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:07 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125441 data_alloc: 218103808 data_used: 344064
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:32.724735+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77684736 unmapped: 17694720 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:33.724876+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77684736 unmapped: 17694720 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:34.725015+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: do_command 'config diff' '{prefix=config diff}'
Oct 11 04:55:07 compute-0 ceph-osd[89565]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 11 04:55:07 compute-0 ceph-osd[89565]: do_command 'config show' '{prefix=config show}'
Oct 11 04:55:07 compute-0 ceph-osd[89565]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77930496 unmapped: 17448960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: do_command 'counter dump' '{prefix=counter dump}'
Oct 11 04:55:07 compute-0 ceph-osd[89565]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 11 04:55:07 compute-0 ceph-osd[89565]: do_command 'counter schema' '{prefix=counter schema}'
Oct 11 04:55:07 compute-0 ceph-osd[89565]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:35.725160+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78438400 unmapped: 16941056 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 04:55:07 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:36.725358+0000)
Oct 11 04:55:07 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 04:55:07 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78479360 unmapped: 16900096 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:07 compute-0 ceph-osd[89565]: do_command 'log dump' '{prefix=log dump}'
Oct 11 04:55:07 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Oct 11 04:55:07 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3711615258' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 11 04:55:07 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.14807 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:55:07 compute-0 ceph-mon[74243]: from='client.14799 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:55:07 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/3709712353' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 11 04:55:07 compute-0 ceph-mon[74243]: from='client.14803 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:55:07 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/3711615258' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 11 04:55:07 compute-0 ceph-mon[74243]: from='client.14807 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:55:08 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Oct 11 04:55:08 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/732662433' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 11 04:55:08 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.14811 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:55:08 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.14815 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:55:08 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct 11 04:55:08 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2112180147' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 11 04:55:08 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1027: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:55:08 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.14817 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 04:55:08 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/732662433' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 11 04:55:08 compute-0 ceph-mon[74243]: from='client.14811 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:55:08 compute-0 ceph-mon[74243]: from='client.14815 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:55:08 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/2112180147' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 11 04:55:08 compute-0 ceph-mon[74243]: pgmap v1027: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:55:08 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Oct 11 04:55:08 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1651656106' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 11 04:55:09 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.14821 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 04:55:09 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Oct 11 04:55:09 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3630076753' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 11 04:55:09 compute-0 ceph-mon[74243]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 04:55:09 compute-0 ceph-mon[74243]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Cumulative writes: 4763 writes, 21K keys, 4763 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                           Cumulative WAL: 4763 writes, 4763 syncs, 1.00 writes per sync, written: 0.03 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1468 writes, 6644 keys, 1468 commit groups, 1.0 writes per commit group, ingest: 9.31 MB, 0.02 MB/s
                                           Interval WAL: 1468 writes, 1468 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    124.8      0.19              0.09        12    0.015       0      0       0.0       0.0
                                             L6      1/0    7.26 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.3    181.7    150.3      0.51              0.29        11    0.046     47K   5691       0.0       0.0
                                            Sum      1/0    7.26 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.3    133.0    143.5      0.69              0.39        23    0.030     47K   5691       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.8    140.1    141.3      0.30              0.19        10    0.030     23K   2490       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    181.7    150.3      0.51              0.29        11    0.046     47K   5691       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    127.5      0.18              0.09        11    0.016       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     11.5      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.023, interval 0.007
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.10 GB write, 0.06 MB/s write, 0.09 GB read, 0.05 MB/s read, 0.7 seconds
                                           Interval compaction: 0.04 GB write, 0.07 MB/s write, 0.04 GB read, 0.07 MB/s read, 0.3 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563d484a31f0#2 capacity: 304.00 MB usage: 9.02 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.000112 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(692,8.63 MB,2.83899%) FilterBlock(24,142.30 KB,0.0457111%) IndexBlock(24,261.38 KB,0.0839635%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 11 04:55:09 compute-0 ceph-mon[74243]: from='client.14817 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 04:55:09 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/1651656106' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 11 04:55:09 compute-0 ceph-mon[74243]: from='client.14821 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 04:55:09 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/3630076753' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 11 04:55:10 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.14829 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 04:55:10 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mgr-compute-0-phooxi[74538]: 2025-10-11T04:55:10.059+0000 7fe5f0a93640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 11 04:55:10 compute-0 ceph-mgr[74542]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 11 04:55:10 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:55:10 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0) v1
Oct 11 04:55:10 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2179835064' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 11 04:55:10 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Oct 11 04:55:10 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3543522671' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 11 04:55:10 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Oct 11 04:55:10 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2943307276' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 11 04:55:10 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1028: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:55:11 compute-0 ceph-mon[74243]: from='client.14829 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 04:55:11 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/2179835064' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 11 04:55:11 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/3543522671' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 11 04:55:11 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/2943307276' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 11 04:55:11 compute-0 ceph-mon[74243]: pgmap v1028: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:55:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:55:11.020 161813 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:55:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:55:11.020 161813 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:55:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:55:11.021 161813 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:55:11 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Oct 11 04:55:11 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2545138310' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 11 04:55:11 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Oct 11 04:55:11 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/705627932' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 11 04:55:11 compute-0 crontab[272231]: (root) LIST (root)
Oct 11 04:55:11 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Oct 11 04:55:11 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3270935877' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 11 04:55:11 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Oct 11 04:55:11 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2249552361' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 11 04:55:11 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Oct 11 04:55:11 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/745783834' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 11 04:55:11 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Oct 11 04:55:11 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4236166018' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 61 handle_osd_map epochs [62,62], i have 61, src has [1,62]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 62 pg[6.5( v 38'39 (0'0,38'39] local-lis/les=53/54 n=2 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=-1 lpr=61 pi=[53,61)/1 crt=38'39 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.033574 6 0.000152
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 62 pg[6.5( v 38'39 (0'0,38'39] local-lis/les=53/54 n=2 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=-1 lpr=61 pi=[53,61)/1 crt=38'39 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 62 pg[6.5( v 38'39 (0'0,38'39] local-lis/les=53/54 n=2 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=-1 lpr=61 pi=[53,61)/1 crt=38'39 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 62 pg[6.d( v 38'39 (0'0,38'39] local-lis/les=53/54 n=1 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=-1 lpr=61 pi=[53,61)/1 crt=38'39 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.034459 6 0.000336
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 62 pg[6.d( v 38'39 (0'0,38'39] local-lis/les=53/54 n=1 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=-1 lpr=61 pi=[53,61)/1 crt=38'39 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 62 pg[6.d( v 38'39 (0'0,38'39] local-lis/les=53/54 n=1 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=-1 lpr=61 pi=[53,61)/1 crt=38'39 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 72720384 unmapped: 1671168 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 62 pg[6.5( v 38'39 (0'0,38'39] local-lis/les=53/54 n=2 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=-1 lpr=61 pi=[53,61)/1 luod=0'0 crt=38'39 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.071156 3 0.000150
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 62 pg[6.5( v 38'39 (0'0,38'39] local-lis/les=53/54 n=2 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=-1 lpr=61 pi=[53,61)/1 luod=0'0 crt=38'39 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.071221 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 62 pg[6.5( v 38'39 (0'0,38'39] local-lis/les=53/54 n=2 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=-1 lpr=61 pi=[53,61)/1 luod=0'0 crt=38'39 mlcod 0'0 active mbc={}] enter Started/ToDelete
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 62 pg[6.5( v 38'39 (0'0,38'39] local-lis/les=53/54 n=2 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=-1 lpr=61 pi=[53,61)/1 luod=0'0 crt=38'39 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReseved
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 62 pg[6.5( v 38'39 (0'0,38'39] local-lis/les=53/54 n=2 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=-1 lpr=61 pi=[53,61)/1 luod=0'0 crt=38'39 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000238 1 0.000152
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 62 pg[6.5( v 38'39 (0'0,38'39] local-lis/les=53/54 n=2 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=-1 lpr=61 pi=[53,61)/1 luod=0'0 crt=38'39 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 62 pg[6.5( v 38'39 (0'0,38'39] lb MIN local-lis/les=53/54 n=2 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=-1 lpr=61 DELETING pi=[53,61)/1 luod=0'0 crt=38'39 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.066169 2 0.000358
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 62 pg[6.5( v 38'39 (0'0,38'39] lb MIN local-lis/les=53/54 n=2 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=-1 lpr=61 pi=[53,61)/1 luod=0'0 crt=38'39 mlcod 0'0 active mbc={}] exit Started/ToDelete 0.066531 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 62 pg[6.5( v 38'39 (0'0,38'39] lb MIN local-lis/les=53/54 n=2 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=-1 lpr=61 pi=[53,61)/1 luod=0'0 crt=38'39 mlcod 0'0 active mbc={}] exit Started 1.171433 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 62 pg[6.d( v 38'39 (0'0,38'39] local-lis/les=53/54 n=1 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=-1 lpr=61 pi=[53,61)/1 luod=0'0 crt=38'39 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.208644 3 0.000117
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 62 pg[6.d( v 38'39 (0'0,38'39] local-lis/les=53/54 n=1 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=-1 lpr=61 pi=[53,61)/1 luod=0'0 crt=38'39 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.208710 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 62 pg[6.d( v 38'39 (0'0,38'39] local-lis/les=53/54 n=1 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=-1 lpr=61 pi=[53,61)/1 luod=0'0 crt=38'39 mlcod 0'0 active mbc={}] enter Started/ToDelete
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 62 pg[6.d( v 38'39 (0'0,38'39] local-lis/les=53/54 n=1 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=-1 lpr=61 pi=[53,61)/1 luod=0'0 crt=38'39 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReseved
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 62 pg[6.d( v 38'39 (0'0,38'39] local-lis/les=53/54 n=1 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=-1 lpr=61 pi=[53,61)/1 luod=0'0 crt=38'39 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000163 1 0.000120
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 62 pg[6.d( v 38'39 (0'0,38'39] local-lis/les=53/54 n=1 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=-1 lpr=61 pi=[53,61)/1 luod=0'0 crt=38'39 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 62 pg[6.d( v 38'39 (0'0,38'39] lb MIN local-lis/les=53/54 n=1 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=-1 lpr=61 DELETING pi=[53,61)/1 luod=0'0 crt=38'39 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.018837 2 0.000300
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 62 pg[6.d( v 38'39 (0'0,38'39] lb MIN local-lis/les=53/54 n=1 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=-1 lpr=61 pi=[53,61)/1 luod=0'0 crt=38'39 mlcod 0'0 active mbc={}] exit Started/ToDelete 0.019094 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 62 pg[6.d( v 38'39 (0'0,38'39] lb MIN local-lis/les=53/54 n=1 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=-1 lpr=61 pi=[53,61)/1 luod=0'0 crt=38'39 mlcod 0'0 active mbc={}] exit Started 1.262409 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:55.931065+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 25 sent 23 num 2 unsent 2 sending 2
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:28:25.000522+0000 osd.1 (osd.1) 24 : cluster [DBG] 3.10 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:28:25.014496+0000 osd.1 (osd.1) 25 : cluster [DBG] 3.10 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 25) v1
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:28:25.000522+0000 osd.1 (osd.1) 24 : cluster [DBG] 3.10 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:28:25.014496+0000 osd.1 (osd.1) 25 : cluster [DBG] 3.10 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 72777728 unmapped: 1613824 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 642154 data_alloc: 218103808 data_used: 106496
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:56.931303+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 72810496 unmapped: 1581056 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:57.931450+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 72810496 unmapped: 1581056 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fcaf0000/0x0/0x4ffc00000, data 0xc449e/0x12c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 62 handle_osd_map epochs [63,64], i have 62, src has [1,64]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 62 handle_osd_map epochs [63,64], i have 64, src has [1,64]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 63 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=41'581 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 22.558654 39 0.000363
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 63 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=41'581 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 22.568833 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 63 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=41'581 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 22.569016 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 63 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=41'581 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 22.569083 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 63 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=41'581 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 63 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=63 pruub=9.441020012s) [2] r=-1 lpr=63 pi=[49,63)/1 crt=41'581 lcod 0'0 mlcod 0'0 active pruub 101.968696594s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 63 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=41'581 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 22.549119 39 0.000137
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 63 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=41'581 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 22.566813 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 63 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=41'581 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 22.566911 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 63 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=41'581 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 22.566947 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 63 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=41'581 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.e] failed. State was: unregistering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 63 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=63 pruub=9.450304985s) [2] r=-1 lpr=63 pi=[49,63)/1 crt=41'581 lcod 0'0 mlcod 0'0 active pruub 101.978370667s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.e] failed. State was: unregistering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 64 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=63 pruub=9.450239182s) [2] r=-1 lpr=63 pi=[49,63)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.978370667s@ mbc={}] exit Reset 0.000104 2 0.000180
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 64 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=63 pruub=9.450239182s) [2] r=-1 lpr=63 pi=[49,63)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.978370667s@ mbc={}] enter Started
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 64 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=63 pruub=9.450239182s) [2] r=-1 lpr=63 pi=[49,63)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.978370667s@ mbc={}] enter Start
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 64 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=63 pruub=9.450239182s) [2] r=-1 lpr=63 pi=[49,63)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.978370667s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 64 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=63 pruub=9.450239182s) [2] r=-1 lpr=63 pi=[49,63)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.978370667s@ mbc={}] exit Start 0.000009 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 64 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=63 pruub=9.450239182s) [2] r=-1 lpr=63 pi=[49,63)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.978370667s@ mbc={}] enter Started/Stray
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 64 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=63 pruub=9.440475464s) [2] r=-1 lpr=63 pi=[49,63)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.968696594s@ mbc={}] exit Reset 0.000631 2 0.000692
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 63 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=41'581 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 22.549224 39 0.000116
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 63 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=41'581 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 22.566251 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 63 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=41'581 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 22.566357 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 64 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=63 pruub=9.440475464s) [2] r=-1 lpr=63 pi=[49,63)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.968696594s@ mbc={}] enter Started
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 64 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=63 pruub=9.440475464s) [2] r=-1 lpr=63 pi=[49,63)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.968696594s@ mbc={}] enter Start
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 63 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=41'581 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 22.566388 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 64 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=63 pruub=9.440475464s) [2] r=-1 lpr=63 pi=[49,63)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.968696594s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 63 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=41'581 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.6] failed. State was: unregistering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 63 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=63 pruub=9.450131416s) [2] r=-1 lpr=63 pi=[49,63)/1 crt=41'581 lcod 0'0 mlcod 0'0 active pruub 101.978530884s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 64 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=63 pruub=9.440475464s) [2] r=-1 lpr=63 pi=[49,63)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.968696594s@ mbc={}] exit Start 0.000077 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 64 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=63 pruub=9.440475464s) [2] r=-1 lpr=63 pi=[49,63)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.968696594s@ mbc={}] enter Started/Stray
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 63 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=41'581 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 22.549355 39 0.000155
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 63 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=41'581 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 22.565289 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 63 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=41'581 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 22.565374 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 63 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=41'581 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 22.565406 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 63 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=41'581 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 63 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=63 pruub=9.450095177s) [2] r=-1 lpr=63 pi=[49,63)/1 crt=41'581 lcod 0'0 mlcod 0'0 active pruub 101.978752136s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 64 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=63 pruub=9.450055122s) [2] r=-1 lpr=63 pi=[49,63)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.978752136s@ mbc={}] exit Reset 0.000076 2 0.000110
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 64 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=63 pruub=9.450055122s) [2] r=-1 lpr=63 pi=[49,63)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.978752136s@ mbc={}] enter Started
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 64 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=63 pruub=9.450055122s) [2] r=-1 lpr=63 pi=[49,63)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.978752136s@ mbc={}] enter Start
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 64 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=63 pruub=9.450055122s) [2] r=-1 lpr=63 pi=[49,63)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.978752136s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 64 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=63 pruub=9.450055122s) [2] r=-1 lpr=63 pi=[49,63)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.978752136s@ mbc={}] exit Start 0.000007 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 64 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=63 pruub=9.450055122s) [2] r=-1 lpr=63 pi=[49,63)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.978752136s@ mbc={}] enter Started/Stray
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.6] failed. State was: unregistering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 64 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=63 pruub=9.449581146s) [2] r=-1 lpr=63 pi=[49,63)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.978530884s@ mbc={}] exit Reset 0.000593 2 0.000704
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 64 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=63 pruub=9.449581146s) [2] r=-1 lpr=63 pi=[49,63)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.978530884s@ mbc={}] enter Started
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 64 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=63 pruub=9.449581146s) [2] r=-1 lpr=63 pi=[49,63)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.978530884s@ mbc={}] enter Start
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 64 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=63 pruub=9.449581146s) [2] r=-1 lpr=63 pi=[49,63)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.978530884s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 64 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=63 pruub=9.449581146s) [2] r=-1 lpr=63 pi=[49,63)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.978530884s@ mbc={}] exit Start 0.000013 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 64 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=63 pruub=9.449581146s) [2] r=-1 lpr=63 pi=[49,63)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.978530884s@ mbc={}] enter Started/Stray
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.e] failed. State was: unregistering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.6] failed. State was: unregistering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:58.931654+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 1458176 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 64 handle_osd_map epochs [65,65], i have 64, src has [1,65]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 65 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=63) [2] r=-1 lpr=63 pi=[49,63)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.905624 3 0.000037
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 65 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=63) [2] r=-1 lpr=63 pi=[49,63)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.905357 3 0.000099
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 65 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=63) [2] r=-1 lpr=63 pi=[49,63)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.905679 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 65 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=63) [2] r=-1 lpr=63 pi=[49,63)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.905411 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 65 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=63) [2] r=-1 lpr=63 pi=[49,63)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.905872 3 0.000363
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 65 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=63) [2] r=-1 lpr=63 pi=[49,63)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.906124 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 65 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=63) [2] r=-1 lpr=63 pi=[49,63)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 65 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=63) [2] r=-1 lpr=63 pi=[49,63)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 65 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=63) [2] r=-1 lpr=63 pi=[49,63)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 65 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 65 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 65 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000107 1 0.000126
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 65 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 65 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 65 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 65 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 65 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000008 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 65 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 65 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 65 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000138 1 0.000187
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 65 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 65 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 65 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 65 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000166 1 0.000222
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 65 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 65 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000011 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 65 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 65 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 65 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 65 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000033 1 0.000047
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 65 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 65 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 65 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 65 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 65 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000017 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 65 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 65 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000029 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 65 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 65 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 65 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000040 1 0.000058
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 65 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 65 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 65 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 65 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 65 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000033 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 65 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 65 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 65 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 65 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=63) [2] r=-1 lpr=63 pi=[49,63)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.906608 3 0.000065
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 65 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=63) [2] r=-1 lpr=63 pi=[49,63)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.906640 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 65 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=63) [2] r=-1 lpr=63 pi=[49,63)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 65 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 65 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000043 1 0.000221
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 65 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 65 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 65 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 65 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000008 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 65 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 65 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 65 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 65 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000031 1 0.000044
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 65 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 65 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000285 1 0.000094
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 65 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000023 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 65 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 65 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 65 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 65 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 65 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000175 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 65 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 65 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000011 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 65 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:59.931874+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 27 sent 25 num 2 unsent 2 sending 2
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:28:29.068606+0000 osd.1 (osd.1) 26 : cluster [DBG] 3.13 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:28:29.082942+0000 osd.1 (osd.1) 27 : cluster [DBG] 3.13 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 1466368 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 65 handle_osd_map epochs [65,66], i have 65, src has [1,66]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 65 handle_osd_map epochs [66,66], i have 66, src has [1,66]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 27) v1
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:28:29.068606+0000 osd.1 (osd.1) 26 : cluster [DBG] 3.13 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:28:29.082942+0000 osd.1 (osd.1) 27 : cluster [DBG] 3.13 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 66 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.006795 4 0.000258
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 66 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.007471 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 66 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 66 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.007323 4 0.000087
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 66 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.007494 4 0.000074
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 66 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.007105 4 0.000059
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 66 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.007319 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 66 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 66 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.007568 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 66 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 66 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.007608 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 66 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 66 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/Activating
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 66 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/Activating
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 66 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/Activating
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 66 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/Activating
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 66 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=41'581 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 24.465019 47 0.000425
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 66 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=41'581 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 24.482265 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 66 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=41'581 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 24.482388 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 66 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=41'581 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 24.482462 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 66 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=41'581 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.8] failed. State was: unregistering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 66 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=66 pruub=15.534359932s) [2] r=-1 lpr=66 pi=[49,66)/1 crt=41'581 lcod 0'0 mlcod 0'0 active pruub 109.978454590s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.8] failed. State was: unregistering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 66 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=66 pruub=15.534199715s) [2] r=-1 lpr=66 pi=[49,66)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 109.978454590s@ mbc={}] exit Reset 0.000221 1 0.000353
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 66 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=66 pruub=15.534199715s) [2] r=-1 lpr=66 pi=[49,66)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 109.978454590s@ mbc={}] enter Started
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 66 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=66 pruub=15.534199715s) [2] r=-1 lpr=66 pi=[49,66)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 109.978454590s@ mbc={}] enter Start
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 66 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=66 pruub=15.534199715s) [2] r=-1 lpr=66 pi=[49,66)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 109.978454590s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 66 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=66 pruub=15.534199715s) [2] r=-1 lpr=66 pi=[49,66)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 109.978454590s@ mbc={}] exit Start 0.000032 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 66 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=66 pruub=15.534199715s) [2] r=-1 lpr=66 pi=[49,66)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 109.978454590s@ mbc={}] enter Started/Stray
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 66 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=41'581 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 24.465163 47 0.000166
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 66 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=41'581 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 24.481260 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 66 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=41'581 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 24.481350 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 66 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=41'581 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 24.481396 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 66 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=41'581 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.18] failed. State was: unregistering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 66 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=66 pruub=15.534417152s) [2] r=-1 lpr=66 pi=[49,66)/1 crt=41'581 lcod 0'0 mlcod 0'0 active pruub 109.978889465s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.18] failed. State was: unregistering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 66 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=66 pruub=15.534354210s) [2] r=-1 lpr=66 pi=[49,66)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 109.978889465s@ mbc={}] exit Reset 0.000113 1 0.000172
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 66 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=66 pruub=15.534354210s) [2] r=-1 lpr=66 pi=[49,66)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 109.978889465s@ mbc={}] enter Started
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 66 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=66 pruub=15.534354210s) [2] r=-1 lpr=66 pi=[49,66)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 109.978889465s@ mbc={}] enter Start
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 66 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=66 pruub=15.534354210s) [2] r=-1 lpr=66 pi=[49,66)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 109.978889465s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 66 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=66 pruub=15.534354210s) [2] r=-1 lpr=66 pi=[49,66)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 109.978889465s@ mbc={}] exit Start 0.000031 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 66 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=66 pruub=15.534354210s) [2] r=-1 lpr=66 pi=[49,66)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 109.978889465s@ mbc={}] enter Started/Stray
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.8] failed. State was: unregistering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.18] failed. State was: unregistering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 66 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 66 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/Activating 0.014430 5 0.000685
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 66 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 66 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000368 1 0.000093
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 66 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 66 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000504 1 0.000056
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 66 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/Recovering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 66 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 66 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 66 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 66 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/Activating 0.025631 5 0.000845
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 66 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 66 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/Activating 0.025759 5 0.000879
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 66 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/Activating 0.025392 5 0.001413
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 66 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 66 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 66 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.049916 1 0.000069
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 66 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 66 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.060632 2 0.000067
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 66 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 66 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.059221 1 0.000060
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 66 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/Recovering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 66 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.045396 2 0.000134
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 66 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 66 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.154658 1 0.000035
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 66 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 66 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.052129 1 0.000048
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 66 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/Recovering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 66 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.051575 2 0.000093
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 66 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 66 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.258422 1 0.000037
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 66 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 66 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.050623 1 0.000368
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 66 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/Recovering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 66 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.051348 2 0.000113
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 66 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:00.932086+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 1466368 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 654662 data_alloc: 218103808 data_used: 106496
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 66 handle_osd_map epochs [66,67], i have 66, src has [1,67]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 66 handle_osd_map epochs [67,67], i have 67, src has [1,67]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 67 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.832853 1 0.000202
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 67 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.013872 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 67 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.021617 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 67 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.021656 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 67 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 67 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.012018204s) [2] async=[2] r=-1 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 active pruub 110.468261719s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 67 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.011908531s) [2] r=-1 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.468261719s@ mbc={}] exit Reset 0.000163 1 0.000411
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 67 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.011908531s) [2] r=-1 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.468261719s@ mbc={}] enter Started
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 67 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.011908531s) [2] r=-1 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.468261719s@ mbc={}] enter Start
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 67 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.011908531s) [2] r=-1 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.468261719s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 67 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.011908531s) [2] r=-1 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.468261719s@ mbc={}] exit Start 0.000117 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 67 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.011908531s) [2] r=-1 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.468261719s@ mbc={}] enter Started/Stray
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 67 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.730288 1 0.000240
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 67 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.015105 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 67 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.022453 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 67 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.022490 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 67 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.e] failed. State was: unregistering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 67 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.010913849s) [2] async=[2] r=-1 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 active pruub 110.468299866s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.e] failed. State was: unregistering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 67 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.010800362s) [2] r=-1 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.468299866s@ mbc={}] exit Reset 0.000160 1 0.000261
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 67 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.010800362s) [2] r=-1 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.468299866s@ mbc={}] enter Started
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 67 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.010800362s) [2] r=-1 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.468299866s@ mbc={}] enter Start
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 67 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.010800362s) [2] r=-1 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.468299866s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 67 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.010800362s) [2] r=-1 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.468299866s@ mbc={}] exit Start 0.000029 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 67 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.010800362s) [2] r=-1 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.468299866s@ mbc={}] enter Started/Stray
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 67 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=66) [2] r=-1 lpr=66 pi=[49,66)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.013395 3 0.000126
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 67 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.939433 1 0.000227
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 67 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=66) [2] r=-1 lpr=66 pi=[49,66)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.013591 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 67 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=66) [2] r=-1 lpr=66 pi=[49,66)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 67 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.015676 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 67 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.023445 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 67 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.023496 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 67 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.6] failed. State was: unregistering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 67 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=66) [2] r=-1 lpr=66 pi=[49,66)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.013524 3 0.000107
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 67 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=14.998461723s) [2] async=[2] r=-1 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 active pruub 110.456581116s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 67 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=66) [2] r=-1 lpr=66 pi=[49,66)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.013605 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 67 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=66) [2] r=-1 lpr=66 pi=[49,66)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.6] failed. State was: unregistering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 67 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=67) [2]/[1] r=0 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 67 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=14.998378754s) [2] r=-1 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.456581116s@ mbc={}] exit Reset 0.000136 1 0.000345
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 67 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=67) [2]/[1] r=0 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000059 1 0.000094
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 67 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=14.998378754s) [2] r=-1 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.456581116s@ mbc={}] enter Started
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 67 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=67) [2]/[1] r=0 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 67 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=14.998378754s) [2] r=-1 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.456581116s@ mbc={}] enter Start
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 67 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=67) [2]/[1] r=0 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 67 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=67) [2]/[1] r=0 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 67 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=14.998378754s) [2] r=-1 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.456581116s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 67 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=67) [2]/[1] r=0 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000011 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 67 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=67) [2]/[1] r=0 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 67 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=14.998378754s) [2] r=-1 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.456581116s@ mbc={}] exit Start 0.000021 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 67 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=67) [2]/[1] r=0 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 67 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=67) [2]/[1] r=0 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 67 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=14.998378754s) [2] r=-1 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.456581116s@ mbc={}] enter Started/Stray
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 67 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=67) [2]/[1] r=0 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 67 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.629306 1 0.000168
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 67 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.016374 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 67 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.023887 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 67 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.023940 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 67 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 67 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.009805679s) [2] async=[2] r=-1 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 active pruub 110.468399048s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 67 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=67) [2]/[1] r=0 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000792 1 0.000975
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 67 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.009521484s) [2] r=-1 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.468399048s@ mbc={}] exit Reset 0.000343 1 0.000417
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 67 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.009521484s) [2] r=-1 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.468399048s@ mbc={}] enter Started
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 67 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.009521484s) [2] r=-1 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.468399048s@ mbc={}] enter Start
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 67 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.009521484s) [2] r=-1 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.468399048s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 67 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.009521484s) [2] r=-1 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.468399048s@ mbc={}] exit Start 0.000022 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 67 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.009521484s) [2] r=-1 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.468399048s@ mbc={}] enter Started/Stray
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 67 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=67) [2]/[1] r=0 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 67 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=67) [2]/[1] r=0 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 67 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=67) [2]/[1] r=0 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 67 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=67) [2]/[1] r=0 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000119 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 67 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=67) [2]/[1] r=0 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 67 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=67) [2]/[1] r=0 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 67 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=67) [2]/[1] r=0 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 67 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=67) [2]/[1] r=0 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.008245 2 0.000068
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 67 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=67) [2]/[1] r=0 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 67 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000060 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 67 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 67 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000013 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 67 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.6] failed. State was: unregistering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 67 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=67) [2]/[1] r=0 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.007573 2 0.000601
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 67 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=67) [2]/[1] r=0 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 67 handle_osd_map epochs [67,67], i have 67, src has [1,67]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 67 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000191 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 67 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 67 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000043 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 67 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.e] failed. State was: unregistering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.e] failed. State was: unregistering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.6] failed. State was: unregistering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:01.932234+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 72957952 unmapped: 1433600 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 67 handle_osd_map epochs [67,68], i have 67, src has [1,68]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 67 handle_osd_map epochs [68,68], i have 68, src has [1,68]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 68 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.008622 3 0.000429
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 68 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.016639 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 68 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 68 handle_osd_map epochs [68,68], i have 68, src has [1,68]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 68 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=67/68 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/Activating
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 68 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.009709 3 0.000178
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 68 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.018133 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 68 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 68 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 68 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=67/68 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 68 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 68 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.011653 5 0.001091
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 68 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 68 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000125 1 0.000112
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 68 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 68 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=67/68 n=7 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] exit Started/Primary/Active/Activating 0.012411 5 0.001237
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 68 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=67/68 n=7 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 68 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000629 1 0.000058
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 68 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 68 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=-1 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.036810 7 0.000301
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 68 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=-1 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 68 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=-1 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 68 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=-1 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.035789 7 0.000238
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 68 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=-1 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 68 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=-1 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 68 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=-1 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.038384 7 0.000268
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 68 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=-1 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 68 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=-1 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 68 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=-1 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.039098 7 0.000235
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 68 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=-1 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 68 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=-1 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 68 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.043696 2 0.000075
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 68 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 68 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=67/68 n=7 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.044228 1 0.000115
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 68 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=67/68 n=7 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 68 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=67/68 n=7 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000959 1 0.000194
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 68 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=67/68 n=7 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/Recovering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 68 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=67/68 n=7 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.075950 2 0.000082
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 68 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=67/68 n=7 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 68 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=-1 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.116543 1 0.000094
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 68 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=-1 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 68 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=-1 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.116753 1 0.000067
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 68 pg[9.e( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=-1 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.e] failed. State was: unregistering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 68 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=-1 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.113425 1 0.000052
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 68 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=-1 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 68 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=-1 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.113766 1 0.000057
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 68 pg[9.6( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=-1 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.6] failed. State was: unregistering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 68 pg[9.16( v 41'581 (0'0,41'581] lb MIN local-lis/les=65/66 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=-1 lpr=67 DELETING pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.074279 2 0.000446
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 68 pg[9.16( v 41'581 (0'0,41'581] lb MIN local-lis/les=65/66 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=-1 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.190901 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 68 pg[9.16( v 41'581 (0'0,41'581] lb MIN local-lis/les=65/66 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=-1 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.227903 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 68 pg[9.e( v 41'581 (0'0,41'581] lb MIN local-lis/les=65/66 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=-1 lpr=67 DELETING pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.118542 2 0.000624
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 68 pg[9.e( v 41'581 (0'0,41'581] lb MIN local-lis/les=65/66 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=-1 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.235398 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 68 pg[9.e( v 41'581 (0'0,41'581] lb MIN local-lis/les=65/66 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=-1 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.271273 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.e] failed. State was: unregistering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 68 pg[9.1e( v 41'581 (0'0,41'581] lb MIN local-lis/les=65/66 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=-1 lpr=67 DELETING pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.162295 2 0.000382
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 68 pg[9.1e( v 41'581 (0'0,41'581] lb MIN local-lis/les=65/66 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=-1 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.275917 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 68 pg[9.1e( v 41'581 (0'0,41'581] lb MIN local-lis/les=65/66 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=-1 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.314376 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 68 pg[9.6( v 41'581 (0'0,41'581] lb MIN local-lis/les=65/66 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=-1 lpr=67 DELETING pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.206560 2 0.000155
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 68 pg[9.6( v 41'581 (0'0,41'581] lb MIN local-lis/les=65/66 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=-1 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.320401 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 68 pg[9.6( v 41'581 (0'0,41'581] lb MIN local-lis/les=65/66 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=-1 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.359576 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.6] failed. State was: unregistering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:02.932386+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 1466368 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 68 handle_osd_map epochs [69,69], i have 68, src has [1,69]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.905262947s of 10.208797455s, submitted: 92
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 69 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=67/68 n=7 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.886811 1 0.000113
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 69 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=67/68 n=7 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.020843 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 69 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=67/68 n=7 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.037550 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 69 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=67/68 n=7 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.037917 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 69 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=67/68 n=7 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.8] failed. State was: unregistering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 69 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=67/68 n=7 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=69 pruub=14.990410805s) [2] async=[2] r=-1 lpr=69 pi=[49,69)/1 crt=41'581 lcod 0'0 mlcod 0'0 active pruub 112.487251282s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.8] failed. State was: unregistering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 69 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.964124 1 0.000327
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 69 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=67/68 n=7 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=69 pruub=14.990229607s) [2] r=-1 lpr=69 pi=[49,69)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 112.487251282s@ mbc={}] exit Reset 0.000236 1 0.000329
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 69 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.020636 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 69 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.038794 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 69 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=67/68 n=7 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=69 pruub=14.990229607s) [2] r=-1 lpr=69 pi=[49,69)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 112.487251282s@ mbc={}] enter Started
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 69 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.038829 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 69 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[49,67)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 69 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=67/68 n=7 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=69 pruub=14.990229607s) [2] r=-1 lpr=69 pi=[49,69)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 112.487251282s@ mbc={}] enter Start
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 69 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=67/68 n=7 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=69 pruub=14.990229607s) [2] r=-1 lpr=69 pi=[49,69)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 112.487251282s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.18] failed. State was: unregistering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 69 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=69 pruub=14.990164757s) [2] async=[2] r=-1 lpr=69 pi=[49,69)/1 crt=41'581 lcod 0'0 mlcod 0'0 active pruub 112.487251282s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 69 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=67/68 n=7 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=69 pruub=14.990229607s) [2] r=-1 lpr=69 pi=[49,69)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 112.487251282s@ mbc={}] exit Start 0.000039 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 69 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=67/68 n=7 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=69 pruub=14.990229607s) [2] r=-1 lpr=69 pi=[49,69)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 112.487251282s@ mbc={}] enter Started/Stray
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.18] failed. State was: unregistering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 69 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=69 pruub=14.990109444s) [2] r=-1 lpr=69 pi=[49,69)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 112.487251282s@ mbc={}] exit Reset 0.000080 1 0.000176
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 69 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=69 pruub=14.990109444s) [2] r=-1 lpr=69 pi=[49,69)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 112.487251282s@ mbc={}] enter Started
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 69 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=69 pruub=14.990109444s) [2] r=-1 lpr=69 pi=[49,69)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 112.487251282s@ mbc={}] enter Start
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 69 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=69 pruub=14.990109444s) [2] r=-1 lpr=69 pi=[49,69)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 112.487251282s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 69 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=69 pruub=14.990109444s) [2] r=-1 lpr=69 pi=[49,69)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 112.487251282s@ mbc={}] exit Start 0.000015 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 69 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=69 pruub=14.990109444s) [2] r=-1 lpr=69 pi=[49,69)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 112.487251282s@ mbc={}] enter Started/Stray
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.18] failed. State was: unregistering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 69 handle_osd_map epochs [69,69], i have 69, src has [1,69]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.18] failed. State was: unregistering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.8] failed. State was: unregistering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.8] failed. State was: unregistering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:03.932508+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  log_queue is 1 last_log 28 sent 27 num 1 unsent 1 sending 1
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:28:33.923470+0000 osd.1 (osd.1) 28 : cluster [DBG] 3.14 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 69 heartbeat osd_stat(store_statfs(0x4fcadd000/0x0/0x4ffc00000, data 0xd07be/0x13f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 72957952 unmapped: 1433600 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 28) v1
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:28:33.923470+0000 osd.1 (osd.1) 28 : cluster [DBG] 3.14 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _renew_subs
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 69 handle_osd_map epochs [70,70], i have 69, src has [1,70]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 70 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=69) [2] r=-1 lpr=69 pi=[49,69)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.500955 6 0.000128
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 70 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=69) [2] r=-1 lpr=69 pi=[49,69)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 70 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=69) [2] r=-1 lpr=69 pi=[49,69)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 70 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=69) [2] r=-1 lpr=69 pi=[49,69)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000156 1 0.000037
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 70 pg[9.18( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=69) [2] r=-1 lpr=69 pi=[49,69)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.18] failed. State was: not registered w/ OSD
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 70 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=67/68 n=7 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=69) [2] r=-1 lpr=69 pi=[49,69)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.501234 6 0.000248
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 70 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=67/68 n=7 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=69) [2] r=-1 lpr=69 pi=[49,69)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 70 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=67/68 n=7 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=69) [2] r=-1 lpr=69 pi=[49,69)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 70 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=67/68 n=7 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=69) [2] r=-1 lpr=69 pi=[49,69)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000350 1 0.000088
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 70 pg[9.8( v 41'581 (0'0,41'581] local-lis/les=67/68 n=7 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=69) [2] r=-1 lpr=69 pi=[49,69)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.8] failed. State was: not registered w/ OSD
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:04.932686+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  log_queue is 3 last_log 31 sent 28 num 3 unsent 3 sending 3
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:28:33.937579+0000 osd.1 (osd.1) 29 : cluster [DBG] 3.14 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:28:34.898590+0000 osd.1 (osd.1) 30 : cluster [DBG] 3.19 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:28:34.912671+0000 osd.1 (osd.1) 31 : cluster [DBG] 3.19 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 70 pg[9.18( v 41'581 (0'0,41'581] lb MIN local-lis/les=67/68 n=6 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=69) [2] r=-1 lpr=69 DELETING pi=[49,69)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.061467 3 0.000130
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 70 pg[9.18( v 41'581 (0'0,41'581] lb MIN local-lis/les=67/68 n=6 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=69) [2] r=-1 lpr=69 pi=[49,69)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.061682 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 70 pg[9.18( v 41'581 (0'0,41'581] lb MIN local-lis/les=67/68 n=6 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=69) [2] r=-1 lpr=69 pi=[49,69)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.562696 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.18] failed. State was: not registered w/ OSD
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 70 pg[9.8( v 41'581 (0'0,41'581] lb MIN local-lis/les=67/68 n=7 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=69) [2] r=-1 lpr=69 DELETING pi=[49,69)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.127484 3 0.000139
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 70 pg[9.8( v 41'581 (0'0,41'581] lb MIN local-lis/les=67/68 n=7 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=69) [2] r=-1 lpr=69 pi=[49,69)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.127892 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 70 pg[9.8( v 41'581 (0'0,41'581] lb MIN local-lis/les=67/68 n=7 ec=49/34 lis/c=67/49 les/c/f=68/50/0 sis=69) [2] r=-1 lpr=69 pi=[49,69)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.629229 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.8] failed. State was: not registered w/ OSD
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 72966144 unmapped: 1425408 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 31) v1
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:28:33.937579+0000 osd.1 (osd.1) 29 : cluster [DBG] 3.14 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:28:34.898590+0000 osd.1 (osd.1) 30 : cluster [DBG] 3.19 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:28:34.912671+0000 osd.1 (osd.1) 31 : cluster [DBG] 3.19 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:05.932843+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 70 heartbeat osd_stat(store_statfs(0x4fcadb000/0x0/0x4ffc00000, data 0xd2115/0x140000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 72966144 unmapped: 1425408 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 603030 data_alloc: 218103808 data_used: 94208
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:06.933020+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 72974336 unmapped: 1417216 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:07.933143+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 72974336 unmapped: 1417216 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:08.933302+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 3.1a deep-scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 3.1a deep-scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 1409024 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564466465c00
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _renew_subs
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 70 handle_osd_map epochs [71,71], i have 70, src has [1,71]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 71 pg[6.9( v 38'39 (0'0,38'39] local-lis/les=53/54 n=1 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=53) [1] r=0 lpr=53 crt=38'39 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 25.399669 50 0.000149
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 71 pg[6.9( v 38'39 (0'0,38'39] local-lis/les=53/54 n=1 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=53) [1] r=0 lpr=53 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 25.420471 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 71 pg[6.9( v 38'39 (0'0,38'39] local-lis/les=53/54 n=1 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=53) [1] r=0 lpr=53 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 26.403209 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 71 pg[6.9( v 38'39 (0'0,38'39] local-lis/les=53/54 n=1 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=53) [1] r=0 lpr=53 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 26.403232 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 71 pg[6.9( v 38'39 (0'0,38'39] local-lis/les=53/54 n=1 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=53) [1] r=0 lpr=53 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: unregistering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 71 pg[6.9( v 38'39 (0'0,38'39] local-lis/les=53/54 n=1 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=71 pruub=14.599646568s) [0] r=-1 lpr=71 pi=[53,71)/1 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 118.169853210s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: unregistering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 71 pg[6.9( v 38'39 (0'0,38'39] local-lis/les=53/54 n=1 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=71 pruub=14.599572182s) [0] r=-1 lpr=71 pi=[53,71)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 118.169853210s@ mbc={}] exit Reset 0.000119 1 0.000190
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 71 pg[6.9( v 38'39 (0'0,38'39] local-lis/les=53/54 n=1 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=71 pruub=14.599572182s) [0] r=-1 lpr=71 pi=[53,71)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 118.169853210s@ mbc={}] enter Started
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 71 pg[6.9( v 38'39 (0'0,38'39] local-lis/les=53/54 n=1 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=71 pruub=14.599572182s) [0] r=-1 lpr=71 pi=[53,71)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 118.169853210s@ mbc={}] enter Start
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 71 pg[6.9( v 38'39 (0'0,38'39] local-lis/les=53/54 n=1 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=71 pruub=14.599572182s) [0] r=-1 lpr=71 pi=[53,71)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 118.169853210s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 71 pg[6.9( v 38'39 (0'0,38'39] local-lis/les=53/54 n=1 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=71 pruub=14.599572182s) [0] r=-1 lpr=71 pi=[53,71)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 118.169853210s@ mbc={}] exit Start 0.000016 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 71 pg[6.9( v 38'39 (0'0,38'39] local-lis/les=53/54 n=1 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=71 pruub=14.599572182s) [0] r=-1 lpr=71 pi=[53,71)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 118.169853210s@ mbc={}] enter Started/Stray
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: unregistering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 71 handle_osd_map epochs [71,71], i have 71, src has [1,71]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: unregistering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 71 ms_handle_reset con 0x564466465c00 session 0x5644662d7e00
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:09.933529+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 33 sent 31 num 2 unsent 2 sending 2
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:28:38.940881+0000 osd.1 (osd.1) 32 : cluster [DBG] 3.1a deep-scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:28:38.955135+0000 osd.1 (osd.1) 33 : cluster [DBG] 3.1a deep-scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564466465000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 73719808 unmapped: 671744 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 33) v1
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:28:38.940881+0000 osd.1 (osd.1) 32 : cluster [DBG] 3.1a deep-scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:28:38.955135+0000 osd.1 (osd.1) 33 : cluster [DBG] 3.1a deep-scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 71 ms_handle_reset con 0x564466465000 session 0x5644656be960
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 71 heartbeat osd_stat(store_statfs(0x4fcada000/0x0/0x4ffc00000, data 0xd3e2d/0x143000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 71 handle_osd_map epochs [72,72], i have 71, src has [1,72]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 72 pg[6.9( v 38'39 (0'0,38'39] local-lis/les=53/54 n=1 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=71) [0] r=-1 lpr=71 pi=[53,71)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.257340 6 0.000214
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 72 pg[6.9( v 38'39 (0'0,38'39] local-lis/les=53/54 n=1 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=71) [0] r=-1 lpr=71 pi=[53,71)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 72 pg[6.9( v 38'39 (0'0,38'39] local-lis/les=53/54 n=1 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=71) [0] r=-1 lpr=71 pi=[53,71)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 72 pg[6.a( v 38'39 (0'0,38'39] local-lis/les=55/56 n=1 ec=47/21 lis/c=55/55 les/c/f=56/56/0 sis=55) [1] r=0 lpr=55 crt=38'39 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 24.643297 48 0.000683
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 72 pg[6.a( v 38'39 (0'0,38'39] local-lis/les=55/56 n=1 ec=47/21 lis/c=55/55 les/c/f=56/56/0 sis=55) [1] r=0 lpr=55 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 24.650193 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 72 pg[6.a( v 38'39 (0'0,38'39] local-lis/les=55/56 n=1 ec=47/21 lis/c=55/55 les/c/f=56/56/0 sis=55) [1] r=0 lpr=55 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 25.231379 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 72 pg[6.a( v 38'39 (0'0,38'39] local-lis/les=55/56 n=1 ec=47/21 lis/c=55/55 les/c/f=56/56/0 sis=55) [1] r=0 lpr=55 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 25.231512 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 72 pg[6.a( v 38'39 (0'0,38'39] local-lis/les=55/56 n=1 ec=47/21 lis/c=55/55 les/c/f=56/56/0 sis=55) [1] r=0 lpr=55 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: unregistering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 72 pg[6.a( v 38'39 (0'0,38'39] local-lis/les=55/56 n=1 ec=47/21 lis/c=55/55 les/c/f=56/56/0 sis=72 pruub=15.356689453s) [0] r=-1 lpr=72 pi=[55,72)/1 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 120.185195923s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: unregistering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 72 pg[6.a( v 38'39 (0'0,38'39] local-lis/les=55/56 n=1 ec=47/21 lis/c=55/55 les/c/f=56/56/0 sis=72 pruub=15.356639862s) [0] r=-1 lpr=72 pi=[55,72)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 120.185195923s@ mbc={}] exit Reset 0.000074 1 0.000107
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 72 pg[6.a( v 38'39 (0'0,38'39] local-lis/les=55/56 n=1 ec=47/21 lis/c=55/55 les/c/f=56/56/0 sis=72 pruub=15.356639862s) [0] r=-1 lpr=72 pi=[55,72)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 120.185195923s@ mbc={}] enter Started
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 72 pg[6.a( v 38'39 (0'0,38'39] local-lis/les=55/56 n=1 ec=47/21 lis/c=55/55 les/c/f=56/56/0 sis=72 pruub=15.356639862s) [0] r=-1 lpr=72 pi=[55,72)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 120.185195923s@ mbc={}] enter Start
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 72 pg[6.a( v 38'39 (0'0,38'39] local-lis/les=55/56 n=1 ec=47/21 lis/c=55/55 les/c/f=56/56/0 sis=72 pruub=15.356639862s) [0] r=-1 lpr=72 pi=[55,72)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 120.185195923s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 72 pg[6.a( v 38'39 (0'0,38'39] local-lis/les=55/56 n=1 ec=47/21 lis/c=55/55 les/c/f=56/56/0 sis=72 pruub=15.356639862s) [0] r=-1 lpr=72 pi=[55,72)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 120.185195923s@ mbc={}] exit Start 0.000009 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 72 pg[6.a( v 38'39 (0'0,38'39] local-lis/les=55/56 n=1 ec=47/21 lis/c=55/55 les/c/f=56/56/0 sis=72 pruub=15.356639862s) [0] r=-1 lpr=72 pi=[55,72)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 120.185195923s@ mbc={}] enter Started/Stray
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 72 pg[6.9( v 38'39 (0'0,38'39] local-lis/les=53/54 n=1 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=71) [0] r=-1 lpr=71 pi=[53,71)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001444 2 0.000063
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 72 pg[6.9( v 38'39 (0'0,38'39] local-lis/les=53/54 n=1 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=71) [0] r=-1 lpr=71 pi=[53,71)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: unregistering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: unregistering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: unregistering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 72 pg[6.9( v 38'39 (0'0,38'39] lb MIN local-lis/les=53/54 n=1 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=71) [0] r=-1 lpr=71 DELETING pi=[53,71)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.009904 1 0.000088
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 72 pg[6.9( v 38'39 (0'0,38'39] lb MIN local-lis/les=53/54 n=1 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=71) [0] r=-1 lpr=71 pi=[53,71)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.011410 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 72 pg[6.9( v 38'39 (0'0,38'39] lb MIN local-lis/les=53/54 n=1 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=71) [0] r=-1 lpr=71 pi=[53,71)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.268815 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: unregistering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:10.933726+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 1802240 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 608576 data_alloc: 218103808 data_used: 106496
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 72 handle_osd_map epochs [73,73], i have 72, src has [1,73]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:11.933948+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 73 pg[6.a( v 38'39 (0'0,38'39] local-lis/les=55/56 n=1 ec=47/21 lis/c=55/55 les/c/f=56/56/0 sis=72) [0] r=-1 lpr=72 pi=[55,72)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.200895 6 0.000104
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 73 pg[6.a( v 38'39 (0'0,38'39] local-lis/les=55/56 n=1 ec=47/21 lis/c=55/55 les/c/f=56/56/0 sis=72) [0] r=-1 lpr=72 pi=[55,72)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 73 pg[6.a( v 38'39 (0'0,38'39] local-lis/les=55/56 n=1 ec=47/21 lis/c=55/55 les/c/f=56/56/0 sis=72) [0] r=-1 lpr=72 pi=[55,72)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 73 pg[6.a( v 38'39 (0'0,38'39] local-lis/les=55/56 n=1 ec=47/21 lis/c=55/55 les/c/f=56/56/0 sis=72) [0] r=-1 lpr=72 pi=[55,72)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001476 2 0.000148
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 73 pg[6.a( v 38'39 (0'0,38'39] local-lis/les=55/56 n=1 ec=47/21 lis/c=55/55 les/c/f=56/56/0 sis=72) [0] r=-1 lpr=72 pi=[55,72)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: unregistering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 73 pg[6.a( v 38'39 (0'0,38'39] lb MIN local-lis/les=55/56 n=1 ec=47/21 lis/c=55/55 les/c/f=56/56/0 sis=72) [0] r=-1 lpr=72 DELETING pi=[55,72)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.004304 1 0.000149
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 73 pg[6.a( v 38'39 (0'0,38'39] lb MIN local-lis/les=55/56 n=1 ec=47/21 lis/c=55/55 les/c/f=56/56/0 sis=72) [0] r=-1 lpr=72 pi=[55,72)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.005912 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 73 pg[6.a( v 38'39 (0'0,38'39] lb MIN local-lis/les=55/56 n=1 ec=47/21 lis/c=55/55 les/c/f=56/56/0 sis=72) [0] r=-1 lpr=72 pi=[55,72)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.206918 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: unregistering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 73654272 unmapped: 1785856 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:12.934089+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 35 sent 33 num 2 unsent 2 sending 2
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:28:42.062909+0000 osd.1 (osd.1) 34 : cluster [DBG] 3.1c scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:28:42.076977+0000 osd.1 (osd.1) 35 : cluster [DBG] 3.1c scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 73 heartbeat osd_stat(store_statfs(0x4fcad4000/0x0/0x4ffc00000, data 0xd7580/0x149000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 7.7 deep-scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 7.7 deep-scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 73662464 unmapped: 1777664 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 35) v1
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:28:42.062909+0000 osd.1 (osd.1) 34 : cluster [DBG] 3.1c scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:28:42.076977+0000 osd.1 (osd.1) 35 : cluster [DBG] 3.1c scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:13.934284+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 37 sent 35 num 2 unsent 2 sending 2
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:28:43.044380+0000 osd.1 (osd.1) 36 : cluster [DBG] 7.7 deep-scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:28:43.058505+0000 osd.1 (osd.1) 37 : cluster [DBG] 7.7 deep-scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 1769472 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 37) v1
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:28:43.044380+0000 osd.1 (osd.1) 36 : cluster [DBG] 7.7 deep-scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:28:43.058505+0000 osd.1 (osd.1) 37 : cluster [DBG] 7.7 deep-scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 73 heartbeat osd_stat(store_statfs(0x4fcad4000/0x0/0x4ffc00000, data 0xd7580/0x149000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:14.934557+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 7.b scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.456120491s of 11.619467735s, submitted: 43
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 7.b scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 1753088 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:15.934726+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 39 sent 37 num 2 unsent 2 sending 2
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:28:45.011160+0000 osd.1 (osd.1) 38 : cluster [DBG] 7.b scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:28:45.025229+0000 osd.1 (osd.1) 39 : cluster [DBG] 7.b scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 7.d scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 7.d scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 1753088 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 613231 data_alloc: 218103808 data_used: 106496
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 39) v1
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:28:45.011160+0000 osd.1 (osd.1) 38 : cluster [DBG] 7.b scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:28:45.025229+0000 osd.1 (osd.1) 39 : cluster [DBG] 7.b scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:16.934963+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 41 sent 39 num 2 unsent 2 sending 2
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:28:46.010506+0000 osd.1 (osd.1) 40 : cluster [DBG] 7.d scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:28:46.024551+0000 osd.1 (osd.1) 41 : cluster [DBG] 7.d scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 73695232 unmapped: 1744896 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 41) v1
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:28:46.010506+0000 osd.1 (osd.1) 40 : cluster [DBG] 7.d scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:28:46.024551+0000 osd.1 (osd.1) 41 : cluster [DBG] 7.d scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:17.935151+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 73695232 unmapped: 1744896 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 73 handle_osd_map epochs [74,74], i have 73, src has [1,74]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 74 pg[6.b(unlocked)] enter Initial
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 74 pg[6.b( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=74) [1] r=0 lpr=0 pi=[57,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000068 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 74 pg[6.b( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=74) [1] r=0 lpr=0 pi=[57,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 74 pg[6.b( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=74) [1] r=0 lpr=74 pi=[57,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000018 1 0.000030
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 74 pg[6.b( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=74) [1] r=0 lpr=74 pi=[57,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 74 pg[6.b( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=74) [1] r=0 lpr=74 pi=[57,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 74 pg[6.b( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=74) [1] r=0 lpr=74 pi=[57,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 74 pg[6.b( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=74) [1] r=0 lpr=74 pi=[57,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000018 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 74 pg[6.b( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=74) [1] r=0 lpr=74 pi=[57,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct 11 04:55:12 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/2545138310' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 11 04:55:12 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/705627932' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 11 04:55:12 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/3270935877' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 11 04:55:12 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/2249552361' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 11 04:55:12 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/745783834' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 11 04:55:12 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/4236166018' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 74 pg[6.b( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=74) [1] r=0 lpr=74 pi=[57,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 74 pg[6.b( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=74) [1] r=0 lpr=74 pi=[57,74)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 74 pg[6.b( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=74) [1] r=0 lpr=74 pi=[57,74)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000204 1 0.000076
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 74 pg[6.b( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=74) [1] r=0 lpr=74 pi=[57,74)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 74 pg[6.b( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=57/58 n=1 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=74) [1] r=0 lpr=74 pi=[57,74)/1 crt=38'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.001377 2 0.000063
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 74 pg[6.b( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=57/58 n=1 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=74) [1] r=0 lpr=74 pi=[57,74)/1 crt=38'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 74 pg[6.b( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=57/58 n=1 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=74) [1] r=0 lpr=74 pi=[57,74)/1 crt=38'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000008 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 74 pg[6.b( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=57/58 n=1 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=74) [1] r=0 lpr=74 pi=[57,74)/1 crt=38'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:18.935420+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 73719808 unmapped: 1720320 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 74 handle_osd_map epochs [74,75], i have 74, src has [1,75]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 74 handle_osd_map epochs [74,75], i have 75, src has [1,75]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 75 pg[6.b( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=57/58 n=1 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=74) [1] r=0 lpr=74 pi=[57,74)/1 crt=38'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.712364 2 0.000085
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 75 pg[6.b( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=57/58 n=1 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=74) [1] r=0 lpr=74 pi=[57,74)/1 crt=38'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 0.714027 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 75 pg[6.b( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=57/58 n=1 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=74) [1] r=0 lpr=74 pi=[57,74)/1 crt=38'39 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 75 pg[6.b( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=74/75 n=1 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=74) [1] r=0 lpr=74 pi=[57,74)/1 crt=38'39 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 75 pg[6.b( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=74/75 n=1 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=74) [1] r=0 lpr=74 pi=[57,74)/1 crt=38'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 75 pg[6.b( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=74/75 n=1 ec=47/21 lis/c=74/57 les/c/f=75/58/0 sis=74) [1] r=0 lpr=74 pi=[57,74)/1 crt=38'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.005552 3 0.000272
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 75 pg[6.b( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=74/75 n=1 ec=47/21 lis/c=74/57 les/c/f=75/58/0 sis=74) [1] r=0 lpr=74 pi=[57,74)/1 crt=38'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 75 pg[6.b( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=74/75 n=1 ec=47/21 lis/c=74/57 les/c/f=75/58/0 sis=74) [1] r=0 lpr=74 pi=[57,74)/1 crt=38'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000163 1 0.000070
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 75 pg[6.b( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=74/75 n=1 ec=47/21 lis/c=74/57 les/c/f=75/58/0 sis=74) [1] r=0 lpr=74 pi=[57,74)/1 crt=38'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 75 pg[6.b( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=74/75 n=1 ec=47/21 lis/c=74/57 les/c/f=75/58/0 sis=74) [1] r=0 lpr=74 pi=[57,74)/1 crt=38'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000013 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 75 pg[6.b( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=74/75 n=1 ec=47/21 lis/c=74/57 les/c/f=75/58/0 sis=74) [1] r=0 lpr=74 pi=[57,74)/1 crt=38'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 75 pg[6.b( v 38'39 (0'0,38'39] local-lis/les=74/75 n=1 ec=47/21 lis/c=74/57 les/c/f=75/58/0 sis=74) [1] r=0 lpr=74 pi=[57,74)/1 crt=38'39 mlcod 38'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.011584 3 0.000128
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 75 pg[6.b( v 38'39 (0'0,38'39] local-lis/les=74/75 n=1 ec=47/21 lis/c=74/57 les/c/f=75/58/0 sis=74) [1] r=0 lpr=74 pi=[57,74)/1 crt=38'39 mlcod 38'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 75 pg[6.b( v 38'39 (0'0,38'39] local-lis/les=74/75 n=1 ec=47/21 lis/c=74/57 les/c/f=75/58/0 sis=74) [1] r=0 lpr=74 pi=[57,74)/1 crt=38'39 mlcod 38'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000021 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 75 pg[6.b( v 38'39 (0'0,38'39] local-lis/les=74/75 n=1 ec=47/21 lis/c=74/57 les/c/f=75/58/0 sis=74) [1] r=0 lpr=74 pi=[57,74)/1 crt=38'39 mlcod 38'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:19.935631+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 43 sent 41 num 2 unsent 2 sending 2
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:28:49.058086+0000 osd.1 (osd.1) 42 : cluster [DBG] 7.10 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:28:49.071948+0000 osd.1 (osd.1) 43 : cluster [DBG] 7.10 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 75 heartbeat osd_stat(store_statfs(0x4fcad1000/0x0/0x4ffc00000, data 0xd9298/0x14c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 73728000 unmapped: 1712128 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 43) v1
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:28:49.058086+0000 osd.1 (osd.1) 42 : cluster [DBG] 7.10 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:28:49.071948+0000 osd.1 (osd.1) 43 : cluster [DBG] 7.10 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:20.935858+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 45 sent 43 num 2 unsent 2 sending 2
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:28:50.058208+0000 osd.1 (osd.1) 44 : cluster [DBG] 7.12 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:28:50.072273+0000 osd.1 (osd.1) 45 : cluster [DBG] 7.12 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 75 heartbeat osd_stat(store_statfs(0x4fcacd000/0x0/0x4ffc00000, data 0xdaccb/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 75 handle_osd_map epochs [76,76], i have 75, src has [1,76]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 76 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=41'581 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 45.261066 79 0.000288
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 76 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=41'581 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 45.270424 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 76 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=41'581 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 45.270509 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 76 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=41'581 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 45.270547 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 76 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=41'581 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.c] failed. State was: unregistering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 76 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=76 pruub=10.739331245s) [2] r=-1 lpr=76 pi=[49,76)/1 crt=41'581 lcod 0'0 mlcod 0'0 active pruub 125.969543457s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.c] failed. State was: unregistering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 76 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=76 pruub=10.739258766s) [2] r=-1 lpr=76 pi=[49,76)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 125.969543457s@ mbc={}] exit Reset 0.000117 1 0.000189
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 76 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=76 pruub=10.739258766s) [2] r=-1 lpr=76 pi=[49,76)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 125.969543457s@ mbc={}] enter Started
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 76 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=76 pruub=10.739258766s) [2] r=-1 lpr=76 pi=[49,76)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 125.969543457s@ mbc={}] enter Start
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 76 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=76 pruub=10.739258766s) [2] r=-1 lpr=76 pi=[49,76)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 125.969543457s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 76 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=76 pruub=10.739258766s) [2] r=-1 lpr=76 pi=[49,76)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 125.969543457s@ mbc={}] exit Start 0.000016 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 76 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=76 pruub=10.739258766s) [2] r=-1 lpr=76 pi=[49,76)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 125.969543457s@ mbc={}] enter Started/Stray
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 76 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=41'581 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 45.251846 79 0.000271
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 76 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=41'581 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 45.267150 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 76 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=41'581 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 45.267302 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 76 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=41'581 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 45.267387 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 76 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=41'581 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 76 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=76 pruub=10.747873306s) [2] r=-1 lpr=76 pi=[49,76)/1 crt=41'581 lcod 0'0 mlcod 0'0 active pruub 125.979316711s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 76 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=76 pruub=10.747629166s) [2] r=-1 lpr=76 pi=[49,76)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 125.979316711s@ mbc={}] exit Reset 0.000319 1 0.000506
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 76 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=76 pruub=10.747629166s) [2] r=-1 lpr=76 pi=[49,76)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 125.979316711s@ mbc={}] enter Started
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 76 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=76 pruub=10.747629166s) [2] r=-1 lpr=76 pi=[49,76)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 125.979316711s@ mbc={}] enter Start
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 76 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=76 pruub=10.747629166s) [2] r=-1 lpr=76 pi=[49,76)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 125.979316711s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 76 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=76 pruub=10.747629166s) [2] r=-1 lpr=76 pi=[49,76)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 125.979316711s@ mbc={}] exit Start 0.000105 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 76 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=76 pruub=10.747629166s) [2] r=-1 lpr=76 pi=[49,76)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 125.979316711s@ mbc={}] enter Started/Stray
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.c] failed. State was: unregistering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 76 handle_osd_map epochs [76,76], i have 76, src has [1,76]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 73768960 unmapped: 1671168 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 628455 data_alloc: 218103808 data_used: 114688
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 45) v1
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:28:50.058208+0000 osd.1 (osd.1) 44 : cluster [DBG] 7.12 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:28:50.072273+0000 osd.1 (osd.1) 45 : cluster [DBG] 7.12 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 76 handle_osd_map epochs [76,77], i have 76, src has [1,77]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 77 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=76) [2] r=-1 lpr=76 pi=[49,76)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.507045 3 0.000089
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 77 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=76) [2] r=-1 lpr=76 pi=[49,76)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.507111 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 77 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=76) [2] r=-1 lpr=76 pi=[49,76)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 77 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=77) [2]/[1] r=0 lpr=77 pi=[49,77)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 77 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=77) [2]/[1] r=0 lpr=77 pi=[49,77)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000100 1 0.000139
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 77 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=77) [2]/[1] r=0 lpr=77 pi=[49,77)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 77 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=77) [2]/[1] r=0 lpr=77 pi=[49,77)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 77 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=77) [2]/[1] r=0 lpr=77 pi=[49,77)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 77 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=77) [2]/[1] r=0 lpr=77 pi=[49,77)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000012 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 77 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=77) [2]/[1] r=0 lpr=77 pi=[49,77)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 77 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=77) [2]/[1] r=0 lpr=77 pi=[49,77)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 77 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=77) [2]/[1] r=0 lpr=77 pi=[49,77)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 77 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=76) [2] r=-1 lpr=76 pi=[49,76)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.506954 3 0.000288
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 77 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=76) [2] r=-1 lpr=76 pi=[49,76)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.507155 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 77 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=76) [2] r=-1 lpr=76 pi=[49,76)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 77 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=77) [2]/[1] r=0 lpr=77 pi=[49,77)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 77 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=77) [2]/[1] r=0 lpr=77 pi=[49,77)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000127 1 0.000161
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 77 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=77) [2]/[1] r=0 lpr=77 pi=[49,77)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 77 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=77) [2]/[1] r=0 lpr=77 pi=[49,77)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 77 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=77) [2]/[1] r=0 lpr=77 pi=[49,77)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 77 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=77) [2]/[1] r=0 lpr=77 pi=[49,77)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000011 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 77 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=77) [2]/[1] r=0 lpr=77 pi=[49,77)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 77 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=77) [2]/[1] r=0 lpr=77 pi=[49,77)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 77 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=77) [2]/[1] r=0 lpr=77 pi=[49,77)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 77 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=77) [2]/[1] r=0 lpr=77 pi=[49,77)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.005288 2 0.000068
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 77 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=77) [2]/[1] r=0 lpr=77 pi=[49,77)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 77 handle_osd_map epochs [77,77], i have 77, src has [1,77]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 77 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=77) [2]/[1] r=0 lpr=77 pi=[49,77)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.003846 2 0.000065
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 77 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=77) [2]/[1] async=[2] r=0 lpr=77 pi=[49,77)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000047 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 77 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=77) [2]/[1] r=0 lpr=77 pi=[49,77)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 77 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=77) [2]/[1] async=[2] r=0 lpr=77 pi=[49,77)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 77 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=77) [2]/[1] async=[2] r=0 lpr=77 pi=[49,77)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000126 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 77 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=77) [2]/[1] async=[2] r=0 lpr=77 pi=[49,77)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000129 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 77 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=77) [2]/[1] async=[2] r=0 lpr=77 pi=[49,77)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 77 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=77) [2]/[1] async=[2] r=0 lpr=77 pi=[49,77)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 77 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=77) [2]/[1] async=[2] r=0 lpr=77 pi=[49,77)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000012 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 77 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=77) [2]/[1] async=[2] r=0 lpr=77 pi=[49,77)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:21.936103+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 1581056 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 77 handle_osd_map epochs [77,78], i have 77, src has [1,78]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 77 handle_osd_map epochs [78,78], i have 78, src has [1,78]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 78 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=77) [2]/[1] async=[2] r=0 lpr=77 pi=[49,77)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.004793 3 0.000238
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 78 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=77) [2]/[1] async=[2] r=0 lpr=77 pi=[49,77)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.008879 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 78 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=77) [2]/[1] async=[2] r=0 lpr=77 pi=[49,77)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 78 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=77) [2]/[1] async=[2] r=0 lpr=77 pi=[49,77)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.004924 3 0.000299
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 78 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=77) [2]/[1] async=[2] r=0 lpr=77 pi=[49,77)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.010481 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 78 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=77) [2]/[1] async=[2] r=0 lpr=77 pi=[49,77)/1 crt=41'581 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 78 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=77/78 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=77) [2]/[1] async=[2] r=0 lpr=77 pi=[49,77)/1 crt=41'581 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Activating
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 78 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=77/78 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=77) [2]/[1] async=[2] r=0 lpr=77 pi=[49,77)/1 crt=41'581 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 78 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=77/78 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=77) [2]/[1] async=[2] r=0 lpr=77 pi=[49,77)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 78 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=77/78 n=6 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=77) [2]/[1] async=[2] r=0 lpr=77 pi=[49,77)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/Activating 0.007877 5 0.000562
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 78 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=77/78 n=6 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=77) [2]/[1] async=[2] r=0 lpr=77 pi=[49,77)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 78 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=77/78 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=77) [2]/[1] async=[2] r=0 lpr=77 pi=[49,77)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 78 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=77/78 n=7 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=77) [2]/[1] async=[2] r=0 lpr=77 pi=[49,77)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.008408 5 0.000438
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 78 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=77/78 n=7 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=77) [2]/[1] async=[2] r=0 lpr=77 pi=[49,77)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 78 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=77/78 n=6 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=77) [2]/[1] async=[2] r=0 lpr=77 pi=[49,77)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.001824 1 0.000115
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 78 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=77/78 n=6 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=77) [2]/[1] async=[2] r=0 lpr=77 pi=[49,77)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 78 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=77/78 n=6 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=77) [2]/[1] async=[2] r=0 lpr=77 pi=[49,77)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000501 1 0.000081
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 78 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=77/78 n=6 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=77) [2]/[1] async=[2] r=0 lpr=77 pi=[49,77)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Recovering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 78 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=77/78 n=6 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=77) [2]/[1] async=[2] r=0 lpr=77 pi=[49,77)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.056381 2 0.000067
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 78 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=77/78 n=6 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=77) [2]/[1] async=[2] r=0 lpr=77 pi=[49,77)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 78 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=77/78 n=7 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=77) [2]/[1] async=[2] r=0 lpr=77 pi=[49,77)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.058296 1 0.000069
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 78 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=77/78 n=7 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=77) [2]/[1] async=[2] r=0 lpr=77 pi=[49,77)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 78 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=77/78 n=7 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=77) [2]/[1] async=[2] r=0 lpr=77 pi=[49,77)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000846 1 0.000058
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 78 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=77/78 n=7 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=77) [2]/[1] async=[2] r=0 lpr=77 pi=[49,77)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 78 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=77/78 n=7 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=77) [2]/[1] async=[2] r=0 lpr=77 pi=[49,77)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.044665 2 0.000110
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 78 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=77/78 n=7 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=77) [2]/[1] async=[2] r=0 lpr=77 pi=[49,77)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:22.936237+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 78 pg[6.d(unlocked)] enter Initial
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 78 pg[6.d( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=61/61 les/c/f=62/62/0 sis=78) [1] r=0 lpr=0 pi=[61,78)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000059 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 78 pg[6.d( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=61/61 les/c/f=62/62/0 sis=78) [1] r=0 lpr=0 pi=[61,78)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 78 pg[6.d( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=61/61 les/c/f=62/62/0 sis=78) [1] r=0 lpr=78 pi=[61,78)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000017 1 0.000056
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 78 pg[6.d( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=61/61 les/c/f=62/62/0 sis=78) [1] r=0 lpr=78 pi=[61,78)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 78 pg[6.d( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=61/61 les/c/f=62/62/0 sis=78) [1] r=0 lpr=78 pi=[61,78)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 78 pg[6.d( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=61/61 les/c/f=62/62/0 sis=78) [1] r=0 lpr=78 pi=[61,78)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 78 pg[6.d( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=61/61 les/c/f=62/62/0 sis=78) [1] r=0 lpr=78 pi=[61,78)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000007 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 78 pg[6.d( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=61/61 les/c/f=62/62/0 sis=78) [1] r=0 lpr=78 pi=[61,78)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 78 pg[6.d( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=61/61 les/c/f=62/62/0 sis=78) [1] r=0 lpr=78 pi=[61,78)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 78 pg[6.d( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=61/61 les/c/f=62/62/0 sis=78) [1] r=0 lpr=78 pi=[61,78)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 78 pg[6.d( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=61/61 les/c/f=62/62/0 sis=78) [1] r=0 lpr=78 pi=[61,78)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000113 1 0.000052
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 78 pg[6.d( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=61/61 les/c/f=62/62/0 sis=78) [1] r=0 lpr=78 pi=[61,78)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 78 pg[6.d( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=61/62 n=1 ec=47/21 lis/c=61/61 les/c/f=62/62/0 sis=78) [1] r=0 lpr=78 pi=[61,78)/1 crt=38'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetLog 0.000851 2 0.000047
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 78 pg[6.d( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=61/62 n=1 ec=47/21 lis/c=61/61 les/c/f=62/62/0 sis=78) [1] r=0 lpr=78 pi=[61,78)/1 crt=38'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/GetMissing
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 78 pg[6.d( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=61/62 n=1 ec=47/21 lis/c=61/61 les/c/f=62/62/0 sis=78) [1] r=0 lpr=78 pi=[61,78)/1 crt=38'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetMissing 0.000021 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 78 pg[6.d( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=61/62 n=1 ec=47/21 lis/c=61/61 les/c/f=62/62/0 sis=78) [1] r=0 lpr=78 pi=[61,78)/1 crt=38'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 73867264 unmapped: 1572864 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 78 handle_osd_map epochs [79,79], i have 78, src has [1,79]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 78 handle_osd_map epochs [79,79], i have 79, src has [1,79]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 79 pg[6.d( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=61/62 n=1 ec=47/21 lis/c=61/61 les/c/f=62/62/0 sis=78) [1] r=0 lpr=78 pi=[61,78)/1 crt=38'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.682531 2 0.000159
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 79 pg[6.d( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=61/62 n=1 ec=47/21 lis/c=61/61 les/c/f=62/62/0 sis=78) [1] r=0 lpr=78 pi=[61,78)/1 crt=38'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering 0.683654 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 79 pg[6.d( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=61/62 n=1 ec=47/21 lis/c=61/61 les/c/f=62/62/0 sis=78) [1] r=0 lpr=78 pi=[61,78)/1 crt=38'39 mlcod 0'0 unknown m=2 mbc={}] enter Started/Primary/Active
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 79 pg[6.d( v 38'39 lc 34'13 (0'0,38'39] local-lis/les=78/79 n=1 ec=47/21 lis/c=61/61 les/c/f=62/62/0 sis=78) [1] r=0 lpr=78 pi=[61,78)/1 crt=38'39 lcod 0'0 mlcod 0'0 activating+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Activating
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 79 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=77/78 n=7 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=77) [2]/[1] async=[2] r=0 lpr=77 pi=[49,77)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.906856 1 0.000106
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 79 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=77/78 n=7 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=77) [2]/[1] async=[2] r=0 lpr=77 pi=[49,77)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.019595 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 79 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=77/78 n=7 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=77) [2]/[1] async=[2] r=0 lpr=77 pi=[49,77)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.030138 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 79 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=77/78 n=7 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=77) [2]/[1] async=[2] r=0 lpr=77 pi=[49,77)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.030599 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 79 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=77/78 n=7 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=77) [2]/[1] async=[2] r=0 lpr=77 pi=[49,77)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.c] failed. State was: unregistering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 79 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=77/78 n=7 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=79 pruub=14.988212585s) [2] async=[2] r=-1 lpr=79 pi=[49,79)/1 crt=41'581 lcod 0'0 mlcod 0'0 active pruub 132.756530762s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.c] failed. State was: unregistering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 79 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=77/78 n=7 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=79 pruub=14.987908363s) [2] r=-1 lpr=79 pi=[49,79)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.756530762s@ mbc={}] exit Reset 0.000431 1 0.001125
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 79 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=77/78 n=6 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=77) [2]/[1] async=[2] r=0 lpr=77 pi=[49,77)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.953639 1 0.000145
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 79 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=77/78 n=6 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=77) [2]/[1] async=[2] r=0 lpr=77 pi=[49,77)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.020719 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 79 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=77/78 n=6 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=77) [2]/[1] async=[2] r=0 lpr=77 pi=[49,77)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.029657 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 79 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=77/78 n=7 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=79 pruub=14.987908363s) [2] r=-1 lpr=79 pi=[49,79)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.756530762s@ mbc={}] enter Started
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 79 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=77/78 n=7 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=79 pruub=14.987908363s) [2] r=-1 lpr=79 pi=[49,79)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.756530762s@ mbc={}] enter Start
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 79 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=77/78 n=6 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=77) [2]/[1] async=[2] r=0 lpr=77 pi=[49,77)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.029722 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 79 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=77/78 n=7 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=79 pruub=14.987908363s) [2] r=-1 lpr=79 pi=[49,79)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.756530762s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 79 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=77/78 n=7 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=79 pruub=14.987908363s) [2] r=-1 lpr=79 pi=[49,79)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.756530762s@ mbc={}] exit Start 0.000127 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 79 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=77/78 n=6 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=77) [2]/[1] async=[2] r=0 lpr=77 pi=[49,77)/1 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 79 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=77/78 n=7 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=79 pruub=14.987908363s) [2] r=-1 lpr=79 pi=[49,79)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.756530762s@ mbc={}] enter Started/Stray
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 79 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=77/78 n=6 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=79 pruub=14.986744881s) [2] async=[2] r=-1 lpr=79 pi=[49,79)/1 crt=41'581 lcod 0'0 mlcod 0'0 active pruub 132.755737305s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 79 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=77/78 n=6 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=79 pruub=14.986459732s) [2] r=-1 lpr=79 pi=[49,79)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.755737305s@ mbc={}] exit Reset 0.000437 1 0.000694
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 79 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=77/78 n=6 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=79 pruub=14.986459732s) [2] r=-1 lpr=79 pi=[49,79)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.755737305s@ mbc={}] enter Started
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 79 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=77/78 n=6 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=79 pruub=14.986459732s) [2] r=-1 lpr=79 pi=[49,79)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.755737305s@ mbc={}] enter Start
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 79 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=77/78 n=6 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=79 pruub=14.986459732s) [2] r=-1 lpr=79 pi=[49,79)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.755737305s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 79 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=77/78 n=6 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=79 pruub=14.986459732s) [2] r=-1 lpr=79 pi=[49,79)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.755737305s@ mbc={}] exit Start 0.000072 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 79 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=77/78 n=6 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=79 pruub=14.986459732s) [2] r=-1 lpr=79 pi=[49,79)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.755737305s@ mbc={}] enter Started/Stray
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 79 pg[6.d( v 38'39 lc 34'13 (0'0,38'39] local-lis/les=78/79 n=1 ec=47/21 lis/c=61/61 les/c/f=62/62/0 sis=78) [1] r=0 lpr=78 pi=[61,78)/1 crt=38'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 79 pg[6.d( v 38'39 lc 34'13 (0'0,38'39] local-lis/les=78/79 n=1 ec=47/21 lis/c=78/61 les/c/f=79/62/0 sis=78) [1] r=0 lpr=78 pi=[61,78)/1 crt=38'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/Activating 0.006011 4 0.000354
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 79 pg[6.d( v 38'39 lc 34'13 (0'0,38'39] local-lis/les=78/79 n=1 ec=47/21 lis/c=78/61 les/c/f=79/62/0 sis=78) [1] r=0 lpr=78 pi=[61,78)/1 crt=38'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 79 pg[6.d( v 38'39 lc 34'13 (0'0,38'39] local-lis/les=78/79 n=1 ec=47/21 lis/c=78/61 les/c/f=79/62/0 sis=78) [1] r=0 lpr=78 pi=[61,78)/1 crt=38'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000072 1 0.000055
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 79 pg[6.d( v 38'39 lc 34'13 (0'0,38'39] local-lis/les=78/79 n=1 ec=47/21 lis/c=78/61 les/c/f=79/62/0 sis=78) [1] r=0 lpr=78 pi=[61,78)/1 crt=38'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 79 pg[6.d( v 38'39 lc 34'13 (0'0,38'39] local-lis/les=78/79 n=1 ec=47/21 lis/c=78/61 les/c/f=79/62/0 sis=78) [1] r=0 lpr=78 pi=[61,78)/1 crt=38'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000018 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 79 pg[6.d( v 38'39 lc 34'13 (0'0,38'39] local-lis/les=78/79 n=1 ec=47/21 lis/c=78/61 les/c/f=79/62/0 sis=78) [1] r=0 lpr=78 pi=[61,78)/1 crt=38'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Recovering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.c] failed. State was: unregistering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 79 handle_osd_map epochs [79,79], i have 79, src has [1,79]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 79 handle_osd_map epochs [79,79], i have 79, src has [1,79]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.c] failed. State was: unregistering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 79 pg[6.d( v 38'39 (0'0,38'39] local-lis/les=78/79 n=1 ec=47/21 lis/c=78/61 les/c/f=79/62/0 sis=78) [1] r=0 lpr=78 pi=[61,78)/1 crt=38'39 mlcod 38'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.067522 2 0.000125
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 79 pg[6.d( v 38'39 (0'0,38'39] local-lis/les=78/79 n=1 ec=47/21 lis/c=78/61 les/c/f=79/62/0 sis=78) [1] r=0 lpr=78 pi=[61,78)/1 crt=38'39 mlcod 38'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 79 pg[6.d( v 38'39 (0'0,38'39] local-lis/les=78/79 n=1 ec=47/21 lis/c=78/61 les/c/f=79/62/0 sis=78) [1] r=0 lpr=78 pi=[61,78)/1 crt=38'39 mlcod 38'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000020 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 79 pg[6.d( v 38'39 (0'0,38'39] local-lis/les=78/79 n=1 ec=47/21 lis/c=78/61 les/c/f=79/62/0 sis=78) [1] r=0 lpr=78 pi=[61,78)/1 crt=38'39 mlcod 38'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:23.936416+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 73932800 unmapped: 1507328 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:24.936598+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _renew_subs
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 79 handle_osd_map epochs [80,80], i have 79, src has [1,80]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.865801811s of 10.039705276s, submitted: 43
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 80 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=77/78 n=6 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=79) [2] r=-1 lpr=79 pi=[49,79)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.383874 6 0.000290
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 80 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=77/78 n=6 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=79) [2] r=-1 lpr=79 pi=[49,79)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 80 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=77/78 n=7 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=79) [2] r=-1 lpr=79 pi=[49,79)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.384362 6 0.000582
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 80 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=77/78 n=7 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=79) [2] r=-1 lpr=79 pi=[49,79)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 80 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=77/78 n=7 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=79) [2] r=-1 lpr=79 pi=[49,79)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 80 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=77/78 n=6 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=79) [2] r=-1 lpr=79 pi=[49,79)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 80 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=77/78 n=7 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=79) [2] r=-1 lpr=79 pi=[49,79)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000155 1 0.000078
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 80 pg[9.c( v 41'581 (0'0,41'581] local-lis/les=77/78 n=7 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=79) [2] r=-1 lpr=79 pi=[49,79)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.c] failed. State was: unregistering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 80 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=77/78 n=6 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=79) [2] r=-1 lpr=79 pi=[49,79)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000387 1 0.000169
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 80 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=77/78 n=6 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=79) [2] r=-1 lpr=79 pi=[49,79)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 80 pg[9.c( v 41'581 (0'0,41'581] lb MIN local-lis/les=77/78 n=7 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=79) [2] r=-1 lpr=79 DELETING pi=[49,79)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.050521 3 0.000262
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 80 pg[9.c( v 41'581 (0'0,41'581] lb MIN local-lis/les=77/78 n=7 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=79) [2] r=-1 lpr=79 pi=[49,79)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.050734 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 80 pg[9.c( v 41'581 (0'0,41'581] lb MIN local-lis/les=77/78 n=7 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=79) [2] r=-1 lpr=79 pi=[49,79)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.435462 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.c] failed. State was: unregistering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 80 pg[9.1c( v 41'581 (0'0,41'581] lb MIN local-lis/les=77/78 n=6 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=79) [2] r=-1 lpr=79 DELETING pi=[49,79)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.102319 3 0.000151
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 80 pg[9.1c( v 41'581 (0'0,41'581] lb MIN local-lis/les=77/78 n=6 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=79) [2] r=-1 lpr=79 pi=[49,79)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.102791 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 80 pg[9.1c( v 41'581 (0'0,41'581] lb MIN local-lis/les=77/78 n=6 ec=49/34 lis/c=77/49 les/c/f=78/50/0 sis=79) [2] r=-1 lpr=79 pi=[49,79)/1 crt=41'581 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.486848 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 73940992 unmapped: 1499136 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 80 heartbeat osd_stat(store_statfs(0x4fcac1000/0x0/0x4ffc00000, data 0xe1c6f/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:25.936725+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 73940992 unmapped: 1499136 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 627882 data_alloc: 218103808 data_used: 110592
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:26.936860+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 47 sent 45 num 2 unsent 2 sending 2
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:28:56.078542+0000 osd.1 (osd.1) 46 : cluster [DBG] 7.14 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:28:56.092638+0000 osd.1 (osd.1) 47 : cluster [DBG] 7.14 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 1458176 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 47) v1
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:28:56.078542+0000 osd.1 (osd.1) 46 : cluster [DBG] 7.14 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:28:56.092638+0000 osd.1 (osd.1) 47 : cluster [DBG] 7.14 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:27.937115+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 80 heartbeat osd_stat(store_statfs(0x4fcabe000/0x0/0x4ffc00000, data 0xe3921/0x15e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 80 handle_osd_map epochs [81,82], i have 80, src has [1,82]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 80 handle_osd_map epochs [81,82], i have 82, src has [1,82]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 7.16 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 7.16 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 1441792 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 82 handle_osd_map epochs [83,83], i have 82, src has [1,83]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:28.937239+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 49 sent 47 num 2 unsent 2 sending 2
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:28:58.137252+0000 osd.1 (osd.1) 48 : cluster [DBG] 7.16 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:28:58.151506+0000 osd.1 (osd.1) 49 : cluster [DBG] 7.16 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 1441792 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 49) v1
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:28:58.137252+0000 osd.1 (osd.1) 48 : cluster [DBG] 7.16 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:28:58.151506+0000 osd.1 (osd.1) 49 : cluster [DBG] 7.16 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 83 heartbeat osd_stat(store_statfs(0x4fcab5000/0x0/0x4ffc00000, data 0xe8be9/0x167000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:29.937390+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 51 sent 49 num 2 unsent 2 sending 2
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:28:59.119388+0000 osd.1 (osd.1) 50 : cluster [DBG] 7.17 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:28:59.133508+0000 osd.1 (osd.1) 51 : cluster [DBG] 7.17 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 74006528 unmapped: 1433600 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 51) v1
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:28:59.119388+0000 osd.1 (osd.1) 50 : cluster [DBG] 7.17 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:28:59.133508+0000 osd.1 (osd.1) 51 : cluster [DBG] 7.17 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:30.937621+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 53 sent 51 num 2 unsent 2 sending 2
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:29:00.118990+0000 osd.1 (osd.1) 52 : cluster [DBG] 7.19 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:29:00.133103+0000 osd.1 (osd.1) 53 : cluster [DBG] 7.19 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 74014720 unmapped: 1425408 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 638674 data_alloc: 218103808 data_used: 114688
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 83 handle_osd_map epochs [84,84], i have 83, src has [1,84]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 53) v1
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:29:00.118990+0000 osd.1 (osd.1) 52 : cluster [DBG] 7.19 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:29:00.133103+0000 osd.1 (osd.1) 53 : cluster [DBG] 7.19 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:31.937860+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 84 heartbeat osd_stat(store_statfs(0x4fcab3000/0x0/0x4ffc00000, data 0xea766/0x16a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 74031104 unmapped: 1409024 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:32.938005+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 74031104 unmapped: 1409024 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:33.938175+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 7.1d scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 7.1d scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 74039296 unmapped: 1400832 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:34.938403+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 55 sent 53 num 2 unsent 2 sending 2
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:29:04.103142+0000 osd.1 (osd.1) 54 : cluster [DBG] 7.1d scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:29:04.117050+0000 osd.1 (osd.1) 55 : cluster [DBG] 7.1d scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.005683899s of 10.098912239s, submitted: 20
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 1392640 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 55) v1
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:29:04.103142+0000 osd.1 (osd.1) 54 : cluster [DBG] 7.1d scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:29:04.117050+0000 osd.1 (osd.1) 55 : cluster [DBG] 7.1d scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:35.938843+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 57 sent 55 num 2 unsent 2 sending 2
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:29:05.150004+0000 osd.1 (osd.1) 56 : cluster [DBG] 7.1e scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:29:05.164054+0000 osd.1 (osd.1) 57 : cluster [DBG] 7.1e scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 8.1 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 8.1 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 84 handle_osd_map epochs [85,86], i have 84, src has [1,86]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 74039296 unmapped: 1400832 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 653561 data_alloc: 218103808 data_used: 143360
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 57) v1
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:29:05.150004+0000 osd.1 (osd.1) 56 : cluster [DBG] 7.1e scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:29:05.164054+0000 osd.1 (osd.1) 57 : cluster [DBG] 7.1e scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:36.939298+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 59 sent 57 num 2 unsent 2 sending 2
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:29:06.123607+0000 osd.1 (osd.1) 58 : cluster [DBG] 8.1 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:29:06.137667+0000 osd.1 (osd.1) 59 : cluster [DBG] 8.1 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 8.3 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 8.3 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 1392640 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 59) v1
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:29:06.123607+0000 osd.1 (osd.1) 58 : cluster [DBG] 8.1 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:29:06.137667+0000 osd.1 (osd.1) 59 : cluster [DBG] 8.1 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:37.939533+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 61 sent 59 num 2 unsent 2 sending 2
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:29:07.103632+0000 osd.1 (osd.1) 60 : cluster [DBG] 8.3 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:29:07.117438+0000 osd.1 (osd.1) 61 : cluster [DBG] 8.3 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 86 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xede60/0x170000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 86 handle_osd_map epochs [87,88], i have 86, src has [1,88]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 86 handle_osd_map epochs [87,88], i have 88, src has [1,88]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 1335296 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 61) v1
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:29:07.103632+0000 osd.1 (osd.1) 60 : cluster [DBG] 8.3 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:29:07.117438+0000 osd.1 (osd.1) 61 : cluster [DBG] 8.3 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 88 handle_osd_map epochs [88,89], i have 88, src has [1,89]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 89 pg[9.15(unlocked)] enter Initial
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 89 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=56/56 les/c/f=57/57/0 sis=89) [1] r=0 lpr=0 pi=[56,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000151 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 89 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=56/56 les/c/f=57/57/0 sis=89) [1] r=0 lpr=0 pi=[56,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 89 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=56/56 les/c/f=57/57/0 sis=89) [1] r=0 lpr=89 pi=[56,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000042 1 0.000086
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 89 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=56/56 les/c/f=57/57/0 sis=89) [1] r=0 lpr=89 pi=[56,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 89 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=56/56 les/c/f=57/57/0 sis=89) [1] r=0 lpr=89 pi=[56,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 89 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=56/56 les/c/f=57/57/0 sis=89) [1] r=0 lpr=89 pi=[56,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 89 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=56/56 les/c/f=57/57/0 sis=89) [1] r=0 lpr=89 pi=[56,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000207 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 89 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=56/56 les/c/f=57/57/0 sis=89) [1] r=0 lpr=89 pi=[56,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 89 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=56/56 les/c/f=57/57/0 sis=89) [1] r=0 lpr=89 pi=[56,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 89 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=56/56 les/c/f=57/57/0 sis=89) [1] r=0 lpr=89 pi=[56,89)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 89 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=56/56 les/c/f=57/57/0 sis=89) [1] r=0 lpr=89 pi=[56,89)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000255 1 0.000422
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 89 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=56/56 les/c/f=57/57/0 sis=89) [1] r=0 lpr=89 pi=[56,89)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 89 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=56/56 les/c/f=57/57/0 sis=89) [1] r=0 lpr=89 pi=[56,89)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000110 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 89 handle_osd_map epochs [89,89], i have 89, src has [1,89]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 89 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=56/56 les/c/f=57/57/0 sis=89) [1] r=0 lpr=89 pi=[56,89)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000514 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 89 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=56/56 les/c/f=57/57/0 sis=89) [1] r=0 lpr=89 pi=[56,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:38.939735+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 8.5 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 8.5 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 1335296 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 89 handle_osd_map epochs [89,90], i have 89, src has [1,90]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 89 handle_osd_map epochs [90,90], i have 90, src has [1,90]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 90 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=56/56 les/c/f=57/57/0 sis=89) [1] r=0 lpr=89 pi=[56,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.892115 2 0.000327
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 90 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=56/56 les/c/f=57/57/0 sis=89) [1] r=0 lpr=89 pi=[56,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.892781 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 90 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=56/56 les/c/f=57/57/0 sis=89) [1] r=0 lpr=89 pi=[56,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.893099 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 90 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=56/56 les/c/f=57/57/0 sis=89) [1] r=0 lpr=89 pi=[56,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.15] failed. State was: unregistering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 90 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=56/56 les/c/f=57/57/0 sis=90) [1]/[0] r=-1 lpr=90 pi=[56,90)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.15] failed. State was: unregistering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 90 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=56/56 les/c/f=57/57/0 sis=90) [1]/[0] r=-1 lpr=90 pi=[56,90)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000493 1 0.000649
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 90 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=56/56 les/c/f=57/57/0 sis=90) [1]/[0] r=-1 lpr=90 pi=[56,90)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 90 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=56/56 les/c/f=57/57/0 sis=90) [1]/[0] r=-1 lpr=90 pi=[56,90)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 90 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=56/56 les/c/f=57/57/0 sis=90) [1]/[0] r=-1 lpr=90 pi=[56,90)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 90 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=56/56 les/c/f=57/57/0 sis=90) [1]/[0] r=-1 lpr=90 pi=[56,90)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000254 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 90 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=56/56 les/c/f=57/57/0 sis=90) [1]/[0] r=-1 lpr=90 pi=[56,90)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.15] failed. State was: unregistering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:39.939928+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 63 sent 61 num 2 unsent 2 sending 2
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:29:09.073075+0000 osd.1 (osd.1) 62 : cluster [DBG] 8.5 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:29:09.087136+0000 osd.1 (osd.1) 63 : cluster [DBG] 8.5 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 74113024 unmapped: 1327104 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 63) v1
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:29:09.073075+0000 osd.1 (osd.1) 62 : cluster [DBG] 8.5 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:29:09.087136+0000 osd.1 (osd.1) 63 : cluster [DBG] 8.5 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:40.940117+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _renew_subs
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 90 handle_osd_map epochs [91,91], i have 90, src has [1,91]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 91 pg[9.15( v 41'581 lc 0'0 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=56/56 les/c/f=57/57/0 sis=90) [1]/[0] r=-1 lpr=90 pi=[56,90)/1 crt=41'581 mlcod 0'0 remapped NOTIFY m=5 mbc={}] exit Started/Stray 1.320204 5 0.000538
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 91 pg[9.15( v 41'581 lc 0'0 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=56/56 les/c/f=57/57/0 sis=90) [1]/[0] r=-1 lpr=90 pi=[56,90)/1 crt=41'581 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 91 pg[9.15( v 41'581 lc 0'0 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=56/56 les/c/f=57/57/0 sis=90) [1]/[0] r=-1 lpr=90 pi=[56,90)/1 crt=41'581 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.15] failed. State was: unregistering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 91 pg[9.15( v 41'581 lc 40'254 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=90/56 les/c/f=91/57/0 sis=90) [1]/[0] r=-1 lpr=90 pi=[56,90)/1 luod=0'0 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.005823 4 0.000511
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 91 pg[9.15( v 41'581 lc 40'254 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=90/56 les/c/f=91/57/0 sis=90) [1]/[0] r=-1 lpr=90 pi=[56,90)/1 luod=0'0 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 91 pg[9.15( v 41'581 lc 40'254 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=90/56 les/c/f=91/57/0 sis=90) [1]/[0] r=-1 lpr=90 pi=[56,90)/1 luod=0'0 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000224 1 0.000049
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 91 pg[9.15( v 41'581 lc 40'254 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=90/56 les/c/f=91/57/0 sis=90) [1]/[0] r=-1 lpr=90 pi=[56,90)/1 luod=0'0 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 91 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=90/56 les/c/f=91/57/0 sis=90) [1]/[0] r=-1 lpr=90 pi=[56,90)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.044544 1 0.000152
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 91 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=90/56 les/c/f=91/57/0 sis=90) [1]/[0] r=-1 lpr=90 pi=[56,90)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 74006528 unmapped: 1433600 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 680732 data_alloc: 218103808 data_used: 143360
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:41.940422+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 91 handle_osd_map epochs [92,92], i have 91, src has [1,92]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 92 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=90/56 les/c/f=91/57/0 sis=90) [1]/[0] r=-1 lpr=90 pi=[56,90)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.765672 1 0.000055
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 92 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=90/56 les/c/f=91/57/0 sis=90) [1]/[0] r=-1 lpr=90 pi=[56,90)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 0.816455 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 92 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=90/56 les/c/f=91/57/0 sis=90) [1]/[0] r=-1 lpr=90 pi=[56,90)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] exit Started 2.137091 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 92 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=90/56 les/c/f=91/57/0 sis=90) [1]/[0] r=-1 lpr=90 pi=[56,90)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] enter Reset
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 92 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=90/56 les/c/f=91/57/0 sis=92) [1] r=0 lpr=92 pi=[56,92)/1 luod=0'0 crt=41'581 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 92 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=90/56 les/c/f=91/57/0 sis=92) [1] r=0 lpr=92 pi=[56,92)/1 crt=41'581 mlcod 0'0 unknown mbc={}] exit Reset 0.000134 1 0.000265
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 92 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=90/56 les/c/f=91/57/0 sis=92) [1] r=0 lpr=92 pi=[56,92)/1 crt=41'581 mlcod 0'0 unknown mbc={}] enter Started
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 92 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=90/56 les/c/f=91/57/0 sis=92) [1] r=0 lpr=92 pi=[56,92)/1 crt=41'581 mlcod 0'0 unknown mbc={}] enter Start
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 92 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=90/56 les/c/f=91/57/0 sis=92) [1] r=0 lpr=92 pi=[56,92)/1 crt=41'581 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 92 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=90/56 les/c/f=91/57/0 sis=92) [1] r=0 lpr=92 pi=[56,92)/1 crt=41'581 mlcod 0'0 unknown mbc={}] exit Start 0.000042 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 92 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=90/56 les/c/f=91/57/0 sis=92) [1] r=0 lpr=92 pi=[56,92)/1 crt=41'581 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 92 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=90/56 les/c/f=91/57/0 sis=92) [1] r=0 lpr=92 pi=[56,92)/1 crt=41'581 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 92 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=90/56 les/c/f=91/57/0 sis=92) [1] r=0 lpr=92 pi=[56,92)/1 crt=41'581 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 92 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=90/56 les/c/f=91/57/0 sis=92) [1] r=0 lpr=92 pi=[56,92)/1 crt=41'581 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000042 1 0.000140
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 92 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=90/56 les/c/f=91/57/0 sis=92) [1] r=0 lpr=92 pi=[56,92)/1 crt=41'581 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: merge_log_dups log.dups.size()=0 olog.dups.size()=17
Oct 11 04:55:12 compute-0 ceph-osd[88467]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=17
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 92 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=90/91 n=6 ec=49/34 lis/c=90/56 les/c/f=91/57/0 sis=92) [1] r=0 lpr=92 pi=[56,92)/1 crt=41'581 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001174 3 0.000062
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 92 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=90/91 n=6 ec=49/34 lis/c=90/56 les/c/f=91/57/0 sis=92) [1] r=0 lpr=92 pi=[56,92)/1 crt=41'581 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 92 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=90/91 n=6 ec=49/34 lis/c=90/56 les/c/f=91/57/0 sis=92) [1] r=0 lpr=92 pi=[56,92)/1 crt=41'581 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000016 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 92 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=90/91 n=6 ec=49/34 lis/c=90/56 les/c/f=91/57/0 sis=92) [1] r=0 lpr=92 pi=[56,92)/1 crt=41'581 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 74031104 unmapped: 1409024 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 92 heartbeat osd_stat(store_statfs(0x4fca9a000/0x0/0x4ffc00000, data 0xf80f0/0x183000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:42.940580+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 92 handle_osd_map epochs [92,93], i have 92, src has [1,93]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 93 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=90/91 n=6 ec=49/34 lis/c=90/56 les/c/f=91/57/0 sis=92) [1] r=0 lpr=92 pi=[56,92)/1 crt=41'581 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.990432 2 0.000122
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 93 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=90/91 n=6 ec=49/34 lis/c=90/56 les/c/f=91/57/0 sis=92) [1] r=0 lpr=92 pi=[56,92)/1 crt=41'581 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.991778 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 93 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=90/91 n=6 ec=49/34 lis/c=90/56 les/c/f=91/57/0 sis=92) [1] r=0 lpr=92 pi=[56,92)/1 crt=41'581 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 93 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=92/93 n=6 ec=49/34 lis/c=90/56 les/c/f=91/57/0 sis=92) [1] r=0 lpr=92 pi=[56,92)/1 crt=41'581 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 93 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=92/93 n=6 ec=49/34 lis/c=90/56 les/c/f=91/57/0 sis=92) [1] r=0 lpr=92 pi=[56,92)/1 crt=41'581 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 93 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=92/93 n=6 ec=49/34 lis/c=92/56 les/c/f=93/57/0 sis=92) [1] r=0 lpr=92 pi=[56,92)/1 crt=41'581 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.011160 3 0.000720
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 93 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=92/93 n=6 ec=49/34 lis/c=92/56 les/c/f=93/57/0 sis=92) [1] r=0 lpr=92 pi=[56,92)/1 crt=41'581 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 93 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=92/93 n=6 ec=49/34 lis/c=92/56 les/c/f=93/57/0 sis=92) [1] r=0 lpr=92 pi=[56,92)/1 crt=41'581 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000025 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 93 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=92/93 n=6 ec=49/34 lis/c=92/56 les/c/f=93/57/0 sis=92) [1] r=0 lpr=92 pi=[56,92)/1 crt=41'581 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 93 handle_osd_map epochs [93,93], i have 93, src has [1,93]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 8.7 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 8.7 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fca96000/0x0/0x4ffc00000, data 0xf9b44/0x186000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 73940992 unmapped: 1499136 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:43.940787+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 65 sent 63 num 2 unsent 2 sending 2
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:29:13.134931+0000 osd.1 (osd.1) 64 : cluster [DBG] 8.7 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:29:13.149041+0000 osd.1 (osd.1) 65 : cluster [DBG] 8.7 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 65) v1
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:29:13.134931+0000 osd.1 (osd.1) 64 : cluster [DBG] 8.7 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:29:13.149041+0000 osd.1 (osd.1) 65 : cluster [DBG] 8.7 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 1490944 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:44.940995+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 93 handle_osd_map epochs [94,94], i have 93, src has [1,95]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _renew_subs
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 94 handle_osd_map epochs [95,95], i have 94, src has [1,95]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 8.8 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 8.8 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.736536980s of 10.001324654s, submitted: 109
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 73957376 unmapped: 1482752 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:45.941177+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 67 sent 65 num 2 unsent 2 sending 2
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:29:15.135039+0000 osd.1 (osd.1) 66 : cluster [DBG] 8.8 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:29:15.149309+0000 osd.1 (osd.1) 67 : cluster [DBG] 8.8 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 67) v1
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:29:15.135039+0000 osd.1 (osd.1) 66 : cluster [DBG] 8.8 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:29:15.149309+0000 osd.1 (osd.1) 67 : cluster [DBG] 8.8 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 8.a scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 8.a scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 73957376 unmapped: 1482752 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695533 data_alloc: 218103808 data_used: 151552
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:46.941394+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 69 sent 67 num 2 unsent 2 sending 2
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:29:16.137968+0000 osd.1 (osd.1) 68 : cluster [DBG] 8.a scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:29:16.152073+0000 osd.1 (osd.1) 69 : cluster [DBG] 8.a scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 69) v1
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:29:16.137968+0000 osd.1 (osd.1) 68 : cluster [DBG] 8.a scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:29:16.152073+0000 osd.1 (osd.1) 69 : cluster [DBG] 8.a scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 95 heartbeat osd_stat(store_statfs(0x4fca90000/0x0/0x4ffc00000, data 0xfcfbf/0x18c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 73973760 unmapped: 1466368 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:47.941601+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 73973760 unmapped: 1466368 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:48.941804+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 73973760 unmapped: 1466368 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 95 heartbeat osd_stat(store_statfs(0x4fca90000/0x0/0x4ffc00000, data 0xfcfbf/0x18c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 95 handle_osd_map epochs [96,96], i have 95, src has [1,96]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 95 handle_osd_map epochs [96,96], i have 96, src has [1,96]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:49.942016+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 71 sent 69 num 2 unsent 2 sending 2
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:29:19.097223+0000 osd.1 (osd.1) 70 : cluster [DBG] 8.13 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:29:19.111420+0000 osd.1 (osd.1) 71 : cluster [DBG] 8.13 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 71) v1
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:29:19.097223+0000 osd.1 (osd.1) 70 : cluster [DBG] 8.13 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:29:19.111420+0000 osd.1 (osd.1) 71 : cluster [DBG] 8.13 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 1458176 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:50.942212+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 96 handle_osd_map epochs [97,97], i have 96, src has [1,97]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75038720 unmapped: 401408 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 702433 data_alloc: 218103808 data_used: 163840
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:51.942399+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 97 handle_osd_map epochs [97,98], i have 97, src has [1,98]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75046912 unmapped: 393216 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:52.942615+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75046912 unmapped: 393216 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:53.942790+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75063296 unmapped: 376832 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 98 handle_osd_map epochs [99,100], i have 98, src has [1,100]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:54.942988+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 9.2 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 9.2 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75079680 unmapped: 360448 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 100 heartbeat osd_stat(store_statfs(0x4fca82000/0x0/0x4ffc00000, data 0x105839/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 100 handle_osd_map epochs [101,101], i have 100, src has [1,101]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 100 handle_osd_map epochs [101,101], i have 101, src has [1,101]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.317509651s of 10.379743576s, submitted: 15
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:55.943146+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 73 sent 71 num 2 unsent 2 sending 2
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:29:25.112188+0000 osd.1 (osd.1) 72 : cluster [DBG] 9.2 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:29:25.147419+0000 osd.1 (osd.1) 73 : cluster [DBG] 9.2 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 101 handle_osd_map epochs [102,102], i have 101, src has [1,102]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75087872 unmapped: 352256 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720478 data_alloc: 218103808 data_used: 167936
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 102 handle_osd_map epochs [102,103], i have 102, src has [1,103]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 73) v1
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:29:25.112188+0000 osd.1 (osd.1) 72 : cluster [DBG] 9.2 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:29:25.147419+0000 osd.1 (osd.1) 73 : cluster [DBG] 9.2 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:56.943411+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 8.16 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 8.16 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75137024 unmapped: 303104 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 103 heartbeat osd_stat(store_statfs(0x4fca76000/0x0/0x4ffc00000, data 0x10a831/0x1a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:57.943541+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 75 sent 73 num 2 unsent 2 sending 2
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:29:27.146649+0000 osd.1 (osd.1) 74 : cluster [DBG] 8.16 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:29:27.160736+0000 osd.1 (osd.1) 75 : cluster [DBG] 8.16 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 8.17 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 8.17 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75145216 unmapped: 294912 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 103 handle_osd_map epochs [103,104], i have 103, src has [1,104]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 75) v1
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:29:27.146649+0000 osd.1 (osd.1) 74 : cluster [DBG] 8.16 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:29:27.160736+0000 osd.1 (osd.1) 75 : cluster [DBG] 8.16 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:58.943792+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 77 sent 75 num 2 unsent 2 sending 2
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:29:28.159012+0000 osd.1 (osd.1) 76 : cluster [DBG] 8.17 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:29:28.173236+0000 osd.1 (osd.1) 77 : cluster [DBG] 8.17 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75145216 unmapped: 294912 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 104 handle_osd_map epochs [104,105], i have 104, src has [1,105]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 77) v1
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:29:28.159012+0000 osd.1 (osd.1) 76 : cluster [DBG] 8.17 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:29:28.173236+0000 osd.1 (osd.1) 77 : cluster [DBG] 8.17 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:59.944036+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75177984 unmapped: 262144 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:00.944165+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 79 sent 77 num 2 unsent 2 sending 2
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:29:30.162796+0000 osd.1 (osd.1) 78 : cluster [DBG] 8.19 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:29:30.176926+0000 osd.1 (osd.1) 79 : cluster [DBG] 8.19 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75186176 unmapped: 253952 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 730972 data_alloc: 218103808 data_used: 167936
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 79) v1
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:29:30.162796+0000 osd.1 (osd.1) 78 : cluster [DBG] 8.19 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:29:30.176926+0000 osd.1 (osd.1) 79 : cluster [DBG] 8.19 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:01.944385+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 105 handle_osd_map epochs [106,107], i have 105, src has [1,107]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75186176 unmapped: 253952 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 107 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0x10de13/0x1aa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:02.944545+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75186176 unmapped: 253952 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:03.944701+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75210752 unmapped: 229376 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _renew_subs
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 107 handle_osd_map epochs [109,110], i have 107, src has [1,110]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 107 handle_osd_map epochs [108,110], i have 107, src has [1,110]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 107 handle_osd_map epochs [108,110], i have 110, src has [1,110]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 110 pg[9.1f(unlocked)] enter Initial
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 110 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=110) [1] r=0 lpr=0 pi=[67,110)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000061 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 110 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=110) [1] r=0 lpr=0 pi=[67,110)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 110 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=110) [1] r=0 lpr=110 pi=[67,110)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000011 1 0.000029
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 110 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=110) [1] r=0 lpr=110 pi=[67,110)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 110 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=110) [1] r=0 lpr=110 pi=[67,110)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 110 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=110) [1] r=0 lpr=110 pi=[67,110)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 110 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=110) [1] r=0 lpr=110 pi=[67,110)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000013 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 110 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=110) [1] r=0 lpr=110 pi=[67,110)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 110 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=110) [1] r=0 lpr=110 pi=[67,110)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 110 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=110) [1] r=0 lpr=110 pi=[67,110)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 110 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=110) [1] r=0 lpr=110 pi=[67,110)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000214 1 0.000056
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 110 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=110) [1] r=0 lpr=110 pi=[67,110)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 110 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=110) [1] r=0 lpr=110 pi=[67,110)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000035 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 110 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=110) [1] r=0 lpr=110 pi=[67,110)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000279 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 110 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=110) [1] r=0 lpr=110 pi=[67,110)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:04.944838+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 81 sent 79 num 2 unsent 2 sending 2
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:29:34.199362+0000 osd.1 (osd.1) 80 : cluster [DBG] 8.1e scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:29:34.213413+0000 osd.1 (osd.1) 81 : cluster [DBG] 8.1e scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 81920 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 110 handle_osd_map epochs [111,111], i have 110, src has [1,111]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.316640854s of 10.404881477s, submitted: 21
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:05.945445+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 110 handle_osd_map epochs [111,111], i have 111, src has [1,111]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 81) v1
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:29:34.199362+0000 osd.1 (osd.1) 80 : cluster [DBG] 8.1e scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:29:34.213413+0000 osd.1 (osd.1) 81 : cluster [DBG] 8.1e scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 111 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=110) [1] r=0 lpr=110 pi=[67,110)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 1.023195 2 0.000078
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 111 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=110) [1] r=0 lpr=110 pi=[67,110)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 1.023509 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 111 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=110) [1] r=0 lpr=110 pi=[67,110)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 1.023552 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 111 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=110) [1] r=0 lpr=110 pi=[67,110)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 111 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=111) [1]/[2] r=-1 lpr=111 pi=[67,111)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 111 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=111) [1]/[2] r=-1 lpr=111 pi=[67,111)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000640 1 0.000701
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 111 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=111) [1]/[2] r=-1 lpr=111 pi=[67,111)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 111 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=111) [1]/[2] r=-1 lpr=111 pi=[67,111)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 111 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=111) [1]/[2] r=-1 lpr=111 pi=[67,111)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 111 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=111) [1]/[2] r=-1 lpr=111 pi=[67,111)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000116 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 111 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=111) [1]/[2] r=-1 lpr=111 pi=[67,111)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75366400 unmapped: 73728 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 751638 data_alloc: 218103808 data_used: 176128
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:06.945577+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 111 handle_osd_map epochs [111,112], i have 111, src has [1,112]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 112 pg[9.1f( v 41'581 lc 0'0 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=111) [1]/[2] r=-1 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 0'0 remapped NOTIFY m=6 mbc={}] exit Started/Stray 0.999648 6 0.000250
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 112 pg[9.1f( v 41'581 lc 0'0 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=111) [1]/[2] r=-1 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 0'0 remapped NOTIFY m=6 mbc={}] enter Started/ReplicaActive
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 112 pg[9.1f( v 41'581 lc 0'0 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=111) [1]/[2] r=-1 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 0'0 remapped NOTIFY m=6 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 112 pg[9.1f( v 41'581 lc 40'113 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=111/67 les/c/f=112/68/0 sis=111) [1]/[2] r=-1 lpr=111 pi=[67,111)/1 luod=0'0 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped m=6 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.007656 3 0.000161
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 112 pg[9.1f( v 41'581 lc 40'113 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=111/67 les/c/f=112/68/0 sis=111) [1]/[2] r=-1 lpr=111 pi=[67,111)/1 luod=0'0 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped m=6 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 112 pg[9.1f( v 41'581 lc 40'113 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=111/67 les/c/f=112/68/0 sis=111) [1]/[2] r=-1 lpr=111 pi=[67,111)/1 luod=0'0 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped m=6 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000129 1 0.000055
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 112 pg[9.1f( v 41'581 lc 40'113 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=111/67 les/c/f=112/68/0 sis=111) [1]/[2] r=-1 lpr=111 pi=[67,111)/1 luod=0'0 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped m=6 mbc={}] enter Started/ReplicaActive/RepRecovering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 112 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=111/67 les/c/f=112/68/0 sis=111) [1]/[2] r=-1 lpr=111 pi=[67,111)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.087760 1 0.000037
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 112 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=111/67 les/c/f=112/68/0 sis=111) [1]/[2] r=-1 lpr=111 pi=[67,111)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 112 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x119ac5/0x1c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 81920 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:07.946214+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 83 sent 81 num 2 unsent 2 sending 2
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:29:37.199076+0000 osd.1 (osd.1) 82 : cluster [DBG] 9.4 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:29:37.252133+0000 osd.1 (osd.1) 83 : cluster [DBG] 9.4 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 83) v1
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:29:37.199076+0000 osd.1 (osd.1) 82 : cluster [DBG] 9.4 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:29:37.252133+0000 osd.1 (osd.1) 83 : cluster [DBG] 9.4 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 112 handle_osd_map epochs [113,113], i have 112, src has [1,113]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 113 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=111/67 les/c/f=112/68/0 sis=111) [1]/[2] r=-1 lpr=111 pi=[67,111)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.942088 1 0.000069
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 113 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=111/67 les/c/f=112/68/0 sis=111) [1]/[2] r=-1 lpr=111 pi=[67,111)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.037817 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 113 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=111/67 les/c/f=112/68/0 sis=111) [1]/[2] r=-1 lpr=111 pi=[67,111)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] exit Started 2.037681 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 113 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=111/67 les/c/f=112/68/0 sis=111) [1]/[2] r=-1 lpr=111 pi=[67,111)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] enter Reset
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 113 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=111/67 les/c/f=112/68/0 sis=113) [1] r=0 lpr=113 pi=[67,113)/1 luod=0'0 crt=41'581 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 113 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=111/67 les/c/f=112/68/0 sis=113) [1] r=0 lpr=113 pi=[67,113)/1 crt=41'581 mlcod 0'0 unknown mbc={}] exit Reset 0.000270 1 0.000419
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 113 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=111/67 les/c/f=112/68/0 sis=113) [1] r=0 lpr=113 pi=[67,113)/1 crt=41'581 mlcod 0'0 unknown mbc={}] enter Started
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 113 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=111/67 les/c/f=112/68/0 sis=113) [1] r=0 lpr=113 pi=[67,113)/1 crt=41'581 mlcod 0'0 unknown mbc={}] enter Start
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 113 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=111/67 les/c/f=112/68/0 sis=113) [1] r=0 lpr=113 pi=[67,113)/1 crt=41'581 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 113 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=111/67 les/c/f=112/68/0 sis=113) [1] r=0 lpr=113 pi=[67,113)/1 crt=41'581 mlcod 0'0 unknown mbc={}] exit Start 0.000106 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 113 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=111/67 les/c/f=112/68/0 sis=113) [1] r=0 lpr=113 pi=[67,113)/1 crt=41'581 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 113 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=111/67 les/c/f=112/68/0 sis=113) [1] r=0 lpr=113 pi=[67,113)/1 crt=41'581 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 113 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=111/67 les/c/f=112/68/0 sis=113) [1] r=0 lpr=113 pi=[67,113)/1 crt=41'581 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 113 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=111/67 les/c/f=112/68/0 sis=113) [1] r=0 lpr=113 pi=[67,113)/1 crt=41'581 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000078 1 0.000345
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 113 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=111/67 les/c/f=112/68/0 sis=113) [1] r=0 lpr=113 pi=[67,113)/1 crt=41'581 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: merge_log_dups log.dups.size()=0 olog.dups.size()=17
Oct 11 04:55:12 compute-0 ceph-osd[88467]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=17
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 113 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=111/112 n=6 ec=49/34 lis/c=111/67 les/c/f=112/68/0 sis=113) [1] r=0 lpr=113 pi=[67,113)/1 crt=41'581 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001005 3 0.000115
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 113 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=111/112 n=6 ec=49/34 lis/c=111/67 les/c/f=112/68/0 sis=113) [1] r=0 lpr=113 pi=[67,113)/1 crt=41'581 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 113 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=111/112 n=6 ec=49/34 lis/c=111/67 les/c/f=112/68/0 sis=113) [1] r=0 lpr=113 pi=[67,113)/1 crt=41'581 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000009 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 113 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=111/112 n=6 ec=49/34 lis/c=111/67 les/c/f=112/68/0 sis=113) [1] r=0 lpr=113 pi=[67,113)/1 crt=41'581 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 81920 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:08.946712+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 113 handle_osd_map epochs [113,114], i have 113, src has [1,114]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 114 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=111/112 n=6 ec=49/34 lis/c=111/67 les/c/f=112/68/0 sis=113) [1] r=0 lpr=113 pi=[67,113)/1 crt=41'581 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.014173 2 0.000092
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 114 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=111/112 n=6 ec=49/34 lis/c=111/67 les/c/f=112/68/0 sis=113) [1] r=0 lpr=113 pi=[67,113)/1 crt=41'581 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.015404 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 114 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=111/112 n=6 ec=49/34 lis/c=111/67 les/c/f=112/68/0 sis=113) [1] r=0 lpr=113 pi=[67,113)/1 crt=41'581 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 114 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=113/114 n=6 ec=49/34 lis/c=111/67 les/c/f=112/68/0 sis=113) [1] r=0 lpr=113 pi=[67,113)/1 crt=41'581 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 114 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=113/114 n=6 ec=49/34 lis/c=111/67 les/c/f=112/68/0 sis=113) [1] r=0 lpr=113 pi=[67,113)/1 crt=41'581 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 114 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=113/114 n=6 ec=49/34 lis/c=113/67 les/c/f=114/68/0 sis=113) [1] r=0 lpr=113 pi=[67,113)/1 crt=41'581 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005296 3 0.000158
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 114 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=113/114 n=6 ec=49/34 lis/c=113/67 les/c/f=114/68/0 sis=113) [1] r=0 lpr=113 pi=[67,113)/1 crt=41'581 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 114 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=113/114 n=6 ec=49/34 lis/c=113/67 les/c/f=114/68/0 sis=113) [1] r=0 lpr=113 pi=[67,113)/1 crt=41'581 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000028 0 0.000000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 pg_epoch: 114 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=113/114 n=6 ec=49/34 lis/c=113/67 les/c/f=114/68/0 sis=113) [1] r=0 lpr=113 pi=[67,113)/1 crt=41'581 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 9.a scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 9.a scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 handle_osd_map epochs [114,114], i have 114, src has [1,114]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75268096 unmapped: 172032 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:09.947058+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 85 sent 83 num 2 unsent 2 sending 2
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:29:39.249066+0000 osd.1 (osd.1) 84 : cluster [DBG] 9.a scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:29:39.298662+0000 osd.1 (osd.1) 85 : cluster [DBG] 9.a scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 85) v1
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:29:39.249066+0000 osd.1 (osd.1) 84 : cluster [DBG] 9.a scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:29:39.298662+0000 osd.1 (osd.1) 85 : cluster [DBG] 9.a scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75284480 unmapped: 155648 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:10.947570+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 772201 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75284480 unmapped: 155648 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:11.948642+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca57000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75284480 unmapped: 155648 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:12.948876+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 87 sent 85 num 2 unsent 2 sending 2
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:29:42.265402+0000 osd.1 (osd.1) 86 : cluster [DBG] 9.10 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:29:42.286545+0000 osd.1 (osd.1) 87 : cluster [DBG] 9.10 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 87) v1
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:29:42.265402+0000 osd.1 (osd.1) 86 : cluster [DBG] 9.10 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:29:42.286545+0000 osd.1 (osd.1) 87 : cluster [DBG] 9.10 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75292672 unmapped: 147456 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:13.949404+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75300864 unmapped: 139264 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:14.950192+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 89 sent 87 num 2 unsent 2 sending 2
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:29:44.270049+0000 osd.1 (osd.1) 88 : cluster [DBG] 9.12 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:29:44.298505+0000 osd.1 (osd.1) 89 : cluster [DBG] 9.12 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 89) v1
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:29:44.270049+0000 osd.1 (osd.1) 88 : cluster [DBG] 9.12 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:29:44.298505+0000 osd.1 (osd.1) 89 : cluster [DBG] 9.12 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75309056 unmapped: 131072 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:15.950972+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 773617 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75317248 unmapped: 122880 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:16.951378+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 9.14 deep-scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.083987236s of 11.304721832s, submitted: 34
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 9.14 deep-scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75317248 unmapped: 122880 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:17.952068+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 91 sent 89 num 2 unsent 2 sending 2
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:29:47.240772+0000 osd.1 (osd.1) 90 : cluster [DBG] 9.14 deep-scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:29:47.279501+0000 osd.1 (osd.1) 91 : cluster [DBG] 9.14 deep-scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 91) v1
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:29:47.240772+0000 osd.1 (osd.1) 90 : cluster [DBG] 9.14 deep-scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:29:47.279501+0000 osd.1 (osd.1) 91 : cluster [DBG] 9.14 deep-scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 9.1a scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 9.1a scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75325440 unmapped: 114688 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:18.952407+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 93 sent 91 num 2 unsent 2 sending 2
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:29:48.286116+0000 osd.1 (osd.1) 92 : cluster [DBG] 9.1a scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:29:48.321291+0000 osd.1 (osd.1) 93 : cluster [DBG] 9.1a scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 93) v1
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:29:48.286116+0000 osd.1 (osd.1) 92 : cluster [DBG] 9.1a scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:29:48.321291+0000 osd.1 (osd.1) 93 : cluster [DBG] 9.1a scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75325440 unmapped: 114688 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:19.952693+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 11.5 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 11.5 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75333632 unmapped: 106496 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:20.952836+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 95 sent 93 num 2 unsent 2 sending 2
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:29:50.270570+0000 osd.1 (osd.1) 94 : cluster [DBG] 11.5 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:29:50.284651+0000 osd.1 (osd.1) 95 : cluster [DBG] 11.5 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 95) v1
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:29:50.270570+0000 osd.1 (osd.1) 94 : cluster [DBG] 11.5 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:29:50.284651+0000 osd.1 (osd.1) 95 : cluster [DBG] 11.5 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 777061 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75333632 unmapped: 106496 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:21.953000+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 98304 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:22.953416+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 98304 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:23.953838+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 98304 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:24.954205+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75350016 unmapped: 90112 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:25.954503+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 777061 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 81920 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:26.954768+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.888096809s of 10.003040314s, submitted: 8
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75366400 unmapped: 73728 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:27.954935+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 97 sent 95 num 2 unsent 2 sending 2
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:29:57.229918+0000 osd.1 (osd.1) 96 : cluster [DBG] 11.7 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:29:57.243967+0000 osd.1 (osd.1) 97 : cluster [DBG] 11.7 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 97) v1
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:29:57.229918+0000 osd.1 (osd.1) 96 : cluster [DBG] 11.7 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:29:57.243967+0000 osd.1 (osd.1) 97 : cluster [DBG] 11.7 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75366400 unmapped: 73728 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:28.955216+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 65536 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:29.955399+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 65536 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:30.955593+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 778209 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 65536 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:31.955735+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 57344 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:32.955889+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 57344 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:33.956031+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 11.a scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 11.a scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 49152 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:34.956227+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 99 sent 97 num 2 unsent 2 sending 2
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:30:04.141533+0000 osd.1 (osd.1) 98 : cluster [DBG] 11.a scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:30:04.155744+0000 osd.1 (osd.1) 99 : cluster [DBG] 11.a scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 99) v1
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:30:04.141533+0000 osd.1 (osd.1) 98 : cluster [DBG] 11.a scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:30:04.155744+0000 osd.1 (osd.1) 99 : cluster [DBG] 11.a scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 49152 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:35.956412+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 779357 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 40960 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:36.956554+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 40960 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:37.956711+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 11.c scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.917269707s of 10.924452782s, submitted: 2
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 11.c scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 40960 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:38.956865+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 101 sent 99 num 2 unsent 2 sending 2
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:30:08.168312+0000 osd.1 (osd.1) 100 : cluster [DBG] 11.c scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:30:08.182444+0000 osd.1 (osd.1) 101 : cluster [DBG] 11.c scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 101) v1
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:30:08.168312+0000 osd.1 (osd.1) 100 : cluster [DBG] 11.c scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:30:08.182444+0000 osd.1 (osd.1) 101 : cluster [DBG] 11.c scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 32768 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:39.957082+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 32768 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:40.957217+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 11.13 deep-scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 11.13 deep-scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 781654 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 24576 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:41.957411+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 103 sent 101 num 2 unsent 2 sending 2
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:30:11.225658+0000 osd.1 (osd.1) 102 : cluster [DBG] 11.13 deep-scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:30:11.239831+0000 osd.1 (osd.1) 103 : cluster [DBG] 11.13 deep-scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 103) v1
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:30:11.225658+0000 osd.1 (osd.1) 102 : cluster [DBG] 11.13 deep-scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:30:11.239831+0000 osd.1 (osd.1) 103 : cluster [DBG] 11.13 deep-scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 24576 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:42.957709+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 16384 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:43.957858+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 16384 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:44.958009+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 16384 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:45.958103+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 11.16 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 11.16 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 782803 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 0 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:46.958249+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 105 sent 103 num 2 unsent 2 sending 2
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:30:16.239201+0000 osd.1 (osd.1) 104 : cluster [DBG] 11.16 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:30:16.253303+0000 osd.1 (osd.1) 105 : cluster [DBG] 11.16 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 105) v1
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:30:16.239201+0000 osd.1 (osd.1) 104 : cluster [DBG] 11.16 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:30:16.253303+0000 osd.1 (osd.1) 105 : cluster [DBG] 11.16 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 0 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:47.958392+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.075263977s of 10.095630646s, submitted: 6
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 1040384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:48.958549+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 107 sent 105 num 2 unsent 2 sending 2
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:30:18.263776+0000 osd.1 (osd.1) 106 : cluster [DBG] 11.1d scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:30:18.277845+0000 osd.1 (osd.1) 107 : cluster [DBG] 11.1d scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 107) v1
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:30:18.263776+0000 osd.1 (osd.1) 106 : cluster [DBG] 11.1d scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:30:18.277845+0000 osd.1 (osd.1) 107 : cluster [DBG] 11.1d scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 1040384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:49.958822+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 1032192 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:50.958950+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 785100 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1024000 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:51.959055+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 109 sent 107 num 2 unsent 2 sending 2
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:30:21.285041+0000 osd.1 (osd.1) 108 : cluster [DBG] 5.11 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:30:21.299161+0000 osd.1 (osd.1) 109 : cluster [DBG] 5.11 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 109) v1
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:30:21.285041+0000 osd.1 (osd.1) 108 : cluster [DBG] 5.11 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:30:21.299161+0000 osd.1 (osd.1) 109 : cluster [DBG] 5.11 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 1015808 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:52.959298+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 1007616 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:53.959378+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 1007616 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:54.959518+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75489280 unmapped: 999424 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:55.959635+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 785100 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75497472 unmapped: 991232 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:56.959790+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75505664 unmapped: 983040 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:57.959915+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75505664 unmapped: 983040 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:58.960064+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75505664 unmapped: 983040 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:59.960252+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 974848 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:00.960424+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.090151787s of 13.102847099s, submitted: 4
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 786248 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 974848 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:01.960544+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 111 sent 109 num 2 unsent 2 sending 2
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:30:31.366679+0000 osd.1 (osd.1) 110 : cluster [DBG] 2.17 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:30:31.380696+0000 osd.1 (osd.1) 111 : cluster [DBG] 2.17 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 111) v1
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:30:31.366679+0000 osd.1 (osd.1) 110 : cluster [DBG] 2.17 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:30:31.380696+0000 osd.1 (osd.1) 111 : cluster [DBG] 2.17 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 974848 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:02.960705+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 5.13 deep-scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 5.13 deep-scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 966656 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:03.960834+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 113 sent 111 num 2 unsent 2 sending 2
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:30:33.400863+0000 osd.1 (osd.1) 112 : cluster [DBG] 5.13 deep-scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:30:33.414957+0000 osd.1 (osd.1) 113 : cluster [DBG] 5.13 deep-scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 113) v1
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:30:33.400863+0000 osd.1 (osd.1) 112 : cluster [DBG] 5.13 deep-scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:30:33.414957+0000 osd.1 (osd.1) 113 : cluster [DBG] 5.13 deep-scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 2.15 deep-scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 2.15 deep-scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 958464 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:04.961048+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 115 sent 113 num 2 unsent 2 sending 2
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:30:34.427300+0000 osd.1 (osd.1) 114 : cluster [DBG] 2.15 deep-scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:30:34.441389+0000 osd.1 (osd.1) 115 : cluster [DBG] 2.15 deep-scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 115) v1
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:30:34.427300+0000 osd.1 (osd.1) 114 : cluster [DBG] 2.15 deep-scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:30:34.441389+0000 osd.1 (osd.1) 115 : cluster [DBG] 2.15 deep-scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 5.12 deep-scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 5.12 deep-scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75538432 unmapped: 950272 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:05.961259+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 117 sent 115 num 2 unsent 2 sending 2
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:30:35.455580+0000 osd.1 (osd.1) 116 : cluster [DBG] 5.12 deep-scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:30:35.469699+0000 osd.1 (osd.1) 117 : cluster [DBG] 5.12 deep-scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 117) v1
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:30:35.455580+0000 osd.1 (osd.1) 116 : cluster [DBG] 5.12 deep-scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:30:35.469699+0000 osd.1 (osd.1) 117 : cluster [DBG] 5.12 deep-scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 789692 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75538432 unmapped: 950272 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:06.961514+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 942080 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:07.961653+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 942080 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:08.961772+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 942080 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:09.961965+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 933888 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:10.962108+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.115386009s of 10.143128395s, submitted: 8
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 790841 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 933888 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:11.962258+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 119 sent 117 num 2 unsent 2 sending 2
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:30:41.509792+0000 osd.1 (osd.1) 118 : cluster [DBG] 10.1a scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:30:41.523906+0000 osd.1 (osd.1) 119 : cluster [DBG] 10.1a scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 119) v1
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:30:41.509792+0000 osd.1 (osd.1) 118 : cluster [DBG] 10.1a scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:30:41.523906+0000 osd.1 (osd.1) 119 : cluster [DBG] 10.1a scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 925696 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:12.962446+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 10.19 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 10.19 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75571200 unmapped: 917504 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:13.962584+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 121 sent 119 num 2 unsent 2 sending 2
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:30:43.505893+0000 osd.1 (osd.1) 120 : cluster [DBG] 10.19 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:30:43.519802+0000 osd.1 (osd.1) 121 : cluster [DBG] 10.19 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 121) v1
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:30:43.505893+0000 osd.1 (osd.1) 120 : cluster [DBG] 10.19 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:30:43.519802+0000 osd.1 (osd.1) 121 : cluster [DBG] 10.19 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75571200 unmapped: 917504 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:14.962847+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75587584 unmapped: 901120 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:15.962988+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 123 sent 121 num 2 unsent 2 sending 2
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:30:45.449498+0000 osd.1 (osd.1) 122 : cluster [DBG] 5.16 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:30:45.463546+0000 osd.1 (osd.1) 123 : cluster [DBG] 5.16 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 123) v1
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:30:45.449498+0000 osd.1 (osd.1) 122 : cluster [DBG] 5.16 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:30:45.463546+0000 osd.1 (osd.1) 123 : cluster [DBG] 5.16 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 794285 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75587584 unmapped: 901120 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:16.963176+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 125 sent 123 num 2 unsent 2 sending 2
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:30:46.409154+0000 osd.1 (osd.1) 124 : cluster [DBG] 5.9 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:30:46.423234+0000 osd.1 (osd.1) 125 : cluster [DBG] 5.9 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 125) v1
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:30:46.409154+0000 osd.1 (osd.1) 124 : cluster [DBG] 5.9 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:30:46.423234+0000 osd.1 (osd.1) 125 : cluster [DBG] 5.9 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75595776 unmapped: 892928 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:17.963403+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75595776 unmapped: 892928 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:18.963565+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75603968 unmapped: 884736 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:19.963740+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75603968 unmapped: 884736 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:20.963871+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 794285 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75603968 unmapped: 884736 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:21.964043+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 5.f scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.827279091s of 10.856827736s, submitted: 8
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 5.f scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75612160 unmapped: 876544 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:22.964181+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 127 sent 125 num 2 unsent 2 sending 2
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:30:52.366819+0000 osd.1 (osd.1) 126 : cluster [DBG] 5.f scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:30:52.380757+0000 osd.1 (osd.1) 127 : cluster [DBG] 5.f scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 127) v1
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:30:52.366819+0000 osd.1 (osd.1) 126 : cluster [DBG] 5.f scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:30:52.380757+0000 osd.1 (osd.1) 127 : cluster [DBG] 5.f scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75612160 unmapped: 876544 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:23.964394+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75620352 unmapped: 868352 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:24.964507+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 10.6 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 10.6 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:25.964599+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 129 sent 127 num 2 unsent 2 sending 2
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:30:55.333878+0000 osd.1 (osd.1) 128 : cluster [DBG] 10.6 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:30:55.348066+0000 osd.1 (osd.1) 129 : cluster [DBG] 10.6 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75628544 unmapped: 860160 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 129) v1
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:30:55.333878+0000 osd.1 (osd.1) 128 : cluster [DBG] 10.6 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:30:55.348066+0000 osd.1 (osd.1) 129 : cluster [DBG] 10.6 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796580 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:26.964790+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75636736 unmapped: 851968 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:27.964914+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75636736 unmapped: 851968 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:28.965061+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75636736 unmapped: 851968 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 2.d deep-scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 2.d deep-scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:29.965236+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 131 sent 129 num 2 unsent 2 sending 2
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:30:59.439670+0000 osd.1 (osd.1) 130 : cluster [DBG] 2.d deep-scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:30:59.453928+0000 osd.1 (osd.1) 131 : cluster [DBG] 2.d deep-scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75644928 unmapped: 843776 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 10.b scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 10.b scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 131) v1
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:30:59.439670+0000 osd.1 (osd.1) 130 : cluster [DBG] 2.d deep-scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:30:59.453928+0000 osd.1 (osd.1) 131 : cluster [DBG] 2.d deep-scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:30.965420+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 133 sent 131 num 2 unsent 2 sending 2
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:00.487304+0000 osd.1 (osd.1) 132 : cluster [DBG] 10.b scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:00.500763+0000 osd.1 (osd.1) 133 : cluster [DBG] 10.b scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75661312 unmapped: 827392 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 133) v1
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:00.487304+0000 osd.1 (osd.1) 132 : cluster [DBG] 10.b scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:00.500763+0000 osd.1 (osd.1) 133 : cluster [DBG] 10.b scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 798875 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:31.965796+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75661312 unmapped: 827392 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:32.965923+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75669504 unmapped: 819200 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:33.966057+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75669504 unmapped: 819200 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:34.966220+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75677696 unmapped: 811008 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:35.966379+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75677696 unmapped: 811008 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 798875 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:36.966536+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75685888 unmapped: 802816 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.976668358s of 15.009184837s, submitted: 8
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:37.966679+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 135 sent 133 num 2 unsent 2 sending 2
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:07.375810+0000 osd.1 (osd.1) 134 : cluster [DBG] 10.2 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:07.389929+0000 osd.1 (osd.1) 135 : cluster [DBG] 10.2 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 794624 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 135) v1
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:07.375810+0000 osd.1 (osd.1) 134 : cluster [DBG] 10.2 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:07.389929+0000 osd.1 (osd.1) 135 : cluster [DBG] 10.2 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:38.966893+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 794624 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:39.967098+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 137 sent 135 num 2 unsent 2 sending 2
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:09.358230+0000 osd.1 (osd.1) 136 : cluster [DBG] 2.5 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:09.371651+0000 osd.1 (osd.1) 137 : cluster [DBG] 2.5 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75702272 unmapped: 786432 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 137) v1
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:09.358230+0000 osd.1 (osd.1) 136 : cluster [DBG] 2.5 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:09.371651+0000 osd.1 (osd.1) 137 : cluster [DBG] 2.5 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 2.a scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 2.a scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:40.967413+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 139 sent 137 num 2 unsent 2 sending 2
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:10.371094+0000 osd.1 (osd.1) 138 : cluster [DBG] 2.a scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:10.385167+0000 osd.1 (osd.1) 139 : cluster [DBG] 2.a scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75702272 unmapped: 786432 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 139) v1
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:10.371094+0000 osd.1 (osd.1) 138 : cluster [DBG] 2.a scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:10.385167+0000 osd.1 (osd.1) 139 : cluster [DBG] 2.a scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 5.c scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 5.c scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 803464 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:41.967646+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 141 sent 139 num 2 unsent 2 sending 2
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:11.350620+0000 osd.1 (osd.1) 140 : cluster [DBG] 5.c scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:11.364688+0000 osd.1 (osd.1) 141 : cluster [DBG] 5.c scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75710464 unmapped: 778240 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 141) v1
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:11.350620+0000 osd.1 (osd.1) 140 : cluster [DBG] 5.c scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:11.364688+0000 osd.1 (osd.1) 141 : cluster [DBG] 5.c scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:42.967908+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75710464 unmapped: 778240 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:43.968232+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:44.968528+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:45.968828+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 143 sent 141 num 2 unsent 2 sending 2
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:15.408884+0000 osd.1 (osd.1) 142 : cluster [DBG] 2.9 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:15.423000+0000 osd.1 (osd.1) 143 : cluster [DBG] 2.9 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75726848 unmapped: 761856 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 143) v1
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:15.408884+0000 osd.1 (osd.1) 142 : cluster [DBG] 2.9 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:15.423000+0000 osd.1 (osd.1) 143 : cluster [DBG] 2.9 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 805759 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:46.969306+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 145 sent 143 num 2 unsent 2 sending 2
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:16.419688+0000 osd.1 (osd.1) 144 : cluster [DBG] 4.14 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:16.433816+0000 osd.1 (osd.1) 145 : cluster [DBG] 4.14 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76775424 unmapped: 761856 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 145) v1
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:16.419688+0000 osd.1 (osd.1) 144 : cluster [DBG] 4.14 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:16.433816+0000 osd.1 (osd.1) 145 : cluster [DBG] 4.14 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:47.969610+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76783616 unmapped: 753664 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.012090683s of 11.058633804s, submitted: 12
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:48.969826+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 147 sent 145 num 2 unsent 2 sending 2
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:18.434750+0000 osd.1 (osd.1) 146 : cluster [DBG] 4.12 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:18.448688+0000 osd.1 (osd.1) 147 : cluster [DBG] 4.12 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 1802240 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 147) v1
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:18.434750+0000 osd.1 (osd.1) 146 : cluster [DBG] 4.12 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:18.448688+0000 osd.1 (osd.1) 147 : cluster [DBG] 4.12 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 4.f scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 4.f scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:49.970114+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 149 sent 147 num 2 unsent 2 sending 2
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:19.454433+0000 osd.1 (osd.1) 148 : cluster [DBG] 4.f scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:19.468650+0000 osd.1 (osd.1) 149 : cluster [DBG] 4.f scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 1794048 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 149) v1
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:19.454433+0000 osd.1 (osd.1) 148 : cluster [DBG] 4.f scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:19.468650+0000 osd.1 (osd.1) 149 : cluster [DBG] 4.f scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:50.970382+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75751424 unmapped: 1785856 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 808054 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:51.970575+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 1777664 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:52.970700+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 1777664 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:53.970866+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75767808 unmapped: 1769472 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:54.971057+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75767808 unmapped: 1769472 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:55.971216+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 151 sent 149 num 2 unsent 2 sending 2
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:25.598089+0000 osd.1 (osd.1) 150 : cluster [DBG] 4.10 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:25.612133+0000 osd.1 (osd.1) 151 : cluster [DBG] 4.10 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75767808 unmapped: 1769472 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 151) v1
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:25.598089+0000 osd.1 (osd.1) 150 : cluster [DBG] 4.10 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:25.612133+0000 osd.1 (osd.1) 151 : cluster [DBG] 4.10 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 809202 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:56.971423+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75767808 unmapped: 1769472 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:57.971562+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75767808 unmapped: 1769472 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:58.971723+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75776000 unmapped: 1761280 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 4.d scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.112494469s of 11.135998726s, submitted: 6
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 4.d scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:59.972247+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 153 sent 151 num 2 unsent 2 sending 2
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:29.570895+0000 osd.1 (osd.1) 152 : cluster [DBG] 4.d scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:29.584774+0000 osd.1 (osd.1) 153 : cluster [DBG] 4.d scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75776000 unmapped: 1761280 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 153) v1
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:29.570895+0000 osd.1 (osd.1) 152 : cluster [DBG] 4.d scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:29.584774+0000 osd.1 (osd.1) 153 : cluster [DBG] 4.d scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:00.972405+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 155 sent 153 num 2 unsent 2 sending 2
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:30.576218+0000 osd.1 (osd.1) 154 : cluster [DBG] 6.1 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:30.590362+0000 osd.1 (osd.1) 155 : cluster [DBG] 6.1 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75776000 unmapped: 1761280 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 155) v1
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:30.576218+0000 osd.1 (osd.1) 154 : cluster [DBG] 6.1 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:30.590362+0000 osd.1 (osd.1) 155 : cluster [DBG] 6.1 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812643 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:01.972945+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 157 sent 155 num 2 unsent 2 sending 2
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:31.622934+0000 osd.1 (osd.1) 156 : cluster [DBG] 4.9 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:31.637036+0000 osd.1 (osd.1) 157 : cluster [DBG] 4.9 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75784192 unmapped: 1753088 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 157) v1
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:31.622934+0000 osd.1 (osd.1) 156 : cluster [DBG] 4.9 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:31.637036+0000 osd.1 (osd.1) 157 : cluster [DBG] 4.9 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:02.973560+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 159 sent 157 num 2 unsent 2 sending 2
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:32.631401+0000 osd.1 (osd.1) 158 : cluster [DBG] 4.5 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:32.645470+0000 osd.1 (osd.1) 159 : cluster [DBG] 4.5 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75784192 unmapped: 1753088 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 159) v1
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:32.631401+0000 osd.1 (osd.1) 158 : cluster [DBG] 4.5 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:32.645470+0000 osd.1 (osd.1) 159 : cluster [DBG] 4.5 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:03.973801+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 161 sent 159 num 2 unsent 2 sending 2
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:33.589326+0000 osd.1 (osd.1) 160 : cluster [DBG] 4.7 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:33.603410+0000 osd.1 (osd.1) 161 : cluster [DBG] 4.7 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 1728512 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 161) v1
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:33.589326+0000 osd.1 (osd.1) 160 : cluster [DBG] 4.7 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:33.603410+0000 osd.1 (osd.1) 161 : cluster [DBG] 4.7 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:04.974045+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 1728512 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:05.974198+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 1720320 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 814937 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:06.974345+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 1720320 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:07.974489+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 163 sent 161 num 2 unsent 2 sending 2
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:37.527604+0000 osd.1 (osd.1) 162 : cluster [DBG] 4.8 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:37.541727+0000 osd.1 (osd.1) 163 : cluster [DBG] 4.8 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 1712128 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 163) v1
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:37.527604+0000 osd.1 (osd.1) 162 : cluster [DBG] 4.8 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:37.541727+0000 osd.1 (osd.1) 163 : cluster [DBG] 4.8 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 10.f scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 10.f scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:08.974853+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 165 sent 163 num 2 unsent 2 sending 2
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:38.542673+0000 osd.1 (osd.1) 164 : cluster [DBG] 10.f scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:38.556753+0000 osd.1 (osd.1) 165 : cluster [DBG] 10.f scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 1712128 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 165) v1
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:38.542673+0000 osd.1 (osd.1) 164 : cluster [DBG] 10.f scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:38.556753+0000 osd.1 (osd.1) 165 : cluster [DBG] 10.f scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:09.975025+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 1703936 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.929329872s of 10.991934776s, submitted: 14
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:10.975152+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 167 sent 165 num 2 unsent 2 sending 2
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:40.562574+0000 osd.1 (osd.1) 166 : cluster [DBG] 2.7 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:40.576673+0000 osd.1 (osd.1) 167 : cluster [DBG] 2.7 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 1703936 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 167) v1
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:40.562574+0000 osd.1 (osd.1) 166 : cluster [DBG] 2.7 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:40.576673+0000 osd.1 (osd.1) 167 : cluster [DBG] 2.7 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:11.975418+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 169 sent 167 num 2 unsent 2 sending 2
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:41.521253+0000 osd.1 (osd.1) 168 : cluster [DBG] 2.6 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:41.535436+0000 osd.1 (osd.1) 169 : cluster [DBG] 2.6 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 819526 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 1695744 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 169) v1
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:41.521253+0000 osd.1 (osd.1) 168 : cluster [DBG] 2.6 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:41.535436+0000 osd.1 (osd.1) 169 : cluster [DBG] 2.6 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:12.975690+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 1695744 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:13.975815+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 1695744 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:14.975947+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 1695744 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:15.976123+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 1679360 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:16.976274+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 819526 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 1679360 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:17.976383+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 171 sent 169 num 2 unsent 2 sending 2
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:47.518531+0000 osd.1 (osd.1) 170 : cluster [DBG] 5.1 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:47.532598+0000 osd.1 (osd.1) 171 : cluster [DBG] 5.1 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 171) v1
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:47.518531+0000 osd.1 (osd.1) 170 : cluster [DBG] 5.1 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:47.532598+0000 osd.1 (osd.1) 171 : cluster [DBG] 5.1 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:18.976592+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 10.11 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 10.11 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:19.976755+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 173 sent 171 num 2 unsent 2 sending 2
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:49.516313+0000 osd.1 (osd.1) 172 : cluster [DBG] 10.11 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:49.530387+0000 osd.1 (osd.1) 173 : cluster [DBG] 10.11 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 1662976 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 173) v1
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:49.516313+0000 osd.1 (osd.1) 172 : cluster [DBG] 10.11 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:49.530387+0000 osd.1 (osd.1) 173 : cluster [DBG] 10.11 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 10.10 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 10.10 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:20.976925+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 175 sent 173 num 2 unsent 2 sending 2
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:50.501402+0000 osd.1 (osd.1) 174 : cluster [DBG] 10.10 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:50.515412+0000 osd.1 (osd.1) 175 : cluster [DBG] 10.10 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1654784 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 175) v1
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:50.501402+0000 osd.1 (osd.1) 174 : cluster [DBG] 10.10 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:50.515412+0000 osd.1 (osd.1) 175 : cluster [DBG] 10.10 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.907430649s of 10.942457199s, submitted: 10
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:21.977152+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 177 sent 175 num 2 unsent 2 sending 2
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:51.505033+0000 osd.1 (osd.1) 176 : cluster [DBG] 2.1b scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:51.519096+0000 osd.1 (osd.1) 177 : cluster [DBG] 2.1b scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 824119 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 1646592 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 177) v1
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:51.505033+0000 osd.1 (osd.1) 176 : cluster [DBG] 2.1b scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:51.519096+0000 osd.1 (osd.1) 177 : cluster [DBG] 2.1b scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:22.977422+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 1646592 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:23.977617+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 1646592 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:24.977765+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1638400 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:25.977923+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1638400 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:26.978070+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 824119 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 1630208 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 10.12 deep-scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 10.12 deep-scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:27.978218+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 179 sent 177 num 2 unsent 2 sending 2
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:57.544059+0000 osd.1 (osd.1) 178 : cluster [DBG] 10.12 deep-scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:57.558158+0000 osd.1 (osd.1) 179 : cluster [DBG] 10.12 deep-scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75915264 unmapped: 1622016 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 179) v1
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:57.544059+0000 osd.1 (osd.1) 178 : cluster [DBG] 10.12 deep-scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:57.558158+0000 osd.1 (osd.1) 179 : cluster [DBG] 10.12 deep-scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:28.978395+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 181 sent 179 num 2 unsent 2 sending 2
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:58.550164+0000 osd.1 (osd.1) 180 : cluster [DBG] 5.1d scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:58.564183+0000 osd.1 (osd.1) 181 : cluster [DBG] 5.1d scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75923456 unmapped: 1613824 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 181) v1
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:58.550164+0000 osd.1 (osd.1) 180 : cluster [DBG] 5.1d scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:58.564183+0000 osd.1 (osd.1) 181 : cluster [DBG] 5.1d scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:29.978570+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75923456 unmapped: 1613824 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:30.978719+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 1605632 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:31.978849+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826416 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 1605632 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.966858864s of 10.997442245s, submitted: 6
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:32.978970+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 183 sent 181 num 2 unsent 2 sending 2
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:32:02.502532+0000 osd.1 (osd.1) 182 : cluster [DBG] 5.1a scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:32:02.516728+0000 osd.1 (osd.1) 183 : cluster [DBG] 5.1a scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 1605632 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 183) v1
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:32:02.502532+0000 osd.1 (osd.1) 182 : cluster [DBG] 5.1a scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:32:02.516728+0000 osd.1 (osd.1) 183 : cluster [DBG] 5.1a scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 10.13 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 10.13 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:33.979194+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 185 sent 183 num 2 unsent 2 sending 2
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:32:03.541654+0000 osd.1 (osd.1) 184 : cluster [DBG] 10.13 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:32:03.555791+0000 osd.1 (osd.1) 185 : cluster [DBG] 10.13 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 1597440 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 185) v1
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:32:03.541654+0000 osd.1 (osd.1) 184 : cluster [DBG] 10.13 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:32:03.555791+0000 osd.1 (osd.1) 185 : cluster [DBG] 10.13 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 10.14 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 10.14 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:34.979431+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 187 sent 185 num 2 unsent 2 sending 2
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:32:04.543710+0000 osd.1 (osd.1) 186 : cluster [DBG] 10.14 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:32:04.561321+0000 osd.1 (osd.1) 187 : cluster [DBG] 10.14 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 1597440 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 187) v1
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:32:04.543710+0000 osd.1 (osd.1) 186 : cluster [DBG] 10.14 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:32:04.561321+0000 osd.1 (osd.1) 187 : cluster [DBG] 10.14 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 5.18 deep-scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 5.18 deep-scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:35.979690+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 189 sent 187 num 2 unsent 2 sending 2
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:32:05.512407+0000 osd.1 (osd.1) 188 : cluster [DBG] 5.18 deep-scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:32:05.526487+0000 osd.1 (osd.1) 189 : cluster [DBG] 5.18 deep-scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75948032 unmapped: 1589248 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 189) v1
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:32:05.512407+0000 osd.1 (osd.1) 188 : cluster [DBG] 5.18 deep-scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:32:05.526487+0000 osd.1 (osd.1) 189 : cluster [DBG] 5.18 deep-scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:36.979972+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831010 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75948032 unmapped: 1589248 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:37.980153+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 191 sent 189 num 2 unsent 2 sending 2
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:32:07.531605+0000 osd.1 (osd.1) 190 : cluster [DBG] 5.19 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:32:07.545940+0000 osd.1 (osd.1) 191 : cluster [DBG] 5.19 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1581056 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 191) v1
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:32:07.531605+0000 osd.1 (osd.1) 190 : cluster [DBG] 5.19 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:32:07.545940+0000 osd.1 (osd.1) 191 : cluster [DBG] 5.19 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:38.980380+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 1572864 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:39.980560+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 1572864 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:40.980688+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 1564672 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:41.980833+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 832158 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 1564672 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.024751663s of 10.058527946s, submitted: 10
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:42.981002+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 193 sent 191 num 2 unsent 2 sending 2
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:32:12.561299+0000 osd.1 (osd.1) 192 : cluster [DBG] 6.2 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:32:12.575221+0000 osd.1 (osd.1) 193 : cluster [DBG] 6.2 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 1556480 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 193) v1
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:32:12.561299+0000 osd.1 (osd.1) 192 : cluster [DBG] 6.2 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:32:12.575221+0000 osd.1 (osd.1) 193 : cluster [DBG] 6.2 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:43.981217+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75915264 unmapped: 1622016 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:44.981447+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75915264 unmapped: 1622016 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:45.981613+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75915264 unmapped: 1622016 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:46.981755+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 833305 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75923456 unmapped: 1613824 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:47.981881+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 195 sent 193 num 2 unsent 2 sending 2
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:32:17.550111+0000 osd.1 (osd.1) 194 : cluster [DBG] 6.6 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:32:17.567715+0000 osd.1 (osd.1) 195 : cluster [DBG] 6.6 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 195) v1
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:32:17.550111+0000 osd.1 (osd.1) 194 : cluster [DBG] 6.6 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:32:17.567715+0000 osd.1 (osd.1) 195 : cluster [DBG] 6.6 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 1605632 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:48.982157+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 1605632 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 6.e scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 6.e scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:49.982410+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 197 sent 195 num 2 unsent 2 sending 2
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:32:19.566702+0000 osd.1 (osd.1) 196 : cluster [DBG] 6.e scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:32:19.584322+0000 osd.1 (osd.1) 197 : cluster [DBG] 6.e scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 197) v1
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:32:19.566702+0000 osd.1 (osd.1) 196 : cluster [DBG] 6.e scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:32:19.584322+0000 osd.1 (osd.1) 197 : cluster [DBG] 6.e scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 1597440 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:50.982685+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 1597440 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 6.c scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 6.c scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:51.982918+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 199 sent 197 num 2 unsent 2 sending 2
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:32:21.494394+0000 osd.1 (osd.1) 198 : cluster [DBG] 6.c scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:32:21.512054+0000 osd.1 (osd.1) 199 : cluster [DBG] 6.c scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 199) v1
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:32:21.494394+0000 osd.1 (osd.1) 198 : cluster [DBG] 6.c scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:32:21.512054+0000 osd.1 (osd.1) 199 : cluster [DBG] 6.c scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836746 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1581056 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:52.983240+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 201 sent 199 num 2 unsent 2 sending 2
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:32:22.506995+0000 osd.1 (osd.1) 200 : cluster [DBG] 6.4 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:32:22.535023+0000 osd.1 (osd.1) 201 : cluster [DBG] 6.4 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 201) v1
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:32:22.506995+0000 osd.1 (osd.1) 200 : cluster [DBG] 6.4 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:32:22.535023+0000 osd.1 (osd.1) 201 : cluster [DBG] 6.4 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1581056 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:53.983461+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 1572864 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:54.983637+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 1572864 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 6.b scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.874675751s of 12.912480354s, submitted: 10
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 6.b scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:55.983780+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 203 sent 201 num 2 unsent 2 sending 2
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:32:25.473662+0000 osd.1 (osd.1) 202 : cluster [DBG] 6.b scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:32:25.491313+0000 osd.1 (osd.1) 203 : cluster [DBG] 6.b scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 203) v1
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:32:25.473662+0000 osd.1 (osd.1) 202 : cluster [DBG] 6.b scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:32:25.491313+0000 osd.1 (osd.1) 203 : cluster [DBG] 6.b scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 1564672 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 6.d scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 6.d scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:56.983998+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 205 sent 203 num 2 unsent 2 sending 2
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:32:26.438648+0000 osd.1 (osd.1) 204 : cluster [DBG] 6.d scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:32:26.459842+0000 osd.1 (osd.1) 205 : cluster [DBG] 6.d scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 840187 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 205) v1
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:32:26.438648+0000 osd.1 (osd.1) 204 : cluster [DBG] 6.d scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:32:26.459842+0000 osd.1 (osd.1) 205 : cluster [DBG] 6.d scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 1564672 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:57.984249+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 207 sent 205 num 2 unsent 2 sending 2
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:32:27.463674+0000 osd.1 (osd.1) 206 : cluster [DBG] 9.15 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:32:27.495388+0000 osd.1 (osd.1) 207 : cluster [DBG] 9.15 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 207) v1
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:32:27.463674+0000 osd.1 (osd.1) 206 : cluster [DBG] 9.15 scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:32:27.495388+0000 osd.1 (osd.1) 207 : cluster [DBG] 9.15 scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 1556480 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 9.1f scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 9.1f scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:58.984442+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 209 sent 207 num 2 unsent 2 sending 2
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:32:28.510218+0000 osd.1 (osd.1) 208 : cluster [DBG] 9.1f scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:32:28.545517+0000 osd.1 (osd.1) 209 : cluster [DBG] 9.1f scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 209) v1
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:32:28.510218+0000 osd.1 (osd.1) 208 : cluster [DBG] 9.1f scrub starts
Oct 11 04:55:12 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:32:28.545517+0000 osd.1 (osd.1) 209 : cluster [DBG] 9.1f scrub ok
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 1556480 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:59.984681+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1548288 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:00.984836+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1548288 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:01.984964+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 1540096 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:02.985141+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 1540096 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:03.985294+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1531904 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:04.985488+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1531904 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:05.985676+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 1523712 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:06.986128+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1531904 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:07.986250+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 1523712 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:08.986402+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 1523712 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:09.986537+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1515520 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:10.986657+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1515520 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:11.986799+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1515520 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:12.986938+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76029952 unmapped: 1507328 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:13.987053+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76029952 unmapped: 1507328 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:14.987231+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76038144 unmapped: 1499136 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:15.987403+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76038144 unmapped: 1499136 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:16.987585+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76038144 unmapped: 1499136 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:17.987773+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76046336 unmapped: 1490944 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:18.987897+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76046336 unmapped: 1490944 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:19.988100+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76054528 unmapped: 1482752 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:20.988222+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76054528 unmapped: 1482752 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:21.988417+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76054528 unmapped: 1482752 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:22.988554+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76062720 unmapped: 1474560 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:23.988671+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76062720 unmapped: 1474560 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:24.988825+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76070912 unmapped: 1466368 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:25.989000+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76070912 unmapped: 1466368 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:26.989120+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1458176 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:27.989279+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1458176 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:28.989446+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1458176 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:29.989628+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76087296 unmapped: 1449984 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:30.989790+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76087296 unmapped: 1449984 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:31.989948+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76095488 unmapped: 1441792 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:32.990179+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76095488 unmapped: 1441792 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:33.990398+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76095488 unmapped: 1441792 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:34.990689+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76103680 unmapped: 1433600 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:35.990885+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76103680 unmapped: 1433600 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:36.991026+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76111872 unmapped: 1425408 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:37.991225+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76111872 unmapped: 1425408 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:38.991413+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76120064 unmapped: 1417216 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:39.991621+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76120064 unmapped: 1417216 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:40.991771+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76128256 unmapped: 1409024 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:41.991956+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76128256 unmapped: 1409024 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:42.992165+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76128256 unmapped: 1409024 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:43.992306+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1400832 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:44.992531+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1400832 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:45.992686+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1392640 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:46.992838+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1392640 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:47.993017+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76152832 unmapped: 1384448 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:48.993192+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76152832 unmapped: 1384448 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:49.993439+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 1376256 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:50.993605+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 1376256 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:51.993796+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 1376256 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:52.993959+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1368064 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:53.994085+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1368064 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:54.994259+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 1359872 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:55.994418+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 1359872 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:56.994568+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1351680 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:57.994726+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 1343488 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:58.994849+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 1343488 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:59.995108+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1335296 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:00.995253+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1335296 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:01.995396+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1335296 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:02.995570+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 1327104 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:03.995779+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 1327104 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:04.995944+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 1318912 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:05.996133+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 1318912 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:06.996293+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 1318912 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:07.996457+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 1310720 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:08.996644+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 1310720 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:09.996849+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 1302528 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:10.997002+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 1302528 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:11.997158+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 1294336 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:12.997285+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 1294336 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:13.997452+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 1294336 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:14.997586+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1286144 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:15.997737+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1286144 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:16.997904+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1286144 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:17.998077+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 1277952 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:18.998260+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 1277952 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:19.998448+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 1269760 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:20.998567+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 1269760 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:21.998739+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76275712 unmapped: 1261568 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:22.998934+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76275712 unmapped: 1261568 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:23.999153+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76275712 unmapped: 1261568 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:24.999439+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 1253376 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:25.999707+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 1253376 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:26.999889+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76292096 unmapped: 1245184 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:28.000132+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76292096 unmapped: 1245184 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:29.000373+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 1236992 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:30.000521+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 1236992 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:31.000629+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76308480 unmapped: 1228800 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:32.000744+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76308480 unmapped: 1228800 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:33.000887+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76308480 unmapped: 1228800 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:34.001033+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76316672 unmapped: 1220608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:35.001214+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76316672 unmapped: 1220608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:36.001373+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 1212416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:37.001514+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 1212416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:38.001679+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76333056 unmapped: 1204224 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:39.001897+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76333056 unmapped: 1204224 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:40.002081+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76333056 unmapped: 1204224 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:41.002293+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 1196032 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:42.002376+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76349440 unmapped: 1187840 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:43.002570+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76349440 unmapped: 1187840 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:44.002729+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 1179648 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:45.002895+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 1179648 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:46.003042+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1171456 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:47.003182+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1171456 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:48.003363+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 1163264 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:49.003525+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 1163264 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:50.003727+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 1163264 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:51.003866+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76382208 unmapped: 1155072 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:52.004036+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76382208 unmapped: 1155072 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:53.004205+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76390400 unmapped: 1146880 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:54.004366+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76390400 unmapped: 1146880 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:55.004535+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 1138688 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:56.004720+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 1138688 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:57.004885+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 1130496 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:58.005070+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76414976 unmapped: 1122304 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:59.005230+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76414976 unmapped: 1122304 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:00.005438+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76423168 unmapped: 1114112 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:01.005598+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76423168 unmapped: 1114112 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:02.005768+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76423168 unmapped: 1114112 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:03.005959+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 1105920 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:04.006297+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 1105920 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:05.006635+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76439552 unmapped: 1097728 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:06.016326+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76439552 unmapped: 1097728 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:07.016663+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76447744 unmapped: 1089536 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:08.017010+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76447744 unmapped: 1089536 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:09.017443+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76455936 unmapped: 1081344 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:10.017728+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76455936 unmapped: 1081344 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:11.018000+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76455936 unmapped: 1081344 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:12.018261+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 1073152 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:13.018436+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 1073152 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:14.018641+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76472320 unmapped: 1064960 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:15.018799+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76472320 unmapped: 1064960 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:16.018971+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 1056768 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:17.019149+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 1056768 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:18.019286+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 1056768 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:19.019412+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 1048576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:20.019582+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 1048576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:21.019777+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 1048576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:22.019921+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 1040384 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:23.020076+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 1040384 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:24.020277+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 1032192 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:25.020455+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 1032192 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:26.020583+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 1032192 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:27.020723+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 1024000 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:28.020880+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 1024000 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:29.021013+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 1024000 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:30.021166+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 1015808 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:31.021289+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 1015808 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:32.021392+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76529664 unmapped: 1007616 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:33.021559+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76529664 unmapped: 1007616 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:34.021684+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 999424 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:35.021865+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 999424 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:36.021997+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76546048 unmapped: 991232 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:37.022150+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76546048 unmapped: 991232 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:38.022306+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76546048 unmapped: 991232 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:39.022436+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 983040 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:40.022626+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 983040 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:41.022790+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 974848 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:42.022938+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 974848 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:43.023079+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 974848 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:44.023238+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76570624 unmapped: 966656 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:45.023407+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76570624 unmapped: 966656 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:46.023557+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76578816 unmapped: 958464 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:47.023684+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76578816 unmapped: 958464 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:48.023828+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76578816 unmapped: 958464 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:49.023960+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76587008 unmapped: 950272 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:50.024143+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76587008 unmapped: 950272 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:51.024307+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 942080 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:52.024509+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 942080 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:53.024642+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 942080 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:54.024800+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76603392 unmapped: 933888 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:55.024931+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76603392 unmapped: 933888 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:56.025044+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 917504 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:57.025161+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 917504 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:58.025284+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76627968 unmapped: 909312 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:59.025469+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76627968 unmapped: 909312 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:00.025672+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76627968 unmapped: 909312 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:01.025830+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 901120 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:02.026001+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 901120 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:03.026172+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76644352 unmapped: 892928 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:04.026288+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76644352 unmapped: 892928 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:05.026451+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 884736 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:06.026572+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76644352 unmapped: 892928 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:07.026709+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76644352 unmapped: 892928 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:08.026834+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 884736 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:09.026957+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 884736 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:10.027148+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76660736 unmapped: 876544 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:11.027279+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76660736 unmapped: 876544 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:12.027482+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 868352 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:13.027639+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 868352 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:14.027785+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 868352 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:15.027931+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76677120 unmapped: 860160 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:16.028090+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76677120 unmapped: 860160 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:17.028207+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 851968 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:18.028320+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 851968 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:19.028467+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 851968 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:20.028565+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76693504 unmapped: 843776 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:21.028679+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76693504 unmapped: 843776 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:22.028855+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76701696 unmapped: 835584 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:23.028999+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76701696 unmapped: 835584 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:24.029185+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76709888 unmapped: 827392 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:25.029419+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76709888 unmapped: 827392 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:26.029617+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76709888 unmapped: 827392 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:27.029738+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76718080 unmapped: 819200 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:28.029937+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76718080 unmapped: 819200 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:29.030060+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76726272 unmapped: 811008 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:30.030666+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76726272 unmapped: 811008 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:31.030851+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76734464 unmapped: 802816 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:32.030990+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76734464 unmapped: 802816 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:33.031133+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76734464 unmapped: 802816 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:34.031304+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 794624 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:35.031388+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 794624 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:36.031538+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 786432 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:37.031681+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 786432 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:38.031826+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 786432 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:39.032011+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76759040 unmapped: 778240 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:40.032197+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76759040 unmapped: 778240 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:41.032319+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76767232 unmapped: 770048 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:42.032443+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76767232 unmapped: 770048 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:43.032570+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76775424 unmapped: 761856 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:44.032738+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76775424 unmapped: 761856 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:45.032871+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76775424 unmapped: 761856 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:46.033022+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76783616 unmapped: 753664 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:47.033236+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76783616 unmapped: 753664 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:48.033425+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76791808 unmapped: 745472 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:49.033600+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76791808 unmapped: 745472 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:50.033830+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76791808 unmapped: 745472 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:51.034049+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76800000 unmapped: 737280 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:52.034287+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76800000 unmapped: 737280 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:53.034437+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76808192 unmapped: 729088 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:54.034617+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76808192 unmapped: 729088 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:55.034737+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76816384 unmapped: 720896 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:56.034899+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76816384 unmapped: 720896 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:57.035048+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76816384 unmapped: 720896 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:58.035230+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76824576 unmapped: 712704 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:59.047566+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76824576 unmapped: 712704 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:00.047735+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76832768 unmapped: 704512 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:01.047865+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76832768 unmapped: 704512 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:02.048065+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76840960 unmapped: 696320 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:03.048227+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76840960 unmapped: 696320 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:04.048426+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76840960 unmapped: 696320 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:05.048544+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76849152 unmapped: 688128 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:06.048791+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76849152 unmapped: 688128 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:07.048995+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76857344 unmapped: 679936 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:08.049158+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76857344 unmapped: 679936 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:09.049317+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:10.049492+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76865536 unmapped: 671744 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:11.049622+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76865536 unmapped: 671744 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:12.049822+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76873728 unmapped: 663552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:13.049968+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76881920 unmapped: 655360 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:14.050163+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76881920 unmapped: 655360 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:15.050322+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76890112 unmapped: 647168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:16.050501+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76890112 unmapped: 647168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:17.050643+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76898304 unmapped: 638976 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:18.050784+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76898304 unmapped: 638976 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:19.050995+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76906496 unmapped: 630784 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:20.051231+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76906496 unmapped: 630784 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:21.051404+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76906496 unmapped: 630784 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:22.051549+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76914688 unmapped: 622592 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:23.051690+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76914688 unmapped: 622592 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:24.051879+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76914688 unmapped: 622592 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:25.052050+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76922880 unmapped: 614400 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 6872 writes, 28K keys, 6872 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 6872 writes, 1211 syncs, 5.67 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 6872 writes, 28K keys, 6872 commit groups, 1.0 writes per commit group, ingest: 19.51 MB, 0.03 MB/s
                                           Interval WAL: 6872 writes, 1211 syncs, 5.67 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5644640971f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5644640971f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5644640971f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5644640971f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5644640971f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5644640971f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5644640971f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564464097090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564464097090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564464097090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5644640971f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5644640971f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:26.052212+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76980224 unmapped: 557056 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:27.052345+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76988416 unmapped: 548864 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:28.052460+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76988416 unmapped: 548864 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:29.052584+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76996608 unmapped: 540672 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:30.052706+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76996608 unmapped: 540672 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:31.052823+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76996608 unmapped: 540672 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:32.052989+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77004800 unmapped: 532480 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:33.053129+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77004800 unmapped: 532480 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:34.053284+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77012992 unmapped: 524288 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:35.053437+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77012992 unmapped: 524288 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:36.053609+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77021184 unmapped: 516096 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:37.053735+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77021184 unmapped: 516096 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:38.053872+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77021184 unmapped: 516096 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:39.054029+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77029376 unmapped: 507904 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:40.054190+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77029376 unmapped: 507904 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:41.054302+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77037568 unmapped: 499712 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:42.054453+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77037568 unmapped: 499712 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:43.054558+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77045760 unmapped: 491520 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:44.054820+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77045760 unmapped: 491520 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:45.054967+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77045760 unmapped: 491520 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:46.055155+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 483328 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:47.055371+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 483328 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:48.055538+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 475136 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:49.055724+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 475136 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:50.055940+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77070336 unmapped: 466944 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:51.056099+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77070336 unmapped: 466944 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:52.056263+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77070336 unmapped: 466944 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:53.056388+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 458752 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:54.056538+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 458752 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:55.056696+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 458752 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:56.056904+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 450560 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:57.057073+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 450560 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:58.057212+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77094912 unmapped: 442368 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:59.057403+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77094912 unmapped: 442368 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:00.057577+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77103104 unmapped: 434176 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:01.057741+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77103104 unmapped: 434176 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:02.057871+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77103104 unmapped: 434176 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:03.058048+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77111296 unmapped: 425984 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:04.058216+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77111296 unmapped: 425984 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:05.058352+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77119488 unmapped: 417792 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:06.058521+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77119488 unmapped: 417792 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:07.058672+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77127680 unmapped: 409600 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:08.058813+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77127680 unmapped: 409600 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:09.058971+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77127680 unmapped: 409600 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:10.059177+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 401408 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:11.059321+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 401408 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:12.059496+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77144064 unmapped: 393216 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:13.059630+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77144064 unmapped: 393216 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:14.059779+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77152256 unmapped: 385024 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:15.059939+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77152256 unmapped: 385024 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:16.060193+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77160448 unmapped: 376832 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:17.060402+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77160448 unmapped: 376832 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:18.060696+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77160448 unmapped: 376832 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:19.060921+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77168640 unmapped: 368640 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:20.061203+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77168640 unmapped: 368640 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:21.061347+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77176832 unmapped: 360448 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:22.061520+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77176832 unmapped: 360448 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:23.061740+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77176832 unmapped: 360448 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:24.061902+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77185024 unmapped: 352256 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:25.062067+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77185024 unmapped: 352256 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:26.062213+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 330.620880127s of 330.648864746s, submitted: 8
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 475136 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [0,0,1])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:27.062389+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77111296 unmapped: 425984 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:28.062544+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 401408 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:29.062702+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 401408 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:30.063523+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 401408 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:31.063664+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 401408 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:32.063833+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 401408 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:33.064014+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 401408 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:34.064172+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 401408 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:35.064456+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 401408 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:36.064645+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 401408 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:37.064819+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 401408 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:38.064997+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77144064 unmapped: 393216 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:39.065190+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77144064 unmapped: 393216 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:40.065394+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77152256 unmapped: 385024 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:41.065604+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77152256 unmapped: 385024 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:42.065778+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77152256 unmapped: 385024 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:43.065926+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77160448 unmapped: 376832 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:44.068485+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77160448 unmapped: 376832 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:45.068625+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77168640 unmapped: 368640 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:46.068759+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77168640 unmapped: 368640 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:47.068893+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77176832 unmapped: 360448 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:48.069008+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77176832 unmapped: 360448 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:49.069195+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77185024 unmapped: 352256 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:50.069389+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77185024 unmapped: 352256 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:51.069591+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77185024 unmapped: 352256 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:52.069708+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77193216 unmapped: 344064 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:53.069868+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77193216 unmapped: 344064 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:54.070063+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77201408 unmapped: 335872 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:55.070237+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77201408 unmapped: 335872 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:56.070361+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77209600 unmapped: 327680 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:57.070495+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77209600 unmapped: 327680 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:58.070690+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77209600 unmapped: 327680 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:59.070840+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77217792 unmapped: 319488 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:00.071048+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77217792 unmapped: 319488 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:01.071217+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 311296 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:02.071432+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 311296 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:03.071590+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77234176 unmapped: 303104 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:04.071795+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77234176 unmapped: 303104 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:05.071990+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77234176 unmapped: 303104 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:06.072173+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 294912 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:07.072342+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 294912 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:08.072444+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 286720 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:09.072585+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 286720 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:10.072762+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 286720 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:11.072957+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77258752 unmapped: 278528 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:12.073127+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77258752 unmapped: 278528 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:13.073275+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77266944 unmapped: 270336 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:14.073424+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77266944 unmapped: 270336 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:15.073617+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77266944 unmapped: 270336 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:16.073862+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 262144 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:17.074029+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 262144 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:18.074188+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77283328 unmapped: 253952 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:19.074391+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77283328 unmapped: 253952 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:20.074614+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77283328 unmapped: 253952 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:21.074737+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77291520 unmapped: 245760 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:22.074873+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77291520 unmapped: 245760 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:23.075016+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77299712 unmapped: 237568 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:24.075170+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77299712 unmapped: 237568 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:25.075474+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 229376 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:26.075642+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 229376 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:27.075800+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 229376 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:28.075939+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 229376 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:29.076056+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 229376 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:30.076242+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 229376 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:31.076397+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 229376 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:32.076541+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 229376 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:33.076689+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 229376 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:34.076826+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 229376 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:35.076954+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 229376 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:36.077072+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 229376 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:37.077206+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 229376 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:38.077373+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 229376 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:39.077529+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 229376 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:40.077684+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 229376 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:41.077810+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77316096 unmapped: 221184 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:42.077975+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 212992 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:43.078125+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 212992 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:44.078239+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 212992 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:45.078514+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 212992 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:46.098480+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 212992 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:47.098728+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 212992 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:48.098906+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 212992 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:49.099084+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 212992 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:50.099366+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 212992 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:51.099616+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 212992 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:52.099888+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 212992 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:53.100138+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 212992 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:54.100317+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 212992 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:55.100484+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 212992 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:56.100616+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 212992 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:57.100746+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 212992 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:58.100882+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 212992 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:59.101019+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 212992 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:00.101168+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 212992 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:01.101322+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 212992 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:02.101488+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 212992 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:03.101667+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 212992 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:04.101855+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 212992 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:05.102061+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 212992 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:06.102229+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 212992 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:07.102396+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 212992 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:08.102575+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 212992 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:09.102732+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 204800 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:10.102957+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 204800 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:11.103147+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 204800 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:12.103283+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 204800 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:13.103418+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 204800 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:14.103540+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 204800 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:15.103655+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 204800 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:16.103813+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 204800 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:17.104958+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 204800 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:18.105115+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 204800 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:19.105253+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 204800 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:20.105710+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 204800 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:21.106382+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 204800 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:22.106506+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 204800 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:23.106651+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 204800 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:24.106839+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 204800 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:25.107034+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 204800 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:26.107199+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 204800 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:27.107371+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 204800 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:28.107632+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 204800 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:29.107923+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 204800 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:30.108233+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 204800 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:31.108440+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 204800 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:32.108554+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 204800 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:33.108706+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 204800 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:34.108895+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 204800 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:35.109065+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 204800 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:36.109226+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 204800 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:37.109405+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 196608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:38.109556+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 196608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:39.109723+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 196608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:40.109922+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 196608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:41.110090+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 196608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:42.110299+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 196608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:43.110400+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 196608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:44.110540+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 196608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:45.110697+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 196608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:46.110845+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 196608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:47.110994+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 196608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:48.111141+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 196608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:49.111372+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 196608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:50.111667+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 196608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:51.111977+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 196608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:52.112178+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 196608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:53.112359+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 196608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:54.112467+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 196608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:55.112593+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 196608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:56.112813+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 196608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:57.112979+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 196608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:58.113131+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 196608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:59.113474+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 196608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:00.113755+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 196608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:01.113920+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 196608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:02.114218+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 196608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:03.114425+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 196608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:04.114582+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 196608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:05.114709+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 196608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:06.114876+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 196608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:07.115012+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 196608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:08.115135+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 196608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:09.115285+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 196608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:10.115445+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 196608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:11.115616+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 188416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:12.115757+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 188416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:13.115915+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 188416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:14.116025+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 188416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:15.116150+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 188416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:16.116302+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 188416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:17.116448+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 188416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:18.116589+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 188416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:19.116763+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 188416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:20.116959+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 188416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:21.117100+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 188416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:22.117243+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 188416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:23.117394+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:24.117545+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 188416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:25.117679+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 188416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:26.117793+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 188416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:27.117936+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 188416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:28.118083+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 188416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:29.118206+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 188416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:30.118414+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 188416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:31.118564+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 188416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:32.118695+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 180224 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:33.118829+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 180224 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:34.118915+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 180224 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:35.119017+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 180224 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:36.119137+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 180224 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:37.119276+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 188416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:38.119417+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 188416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:39.119566+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 188416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:40.119709+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 188416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:41.119852+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 188416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:42.119993+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 188416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:43.120190+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 188416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:44.120404+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 188416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:45.120561+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 188416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:46.120683+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 188416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:47.120872+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 188416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:48.121036+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 188416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:49.121214+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 188416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:50.121381+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 188416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:51.121513+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 188416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:52.121675+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 188416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:53.121821+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 188416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:54.122047+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 180224 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:55.122433+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 180224 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:56.122552+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 180224 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:57.122684+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 180224 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:58.122805+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 180224 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:59.122939+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 180224 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:00.123117+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 180224 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:01.123311+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 180224 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:02.123585+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 180224 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:03.123786+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 180224 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:04.123958+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 180224 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:05.124139+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 180224 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:06.125523+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 180224 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:07.125689+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 180224 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:08.125919+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 180224 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:09.126090+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 180224 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:10.126289+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 180224 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:11.126390+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 180224 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:12.126557+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 172032 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:13.126728+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 172032 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:14.126857+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 172032 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:15.126970+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 172032 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:16.127096+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 172032 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:17.127287+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 172032 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:18.127394+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 172032 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:19.127529+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 172032 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:20.127724+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 172032 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:21.127919+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 172032 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:22.128031+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 172032 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:23.128163+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 172032 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:24.128320+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 172032 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:25.128488+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 172032 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:26.128645+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 172032 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:27.128761+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 172032 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:28.128879+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 172032 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:29.128991+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 172032 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:30.129149+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 172032 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:31.129310+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 172032 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: mgrc ms_handle_reset ms_handle_reset con 0x564464eddc00
Oct 11 04:55:12 compute-0 ceph-osd[88467]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1439243141
Oct 11 04:55:12 compute-0 ceph-osd[88467]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1439243141,v1:192.168.122.100:6801/1439243141]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: get_auth_request con 0x564467e90000 auth_method 0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: mgrc handle_mgr_configure stats_period=5
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 ms_handle_reset con 0x564466850800 session 0x564464e71860
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x5644662eb000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:32.129462+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77586432 unmapped: 999424 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:33.129605+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77586432 unmapped: 999424 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:34.129750+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77586432 unmapped: 999424 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:35.129879+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77586432 unmapped: 999424 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:36.130012+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77586432 unmapped: 999424 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:37.130137+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77594624 unmapped: 991232 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:38.130238+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77594624 unmapped: 991232 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:39.130400+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77594624 unmapped: 991232 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:40.130604+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77594624 unmapped: 991232 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:41.130740+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77594624 unmapped: 991232 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:42.131071+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77602816 unmapped: 983040 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:43.131247+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77602816 unmapped: 983040 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:44.131394+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77602816 unmapped: 983040 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:45.131504+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77602816 unmapped: 983040 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:46.131647+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77611008 unmapped: 974848 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:47.131816+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77611008 unmapped: 974848 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:48.131899+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77611008 unmapped: 974848 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:49.132023+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77611008 unmapped: 974848 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:50.132172+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77611008 unmapped: 974848 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:51.132320+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77611008 unmapped: 974848 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:52.132494+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77611008 unmapped: 974848 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:53.132640+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77611008 unmapped: 974848 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:54.132757+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77611008 unmapped: 974848 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:55.132861+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77611008 unmapped: 974848 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:56.132988+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77611008 unmapped: 974848 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:57.133118+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77619200 unmapped: 966656 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:58.133254+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77619200 unmapped: 966656 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:59.133469+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77619200 unmapped: 966656 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:00.133770+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77619200 unmapped: 966656 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:01.133916+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77619200 unmapped: 966656 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:02.134163+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77619200 unmapped: 966656 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:03.134316+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77619200 unmapped: 966656 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:04.134520+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77619200 unmapped: 966656 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:05.135105+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77619200 unmapped: 966656 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:06.135914+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77619200 unmapped: 966656 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:07.136435+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77619200 unmapped: 966656 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:08.174655+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77619200 unmapped: 966656 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:09.174851+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 958464 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:10.175115+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 958464 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:11.175294+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 958464 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:12.175403+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 958464 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:13.175709+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 958464 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:14.175901+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 958464 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:15.176120+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 958464 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:16.176315+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 958464 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:17.176563+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 958464 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:18.176721+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 958464 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:19.176863+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 958464 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:20.177039+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 958464 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:21.177249+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 958464 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:22.177543+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 958464 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:23.177664+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 958464 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:24.177826+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 958464 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:25.178001+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 958464 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:26.178137+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 958464 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:27.178319+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 958464 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:28.178566+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 958464 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:29.178733+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 958464 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:30.178954+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 958464 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:31.179123+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 958464 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:32.179248+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 950272 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:33.179431+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 950272 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:34.179549+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 950272 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:35.179718+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 950272 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:36.179893+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 950272 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:37.180043+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 950272 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:38.180207+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 950272 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:39.180450+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 950272 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:40.180633+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 950272 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:41.180828+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 950272 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:42.180990+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 950272 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:43.181113+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 950272 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:44.181271+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 950272 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:45.181450+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 950272 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:46.181597+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 950272 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:47.181702+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 950272 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:48.181846+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 950272 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:49.182052+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 950272 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:50.182256+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 950272 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:51.182416+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 950272 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:52.182577+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 950272 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:53.182726+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 950272 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:54.183555+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 950272 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:55.183704+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 950272 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:56.183877+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 942080 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:57.184011+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 942080 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:58.184169+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 942080 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:59.184414+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 942080 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:00.184611+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 942080 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:01.184780+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 942080 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:02.184940+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 942080 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:03.185125+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 942080 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:04.185291+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 942080 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:05.185442+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 942080 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:06.185587+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 942080 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:07.185727+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 942080 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:08.185886+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 942080 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:09.186048+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 942080 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:10.186226+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 942080 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:11.186388+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 942080 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:12.186531+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 942080 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:13.186692+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 942080 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:14.186867+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 942080 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:15.187049+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 942080 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:16.187178+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 942080 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:17.187306+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 942080 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:18.187427+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 942080 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:19.187570+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 942080 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:20.187733+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 942080 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:21.187859+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 942080 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:22.188010+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:23.188163+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:24.188312+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:25.188470+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:26.188625+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:27.188777+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:28.188903+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:29.189058+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:30.189511+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:31.189636+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:32.189783+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:33.189927+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:34.190060+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:35.190186+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:36.190305+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:37.190494+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:38.190689+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:39.190847+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:40.191010+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:41.191165+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:42.191444+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:43.191595+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:44.191735+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:45.191867+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:46.192000+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:47.192166+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:48.192309+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:49.192518+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:50.192732+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:51.192858+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 942080 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:52.193009+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 942080 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:53.193178+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 942080 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:54.193459+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 942080 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:55.193660+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 942080 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Oct 11 04:55:12 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:56.193813+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 942080 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:57.193939+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 942080 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:58.194081+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:59.194228+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1945847192' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:00.194394+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:01.194597+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:02.194828+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:03.194969+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:04.195108+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:05.195289+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:06.195436+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:07.195631+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:08.195755+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:09.195937+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:10.196193+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:11.196378+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77660160 unmapped: 925696 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:12.196546+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77660160 unmapped: 925696 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:13.196766+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77660160 unmapped: 925696 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:14.196962+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77660160 unmapped: 925696 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:15.197107+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77660160 unmapped: 925696 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:16.197264+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:17.197383+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:18.197515+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:19.197632+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:20.197831+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:21.197969+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:22.198125+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:23.198370+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:24.198520+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:25.198678+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:26.198846+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:27.199032+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:28.199170+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:29.199311+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:30.199532+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:31.200405+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:32.200562+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:33.200689+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:34.200814+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:35.200956+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:36.201131+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:37.201288+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:38.201448+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:39.201609+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:40.201767+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:41.201915+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:42.202071+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77660160 unmapped: 925696 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:43.202263+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77660160 unmapped: 925696 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:44.202428+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77660160 unmapped: 925696 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:45.202599+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77660160 unmapped: 925696 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:46.202751+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77660160 unmapped: 925696 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:47.202915+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77660160 unmapped: 925696 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:48.203070+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77660160 unmapped: 925696 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:49.203247+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77660160 unmapped: 925696 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:50.203479+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77660160 unmapped: 925696 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:51.203631+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77660160 unmapped: 925696 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:52.203751+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77660160 unmapped: 925696 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:53.203883+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77660160 unmapped: 925696 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:54.204019+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77660160 unmapped: 925696 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:55.204165+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77660160 unmapped: 925696 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:56.204303+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 917504 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:57.204461+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 917504 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:58.204602+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 917504 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:59.204744+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 917504 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:00.204967+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 917504 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:01.205110+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 917504 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:02.205273+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 917504 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:03.205412+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 917504 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:04.205613+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 917504 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:05.205795+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 917504 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:06.205967+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 917504 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:07.206123+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 917504 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:08.206239+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 917504 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:09.206416+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 917504 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:10.206660+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 917504 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:11.206782+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 917504 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:12.206935+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 917504 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:13.207490+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 917504 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:14.207594+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 917504 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:15.207720+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 917504 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:16.207849+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 917504 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:17.207977+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 917504 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:18.208038+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 917504 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:19.208144+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 917504 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:20.208289+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 917504 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:21.208404+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 917504 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:22.208516+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 917504 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:23.208878+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 917504 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:24.209098+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 917504 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:25.209256+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 917504 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:26.209411+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 917504 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:27.209600+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 917504 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:28.209764+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 917504 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:29.209936+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 917504 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:30.210101+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 917504 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:31.210386+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 917504 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:32.210534+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 917504 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:33.210679+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 917504 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:34.210816+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 917504 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:35.210963+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 917504 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:36.211105+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 917504 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:37.211214+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 917504 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:38.211428+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 909312 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:39.211552+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 909312 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:40.211715+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 909312 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:41.211874+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 909312 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:42.211981+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 909312 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:43.212101+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 909312 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:44.212276+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 909312 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:45.212428+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:46.212548+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 909312 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:47.212704+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 909312 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:48.212898+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 909312 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:49.213046+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 909312 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:50.213282+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 909312 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:51.213413+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 909312 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:52.213582+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 909312 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:53.213728+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 909312 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:54.214055+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 909312 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:55.214314+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 909312 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:56.214602+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 909312 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:57.214948+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 909312 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:58.215409+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 909312 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:59.215608+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 909312 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:00.215999+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77684736 unmapped: 901120 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:01.216244+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77684736 unmapped: 901120 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:02.216429+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77684736 unmapped: 901120 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:03.216751+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77684736 unmapped: 901120 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:04.216950+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77684736 unmapped: 901120 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:05.217225+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77684736 unmapped: 901120 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:06.217485+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77684736 unmapped: 901120 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:07.217867+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77684736 unmapped: 901120 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:08.218080+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77684736 unmapped: 901120 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:09.218385+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77684736 unmapped: 901120 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:10.218708+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77684736 unmapped: 901120 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:11.218924+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77684736 unmapped: 901120 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:12.219120+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77684736 unmapped: 901120 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:13.219375+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77684736 unmapped: 901120 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:14.219632+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77684736 unmapped: 901120 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:15.219852+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77684736 unmapped: 901120 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:16.220428+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77684736 unmapped: 901120 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:17.221134+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77692928 unmapped: 892928 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:18.221531+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77692928 unmapped: 892928 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:19.222406+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77692928 unmapped: 892928 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:20.222839+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77692928 unmapped: 892928 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:21.223268+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77692928 unmapped: 892928 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:22.223424+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77692928 unmapped: 892928 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:23.223636+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77692928 unmapped: 892928 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:24.223932+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77701120 unmapped: 884736 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:25.224225+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77701120 unmapped: 884736 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 7052 writes, 29K keys, 7052 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 7052 writes, 1301 syncs, 5.42 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 180 writes, 273 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
                                           Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5644640971f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5644640971f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5644640971f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5644640971f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5644640971f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5644640971f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5644640971f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564464097090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564464097090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564464097090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5644640971f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5644640971f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:26.224490+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77742080 unmapped: 843776 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:27.224797+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77742080 unmapped: 843776 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:28.225030+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77742080 unmapped: 843776 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:29.225266+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77742080 unmapped: 843776 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:30.225521+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77742080 unmapped: 843776 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:31.225744+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77742080 unmapped: 843776 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:32.225977+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77742080 unmapped: 843776 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:33.226299+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77742080 unmapped: 843776 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:34.226592+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77742080 unmapped: 843776 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:35.226901+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77742080 unmapped: 843776 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:36.227130+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77742080 unmapped: 843776 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:37.227497+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77742080 unmapped: 843776 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:38.227753+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77742080 unmapped: 843776 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:39.227976+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77742080 unmapped: 843776 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:40.228231+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77742080 unmapped: 843776 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:41.228448+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77742080 unmapped: 843776 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:42.228607+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77742080 unmapped: 843776 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:43.228759+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77742080 unmapped: 843776 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:44.228976+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77742080 unmapped: 843776 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:45.229121+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77742080 unmapped: 843776 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:46.229280+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77742080 unmapped: 843776 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:47.229459+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77750272 unmapped: 835584 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:48.229626+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77750272 unmapped: 835584 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:49.229755+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77750272 unmapped: 835584 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:50.230431+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77750272 unmapped: 835584 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:51.230546+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77758464 unmapped: 827392 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:52.230757+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77758464 unmapped: 827392 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:53.230909+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77758464 unmapped: 827392 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:54.231051+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77758464 unmapped: 827392 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:55.231180+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77758464 unmapped: 827392 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:56.231379+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77758464 unmapped: 827392 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:57.231539+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77758464 unmapped: 827392 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:58.231695+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77758464 unmapped: 827392 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:59.231802+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77758464 unmapped: 827392 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:00.231987+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77758464 unmapped: 827392 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:01.232161+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77758464 unmapped: 827392 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:02.232437+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77758464 unmapped: 827392 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:03.232579+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77758464 unmapped: 827392 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:04.232737+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77758464 unmapped: 827392 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:05.232902+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77758464 unmapped: 827392 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:06.233065+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77758464 unmapped: 827392 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:07.233244+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77758464 unmapped: 827392 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:08.233413+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77758464 unmapped: 827392 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:09.233632+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77758464 unmapped: 827392 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:10.233899+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77758464 unmapped: 827392 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:11.234070+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77758464 unmapped: 827392 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:12.234261+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77758464 unmapped: 827392 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:13.234420+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77758464 unmapped: 827392 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:14.234650+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77758464 unmapped: 827392 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:15.234817+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77758464 unmapped: 827392 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:16.235010+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77758464 unmapped: 827392 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:17.235200+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77758464 unmapped: 827392 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:18.235419+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77758464 unmapped: 827392 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:19.235593+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77758464 unmapped: 827392 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:20.235793+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77758464 unmapped: 827392 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:21.235939+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77758464 unmapped: 827392 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:22.236083+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77758464 unmapped: 827392 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:23.236239+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77758464 unmapped: 827392 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:24.236405+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77758464 unmapped: 827392 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:25.236640+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77758464 unmapped: 827392 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:26.236857+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 599.830261230s of 600.152160645s, submitted: 90
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77758464 unmapped: 827392 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:27.237027+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 991232 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:28.237194+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 958464 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:29.237436+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 958464 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:30.237636+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 958464 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:31.237820+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 958464 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:32.237977+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 958464 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:33.238105+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 958464 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:34.238250+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 958464 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:35.238389+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 958464 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:36.238561+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 958464 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:37.238740+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 958464 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:38.238854+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 958464 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:39.239007+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 958464 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:40.239218+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 958464 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:41.239401+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 958464 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:42.239585+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 958464 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:43.239723+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 958464 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:44.239841+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:45.240030+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:46.240221+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:47.240401+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:48.240572+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:49.240728+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:50.240911+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:51.241074+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:52.241232+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:53.241389+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:54.241543+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:55.241723+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:56.241951+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:57.242164+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:58.242641+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:59.242779+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:00.242970+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:01.243104+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:02.243252+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:03.243379+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:04.243522+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:05.243749+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:06.243938+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:07.244137+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:08.244372+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:09.244555+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:10.244866+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:11.245051+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:12.245264+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:13.245568+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:14.245761+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:15.245961+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:16.246141+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:17.246387+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:18.247058+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:19.247241+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:20.247452+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:21.247652+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:22.247819+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:23.247984+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:24.248134+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:25.248295+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:26.248430+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:27.248597+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:28.248759+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:29.248939+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:30.249105+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:31.249248+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:32.249433+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:33.249555+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:34.249710+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:35.249846+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:36.250009+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:37.250160+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:38.250316+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:39.250499+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:40.250673+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:41.250876+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78536704 unmapped: 1097728 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:42.251011+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78536704 unmapped: 1097728 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:43.251179+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78536704 unmapped: 1097728 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:44.251435+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78536704 unmapped: 1097728 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:45.251602+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78536704 unmapped: 1097728 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:46.251799+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78536704 unmapped: 1097728 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:47.251973+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78536704 unmapped: 1097728 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:48.252125+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78536704 unmapped: 1097728 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:49.252248+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78536704 unmapped: 1097728 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:50.252410+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78536704 unmapped: 1097728 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:51.252555+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:52.252669+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:53.252829+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:54.253010+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:55.253155+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:56.253423+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:57.253979+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:58.255024+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:59.255533+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:00.255791+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:01.256692+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:02.256944+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:03.257760+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:04.258429+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:05.258945+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:06.259394+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:07.259762+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:08.259897+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:09.260028+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:10.260218+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:11.260409+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:12.260528+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:13.260747+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:14.260903+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:15.261037+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:16.261173+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:17.261307+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:18.261457+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:19.261602+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:20.261801+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:21.262033+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:22.262179+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:23.262407+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:24.262579+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:25.262754+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:26.262937+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:27.263071+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:28.263184+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:29.263391+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:30.263656+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:31.263878+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:32.264093+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:33.264363+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:34.264551+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:35.264739+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:36.264910+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:37.265105+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:38.265241+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:39.265455+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:40.265708+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:41.265995+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:42.268500+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:43.268723+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:44.268951+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:45.269188+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:46.269454+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:47.269693+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:48.269873+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:49.270077+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:50.270390+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:51.270591+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:52.270790+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:53.271027+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:54.271179+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:55.271419+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:56.271688+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:57.271829+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:58.272011+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:59.272194+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:00.272395+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:01.272567+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:02.273818+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:03.274885+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:04.275580+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:05.276395+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:06.276602+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:07.277264+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:08.277827+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:09.278487+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:10.279012+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:11.279176+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:12.279576+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:13.279973+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:14.280285+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:15.280658+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:16.280808+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:17.280982+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:18.281125+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:19.281290+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:20.281500+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:21.281702+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:22.281837+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:23.281969+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:24.282285+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:25.282545+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:26.282694+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:27.282830+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:28.283096+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:29.283299+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:30.283605+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:31.283846+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:32.284089+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:33.284389+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:34.284589+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:35.284777+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:36.284959+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:37.285235+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:38.285499+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:39.285699+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:40.285940+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:41.286129+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:42.286285+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:43.286493+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:44.286639+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:45.286827+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:46.287012+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:47.287229+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:48.287454+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:49.287680+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:50.287880+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:51.288089+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:52.288373+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:53.288594+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:54.288774+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:55.288949+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:56.289138+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:57.289322+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:58.289547+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:59.289750+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:00.289997+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:01.290167+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:02.290438+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1081344 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:03.290623+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1081344 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:04.290883+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1081344 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:05.291133+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1081344 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:06.291323+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1081344 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:07.291585+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1081344 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:08.291780+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1081344 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:09.292061+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1081344 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:10.292269+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1081344 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:11.292463+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1081344 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:12.292664+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1081344 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:13.292831+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1081344 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:14.293026+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1081344 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:15.293254+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1081344 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:16.293448+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1081344 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:17.293630+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1081344 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:18.293808+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1081344 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:19.294047+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1081344 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:20.294306+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1081344 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:21.294581+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1081344 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:22.294752+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:23.294922+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:24.295078+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:25.295231+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564466850800
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78594048 unmapped: 1040384 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _renew_subs
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 114 handle_osd_map epochs [115,115], i have 114, src has [1,115]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 239.403076172s of 239.700897217s, submitted: 90
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:26.295406+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848401 data_alloc: 218103808 data_used: 188416
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78602240 unmapped: 1032192 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _renew_subs
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 115 handle_osd_map epochs [116,116], i have 115, src has [1,116]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:27.295537+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 17653760 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 116 heartbeat osd_stat(store_statfs(0x4fc24e000/0x0/0x4ffc00000, data 0x9206b1/0x9ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _renew_subs
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 116 handle_osd_map epochs [117,117], i have 116, src has [1,117]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 117 ms_handle_reset con 0x564466850800 session 0x56446803d4a0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:28.295678+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564467a4f400
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 79831040 unmapped: 16588800 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:29.295814+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 79904768 unmapped: 16515072 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _renew_subs
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 117 handle_osd_map epochs [118,118], i have 117, src has [1,118]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 118 ms_handle_reset con 0x564467a4f400 session 0x564467ed45a0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:30.295979+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 79904768 unmapped: 16515072 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fbdd9000/0x0/0x4ffc00000, data 0xd93de3/0xe44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:31.296124+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946426 data_alloc: 218103808 data_used: 196608
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 79904768 unmapped: 16515072 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:32.296459+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 79904768 unmapped: 16515072 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fbdd9000/0x0/0x4ffc00000, data 0xd93de3/0xe44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:33.296643+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 79904768 unmapped: 16515072 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fbdd9000/0x0/0x4ffc00000, data 0xd93de3/0xe44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:34.296820+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 79904768 unmapped: 16515072 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 118 handle_osd_map epochs [118,119], i have 118, src has [1,119]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:35.297016+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 79904768 unmapped: 16515072 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:36.297176+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 949560 data_alloc: 218103808 data_used: 200704
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 79904768 unmapped: 16515072 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 119 heartbeat osd_stat(store_statfs(0x4fbdd6000/0x0/0x4ffc00000, data 0xd95846/0xe47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:37.297435+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 79904768 unmapped: 16515072 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:38.297610+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 79904768 unmapped: 16515072 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 119 heartbeat osd_stat(store_statfs(0x4fbdd6000/0x0/0x4ffc00000, data 0xd95846/0xe47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:39.297800+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 79904768 unmapped: 16515072 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:40.298003+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 79904768 unmapped: 16515072 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 119 heartbeat osd_stat(store_statfs(0x4fbdd6000/0x0/0x4ffc00000, data 0xd95846/0xe47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:41.298147+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 949560 data_alloc: 218103808 data_used: 200704
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 79904768 unmapped: 16515072 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:42.298294+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 79904768 unmapped: 16515072 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:43.298395+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 79904768 unmapped: 16515072 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:44.298551+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 79904768 unmapped: 16515072 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:45.298688+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 79904768 unmapped: 16515072 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:46.298809+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 949560 data_alloc: 218103808 data_used: 200704
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 79912960 unmapped: 16506880 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 119 heartbeat osd_stat(store_statfs(0x4fbdd6000/0x0/0x4ffc00000, data 0xd95846/0xe47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:47.298979+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 79912960 unmapped: 16506880 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:48.299141+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 79912960 unmapped: 16506880 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:49.299300+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 119 heartbeat osd_stat(store_statfs(0x4fbdd6000/0x0/0x4ffc00000, data 0xd95846/0xe47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 79912960 unmapped: 16506880 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:50.299536+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 79912960 unmapped: 16506880 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 119 heartbeat osd_stat(store_statfs(0x4fbdd6000/0x0/0x4ffc00000, data 0xd95846/0xe47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:51.299719+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 949560 data_alloc: 218103808 data_used: 200704
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 79912960 unmapped: 16506880 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:52.299892+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 79912960 unmapped: 16506880 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:53.300014+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 79912960 unmapped: 16506880 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 119 heartbeat osd_stat(store_statfs(0x4fbdd6000/0x0/0x4ffc00000, data 0xd95846/0xe47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:54.300138+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 119 heartbeat osd_stat(store_statfs(0x4fbdd6000/0x0/0x4ffc00000, data 0xd95846/0xe47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 79912960 unmapped: 16506880 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:55.300277+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 79912960 unmapped: 16506880 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:56.300378+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 949560 data_alloc: 218103808 data_used: 200704
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 79912960 unmapped: 16506880 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:57.300526+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 119 heartbeat osd_stat(store_statfs(0x4fbdd6000/0x0/0x4ffc00000, data 0xd95846/0xe47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 79912960 unmapped: 16506880 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:58.300680+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 79912960 unmapped: 16506880 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:59.300860+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 79912960 unmapped: 16506880 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:00.301137+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 79912960 unmapped: 16506880 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:01.301321+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 949560 data_alloc: 218103808 data_used: 200704
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 79912960 unmapped: 16506880 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:02.301599+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 79912960 unmapped: 16506880 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 119 heartbeat osd_stat(store_statfs(0x4fbdd6000/0x0/0x4ffc00000, data 0xd95846/0xe47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:03.301730+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 119 heartbeat osd_stat(store_statfs(0x4fbdd6000/0x0/0x4ffc00000, data 0xd95846/0xe47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 79912960 unmapped: 16506880 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564467a4f800
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 38.151039124s of 38.300365448s, submitted: 47
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:04.301865+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _renew_subs
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 119 handle_osd_map epochs [120,120], i have 119, src has [1,120]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 120 ms_handle_reset con 0x564467a4f800 session 0x564467ed4960
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 79962112 unmapped: 16457728 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564466465000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:05.301961+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 120 handle_osd_map epochs [120,121], i have 120, src has [1,121]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 121 ms_handle_reset con 0x564466465000 session 0x5644662d7e00
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 16449536 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564466465c00
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:06.302123+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 966471 data_alloc: 218103808 data_used: 212992
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _renew_subs
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 121 handle_osd_map epochs [122,122], i have 121, src has [1,122]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 122 ms_handle_reset con 0x564466465c00 session 0x564467ed45a0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 80052224 unmapped: 16367616 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564466850800
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:07.302383+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _renew_subs
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 122 handle_osd_map epochs [123,123], i have 122, src has [1,123]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 123 ms_handle_reset con 0x564466850800 session 0x564468a761e0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 80142336 unmapped: 16277504 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564467a4f400
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:08.302516+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 123 handle_osd_map epochs [123,124], i have 123, src has [1,124]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 124 ms_handle_reset con 0x564467a4f400 session 0x564468a77e00
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 80199680 unmapped: 16220160 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fbdbc000/0x0/0x4ffc00000, data 0xd9fdea/0xe61000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564467a4fc00
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:09.302668+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _renew_subs
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 124 handle_osd_map epochs [125,125], i have 124, src has [1,125]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 125 ms_handle_reset con 0x564467a4fc00 session 0x564468a8da40
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 80281600 unmapped: 16138240 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:10.302913+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fbdb7000/0x0/0x4ffc00000, data 0xda198a/0xe65000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 80281600 unmapped: 16138240 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:11.303187+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995095 data_alloc: 218103808 data_used: 221184
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 80281600 unmapped: 16138240 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:12.303498+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564466465000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 80281600 unmapped: 16138240 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564466465c00
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:13.303681+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _renew_subs
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 125 handle_osd_map epochs [126,126], i have 125, src has [1,126]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 126 ms_handle_reset con 0x564466465000 session 0x564468aacd20
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 80306176 unmapped: 16113664 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 126 ms_handle_reset con 0x564466465c00 session 0x564468a8da40
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:14.303880+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564466850800
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.721569061s of 10.287688255s, submitted: 141
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 126 handle_osd_map epochs [126,127], i have 126, src has [1,127]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 80207872 unmapped: 16211968 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:15.304052+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 127 ms_handle_reset con 0x564466850800 session 0x564468aad0e0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564467a4f400
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x5644662ea000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 16097280 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fbdb3000/0x0/0x4ffc00000, data 0xda4279/0xe68000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _renew_subs
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 127 handle_osd_map epochs [128,128], i have 127, src has [1,128]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 128 ms_handle_reset con 0x564467a4f400 session 0x56446882e5a0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:16.304408+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 128 ms_handle_reset con 0x5644662ea000 session 0x564468ab1e00
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001955 data_alloc: 218103808 data_used: 241664
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x5644662ea000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564466465000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 80363520 unmapped: 16056320 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 128 handle_osd_map epochs [128,129], i have 128, src has [1,129]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 129 heartbeat osd_stat(store_statfs(0x4fbdb0000/0x0/0x4ffc00000, data 0xda5836/0xe69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564466465c00
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 129 ms_handle_reset con 0x564466465c00 session 0x5644682490e0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564466850800
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 129 ms_handle_reset con 0x564466850800 session 0x564467d361e0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:17.304777+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 129 ms_handle_reset con 0x5644662ea000 session 0x564467ed41e0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564467a4f400
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 24387584 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564465679400
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _renew_subs
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 129 handle_osd_map epochs [130,130], i have 129, src has [1,130]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:18.304902+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 130 ms_handle_reset con 0x564467a4f400 session 0x564468ab1a40
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564465a70400
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 80510976 unmapped: 24305664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _renew_subs
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 130 handle_osd_map epochs [131,131], i have 130, src has [1,131]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 131 ms_handle_reset con 0x564465679400 session 0x56446882ef00
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 131 ms_handle_reset con 0x564466465000 session 0x564467f741e0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:19.305066+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564465679400
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 131 ms_handle_reset con 0x564465679400 session 0x564468becf00
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 131 ms_handle_reset con 0x564465a70400 session 0x564468ab0780
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x5644662ea000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 131 ms_handle_reset con 0x5644662ea000 session 0x564468bf30e0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 80551936 unmapped: 24264704 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 131 handle_osd_map epochs [131,132], i have 131, src has [1,132]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:20.305295+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564466465c00
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564466850800
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 80568320 unmapped: 24248320 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _renew_subs
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 132 handle_osd_map epochs [133,133], i have 132, src has [1,133]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:21.305489+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1022289 data_alloc: 218103808 data_used: 258048
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 133 ms_handle_reset con 0x564466850800 session 0x564468bed0e0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 133 ms_handle_reset con 0x564466465c00 session 0x564468bf34a0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 82771968 unmapped: 22044672 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564465679400
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564465a70400
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _renew_subs
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 133 handle_osd_map epochs [134,134], i have 133, src has [1,134]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:22.305657+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fb999000/0x0/0x4ffc00000, data 0xdadab8/0xe72000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 134 ms_handle_reset con 0x564465679400 session 0x564468c0e000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 82804736 unmapped: 22011904 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _renew_subs
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:23.305814+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 135 ms_handle_reset con 0x564465a70400 session 0x564468c063c0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 82870272 unmapped: 21946368 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:24.306054+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 82870272 unmapped: 21946368 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:25.306237+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 135 handle_osd_map epochs [136,136], i have 135, src has [1,136]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.401144028s of 11.352249146s, submitted: 362
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 82911232 unmapped: 21905408 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:26.306401+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x5644662ea000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028392 data_alloc: 218103808 data_used: 262144
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _renew_subs
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 137 ms_handle_reset con 0x5644662ea000 session 0x564468c0e780
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 82919424 unmapped: 21897216 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:27.306590+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 82919424 unmapped: 21897216 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:28.306758+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fb98e000/0x0/0x4ffc00000, data 0xdb4954/0xe7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564466465000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 82919424 unmapped: 21897216 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:29.306895+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fb98e000/0x0/0x4ffc00000, data 0xdb4954/0xe7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 137 handle_osd_map epochs [137,138], i have 137, src has [1,138]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 138 ms_handle_reset con 0x564466465000 session 0x564468c0eb40
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 82919424 unmapped: 21897216 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:30.307049+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564465679400
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _renew_subs
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 82935808 unmapped: 21880832 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 139 handle_osd_map epochs [139,140], i have 139, src has [1,140]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:31.307197+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1043685 data_alloc: 218103808 data_used: 274432
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 140 ms_handle_reset con 0x564465679400 session 0x564468c0f2c0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564465a70400
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 140 ms_handle_reset con 0x564465a70400 session 0x564468f20d20
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x5644662ea000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564466465c00
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 82952192 unmapped: 21864448 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _renew_subs
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 140 handle_osd_map epochs [141,141], i have 140, src has [1,141]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:32.307315+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 82984960 unmapped: 21831680 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _renew_subs
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 141 handle_osd_map epochs [142,142], i have 141, src has [1,142]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 142 ms_handle_reset con 0x564466465c00 session 0x564467ccb4a0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 142 ms_handle_reset con 0x5644662ea000 session 0x564468f20f00
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:33.307509+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564467a4f400
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 142 ms_handle_reset con 0x564467a4f400 session 0x564468f210e0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 82993152 unmapped: 21823488 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:34.307695+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 82993152 unmapped: 21823488 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:35.307828+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fb97e000/0x0/0x4ffc00000, data 0xdbd38a/0xe8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 82993152 unmapped: 21823488 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:36.307954+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fb97e000/0x0/0x4ffc00000, data 0xdbd38a/0xe8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051758 data_alloc: 218103808 data_used: 274432
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 82993152 unmapped: 21823488 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:37.308163+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 82993152 unmapped: 21823488 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:38.308364+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 82993152 unmapped: 21823488 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:39.308507+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 82993152 unmapped: 21823488 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:40.308683+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 142 handle_osd_map epochs [143,143], i have 142, src has [1,143]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.345084190s of 14.652987480s, submitted: 90
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83001344 unmapped: 21815296 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:41.308858+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053692 data_alloc: 218103808 data_used: 274432
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fb97c000/0x0/0x4ffc00000, data 0xdbee25/0xe91000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564465679400
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 143 ms_handle_reset con 0x564465679400 session 0x564468f214a0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83001344 unmapped: 21815296 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:42.308989+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564465a70400
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 143 ms_handle_reset con 0x564465a70400 session 0x564468f21680
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83001344 unmapped: 21815296 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:43.309116+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x5644662ea000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 143 ms_handle_reset con 0x5644662ea000 session 0x564468f21860
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564466465c00
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83001344 unmapped: 21815296 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:44.309283+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 143 handle_osd_map epochs [144,144], i have 143, src has [1,144]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83001344 unmapped: 21815296 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 144 handle_osd_map epochs [144,145], i have 144, src has [1,145]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 145 ms_handle_reset con 0x564466465c00 session 0x564468f21a40
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:45.309516+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564465a70800
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564465a70000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 145 heartbeat osd_stat(store_statfs(0x4fb979000/0x0/0x4ffc00000, data 0xdc09a2/0xe94000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83320832 unmapped: 21495808 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:46.309647+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1065226 data_alloc: 218103808 data_used: 278528
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83320832 unmapped: 21495808 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:47.309785+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83320832 unmapped: 21495808 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:48.309911+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 145 heartbeat osd_stat(store_statfs(0x4fb950000/0x0/0x4ffc00000, data 0xde6583/0xebc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83320832 unmapped: 21495808 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:49.310061+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564467fc2000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83329024 unmapped: 21487616 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _renew_subs
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 145 handle_osd_map epochs [146,146], i have 145, src has [1,146]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564468a88000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 146 ms_handle_reset con 0x564467fc2000 session 0x564465cfe780
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:50.310244+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb94d000/0x0/0x4ffc00000, data 0xde817a/0xec0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83329024 unmapped: 21487616 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _renew_subs
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 146 handle_osd_map epochs [147,147], i have 146, src has [1,147]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.499062538s of 10.602461815s, submitted: 55
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 147 ms_handle_reset con 0x564468a88000 session 0x564468a77a40
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:51.310378+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1072974 data_alloc: 218103808 data_used: 282624
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83329024 unmapped: 21487616 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564467fc2000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:52.310501+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 147 ms_handle_reset con 0x564467fc2000 session 0x5644659aa780
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564465679400
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83337216 unmapped: 21479424 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 147 ms_handle_reset con 0x564465679400 session 0x564467fe0000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564465a70400
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:53.310600+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fb949000/0x0/0x4ffc00000, data 0xde9cf7/0xec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83353600 unmapped: 21463040 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 147 ms_handle_reset con 0x564465a70400 session 0x5644675abc20
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:54.310708+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83353600 unmapped: 21463040 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:55.310887+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x5644662ea000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _renew_subs
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 147 handle_osd_map epochs [148,148], i have 147, src has [1,148]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83099648 unmapped: 21716992 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 148 handle_osd_map epochs [148,149], i have 148, src has [1,149]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fb948000/0x0/0x4ffc00000, data 0xdeb886/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:56.311025+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 149 ms_handle_reset con 0x5644662ea000 session 0x5644679674a0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1076434 data_alloc: 218103808 data_used: 282624
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564467d5b400
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 21725184 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:57.311200+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83107840 unmapped: 21708800 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _renew_subs
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 149 handle_osd_map epochs [150,150], i have 149, src has [1,150]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 150 ms_handle_reset con 0x564467d5b400 session 0x564466798000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:58.311300+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83083264 unmapped: 21733376 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fb93e000/0x0/0x4ffc00000, data 0xdef46e/0xece000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:59.311455+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83083264 unmapped: 21733376 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:00.311611+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 150 handle_osd_map epochs [151,151], i have 150, src has [1,151]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb93e000/0x0/0x4ffc00000, data 0xdef46e/0xece000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 21725184 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:01.311764+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1088920 data_alloc: 218103808 data_used: 286720
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb93c000/0x0/0x4ffc00000, data 0xdf0f09/0xed1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 21725184 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x5644662ea000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 151 ms_handle_reset con 0x5644662ea000 session 0x564468bf3860
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:02.311894+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564465679400
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.009598732s of 11.205535889s, submitted: 72
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 151 ms_handle_reset con 0x564465679400 session 0x56446882ed20
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564465a70400
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 151 ms_handle_reset con 0x564465a70400 session 0x564468bec960
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564467fc2000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 151 ms_handle_reset con 0x564467fc2000 session 0x564468248f00
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564467fc2000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 151 ms_handle_reset con 0x564467fc2000 session 0x564467ed52c0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564465679400
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 151 ms_handle_reset con 0x564465679400 session 0x564468a23680
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83075072 unmapped: 21741568 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x5644662ea000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 151 ms_handle_reset con 0x5644662ea000 session 0x564468249c20
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:03.312013+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564467d5b400
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 151 ms_handle_reset con 0x564467d5b400 session 0x564468aacf00
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83075072 unmapped: 21741568 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564466465c00
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 151 ms_handle_reset con 0x564466465c00 session 0x564468c0f4a0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb93c000/0x0/0x4ffc00000, data 0xdf0f09/0xed1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564466465c00
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:04.312137+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 151 ms_handle_reset con 0x564466465c00 session 0x564468a22f00
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564465679400
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x5644662ea000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83378176 unmapped: 21438464 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:05.312416+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83378176 unmapped: 21438464 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:06.312571+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1093274 data_alloc: 218103808 data_used: 299008
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83378176 unmapped: 21438464 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:07.312736+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb918000/0x0/0x4ffc00000, data 0xe14f19/0xef6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83378176 unmapped: 21438464 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:08.312880+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564467d5b400
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 151 ms_handle_reset con 0x564467d5b400 session 0x5644682481e0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564467fc2000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _renew_subs
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 151 handle_osd_map epochs [152,152], i have 151, src has [1,152]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 152 ms_handle_reset con 0x564467fc2000 session 0x564465a652c0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564468be4000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564468be4400
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 152 ms_handle_reset con 0x564468be4000 session 0x564467cd45a0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 152 ms_handle_reset con 0x564468be4400 session 0x56446882e1e0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564468be4400
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 152 ms_handle_reset con 0x564468be4400 session 0x564467cce960
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564466465c00
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83369984 unmapped: 21446656 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:09.312991+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 152 ms_handle_reset con 0x564466465c00 session 0x56446561ef00
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564467d5b400
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _renew_subs
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 152 handle_osd_map epochs [153,153], i have 152, src has [1,153]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 153 heartbeat osd_stat(store_statfs(0x4fb90f000/0x0/0x4ffc00000, data 0xe18667/0xefc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 21430272 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:10.313120+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _renew_subs
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 153 handle_osd_map epochs [154,154], i have 153, src has [1,154]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 154 ms_handle_reset con 0x564467d5b400 session 0x56446561e3c0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 21430272 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:11.313268+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564467fc2000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1107114 data_alloc: 218103808 data_used: 307200
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 154 ms_handle_reset con 0x564467fc2000 session 0x5644680e8f00
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564468be4000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 154 ms_handle_reset con 0x564468be4000 session 0x564467d37680
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564466465c00
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 154 ms_handle_reset con 0x564466465c00 session 0x56446678bc20
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 21381120 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:12.313411+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564467d5b400
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.190019608s of 10.323541641s, submitted: 41
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 154 ms_handle_reset con 0x564467d5b400 session 0x56446678be00
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 21364736 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:13.313512+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564467fc2000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _renew_subs
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 154 handle_osd_map epochs [155,155], i have 154, src has [1,155]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 155 ms_handle_reset con 0x564467fc2000 session 0x5644679365a0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 21323776 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:14.313633+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 155 heartbeat osd_stat(store_statfs(0x4fb90f000/0x0/0x4ffc00000, data 0xe1a200/0xeff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 21323776 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 155 ms_handle_reset con 0x564465679400 session 0x564468c07860
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 155 ms_handle_reset con 0x5644662ea000 session 0x564468bf34a0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:15.313793+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564465679400
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _renew_subs
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 155 handle_osd_map epochs [156,156], i have 155, src has [1,156]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 156 handle_osd_map epochs [156,157], i have 156, src has [1,157]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _renew_subs
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 157 handle_osd_map epochs [157,157], i have 157, src has [1,157]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 21291008 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:16.314079+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 157 ms_handle_reset con 0x564465679400 session 0x564468aad0e0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1112227 data_alloc: 218103808 data_used: 315392
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 157 heartbeat osd_stat(store_statfs(0x4fb92a000/0x0/0x4ffc00000, data 0xdfb40a/0xee2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 21274624 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:17.314600+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 21250048 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:18.314999+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 157 heartbeat osd_stat(store_statfs(0x4fb92a000/0x0/0x4ffc00000, data 0xdfb40a/0xee2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83599360 unmapped: 21217280 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:19.315236+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 157 ms_handle_reset con 0x564465a70800 session 0x564468f21e00
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 157 ms_handle_reset con 0x564465a70000 session 0x564468201e00
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83599360 unmapped: 21217280 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:20.315668+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 157 handle_osd_map epochs [158,158], i have 157, src has [1,158]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x5644662ea000
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 158 ms_handle_reset con 0x5644662ea000 session 0x564468bf3e00
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 21168128 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:21.315992+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111632 data_alloc: 218103808 data_used: 315392
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 21168128 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:22.316315+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564466465c00
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 158 ms_handle_reset con 0x564466465c00 session 0x564468bed4a0
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564465679400
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.668135643s of 10.053568840s, submitted: 175
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 21168128 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:23.316624+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 158 handle_osd_map epochs [158,159], i have 158, src has [1,159]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 159 ms_handle_reset con 0x564465679400 session 0x564468aad860
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 159 heartbeat osd_stat(store_statfs(0x4fb94e000/0x0/0x4ffc00000, data 0xdd8e4d/0xebf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 21168128 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:24.316885+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 21168128 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:25.317174+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 21168128 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:26.317407+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1113726 data_alloc: 218103808 data_used: 311296
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 159 heartbeat osd_stat(store_statfs(0x4fb94b000/0x0/0x4ffc00000, data 0xddaa1e/0xec2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 21168128 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:27.317623+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 21168128 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:28.317824+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 21168128 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:29.317982+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 21168128 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:30.318171+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 21168128 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:31.318661+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1113726 data_alloc: 218103808 data_used: 311296
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 159 handle_osd_map epochs [160,160], i have 159, src has [1,160]
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 21168128 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:32.319299+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb948000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 21168128 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:33.319868+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 21168128 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:34.320036+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 21168128 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:35.320591+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb948000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 21168128 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:36.320981+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1116700 data_alloc: 218103808 data_used: 311296
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 21168128 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:37.321199+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 21168128 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:38.321418+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 21168128 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:39.321643+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 21168128 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:40.321871+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb948000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 21168128 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:41.322024+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1116700 data_alloc: 218103808 data_used: 311296
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 21168128 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:42.322197+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb948000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 21168128 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:43.322785+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 21168128 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:44.322907+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 21168128 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:45.323070+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:46.323280+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 21168128 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1116700 data_alloc: 218103808 data_used: 311296
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:47.323393+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 21168128 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb948000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:48.323538+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 21168128 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:49.323733+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 21168128 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:50.323885+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 21168128 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:51.324027+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 21168128 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb948000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1116700 data_alloc: 218103808 data_used: 311296
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb948000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:52.324209+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 21168128 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:53.324408+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 21168128 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:54.324562+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 21168128 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:55.324730+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 21168128 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:56.324911+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 21168128 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1116700 data_alloc: 218103808 data_used: 311296
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:57.325111+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb948000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 21159936 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:58.325232+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 21159936 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:59.325402+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 21159936 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:00.325537+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 21159936 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:01.325666+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 21159936 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1116700 data_alloc: 218103808 data_used: 311296
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb948000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:02.325813+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 21159936 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:03.325992+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 21159936 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:04.326156+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 21159936 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb948000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:05.326383+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 21159936 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb948000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:06.326541+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 21159936 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1116700 data_alloc: 218103808 data_used: 311296
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:07.326720+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 21159936 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:08.326885+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb948000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 21159936 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:09.327077+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 21159936 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:10.327278+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 21159936 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:11.327433+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 21159936 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1116700 data_alloc: 218103808 data_used: 311296
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:12.327620+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 21151744 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb948000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:13.327817+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 21151744 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:14.327977+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 21151744 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:15.328119+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 21151744 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:16.328324+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 21151744 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb948000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1116700 data_alloc: 218103808 data_used: 311296
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:17.328562+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 21151744 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:18.328764+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 21151744 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:19.328929+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb948000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 21151744 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:20.329433+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 21151744 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:21.329911+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 21151744 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1116700 data_alloc: 218103808 data_used: 311296
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:22.330147+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 21151744 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:23.330912+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 21151744 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:24.331686+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 21151744 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb948000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:25.332124+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 21151744 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:26.332279+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 21151744 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1116700 data_alloc: 218103808 data_used: 311296
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:27.332452+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 21151744 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:28.335494+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 21151744 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:29.335677+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb948000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 21151744 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:30.335881+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 21151744 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:31.336027+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 21151744 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb948000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1116700 data_alloc: 218103808 data_used: 311296
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:32.336162+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 21151744 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:33.336285+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 21151744 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:34.336590+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 21151744 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb948000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:35.336734+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 21151744 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:36.336853+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 21151744 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:12 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:12 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1116700 data_alloc: 218103808 data_used: 311296
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:37.336989+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 21151744 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:38.337111+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 21151744 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:39.337263+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb948000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 04:55:12 compute-0 ceph-osd[88467]: do_command 'config diff' '{prefix=config diff}'
Oct 11 04:55:12 compute-0 ceph-osd[88467]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 11 04:55:12 compute-0 ceph-osd[88467]: do_command 'config show' '{prefix=config show}'
Oct 11 04:55:12 compute-0 ceph-osd[88467]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 20684800 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: do_command 'counter dump' '{prefix=counter dump}'
Oct 11 04:55:12 compute-0 ceph-osd[88467]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 11 04:55:12 compute-0 ceph-osd[88467]: do_command 'counter schema' '{prefix=counter schema}'
Oct 11 04:55:12 compute-0 ceph-osd[88467]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:40.337438+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84303872 unmapped: 20512768 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 04:55:12 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:41.337556+0000)
Oct 11 04:55:12 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20463616 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:12 compute-0 ceph-osd[88467]: do_command 'log dump' '{prefix=log dump}'
Oct 11 04:55:12 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Oct 11 04:55:12 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/350264489' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 11 04:55:12 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0) v1
Oct 11 04:55:12 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1167568883' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct 11 04:55:12 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1029: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:55:12 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0) v1
Oct 11 04:55:12 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1409743120' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 11 04:55:12 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Oct 11 04:55:12 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1122222715' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 11 04:55:13 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/1945847192' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 11 04:55:13 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/350264489' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 11 04:55:13 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/1167568883' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct 11 04:55:13 compute-0 ceph-mon[74243]: pgmap v1029: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:55:13 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/1409743120' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 11 04:55:13 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/1122222715' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 11 04:55:13 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0) v1
Oct 11 04:55:13 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3133784676' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct 11 04:55:13 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd utilization"} v 0) v1
Oct 11 04:55:13 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2958417708' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct 11 04:55:13 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.14863 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 04:55:13 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.14865 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:55:14 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/3133784676' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct 11 04:55:14 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/2958417708' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct 11 04:55:14 compute-0 ceph-mon[74243]: from='client.14863 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 04:55:14 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.14867 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 04:55:14 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.14869 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:55:14 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.14871 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 04:55:14 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.14875 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 04:55:14 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1030: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:55:15 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "quorum_status"} v 0) v1
Oct 11 04:55:15 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/84483454' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct 11 04:55:15 compute-0 ceph-mon[74243]: from='client.14865 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:55:15 compute-0 ceph-mon[74243]: from='client.14867 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 04:55:15 compute-0 ceph-mon[74243]: from='client.14869 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:55:15 compute-0 ceph-mon[74243]: from='client.14871 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 04:55:15 compute-0 ceph-mon[74243]: from='client.14875 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 04:55:15 compute-0 ceph-mon[74243]: pgmap v1030: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:55:15 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/84483454' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct 11 04:55:15 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.14879 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 04:55:15 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:55:15 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.14883 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 04:55:15 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions"} v 0) v1
Oct 11 04:55:15 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1046556030' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 11 04:55:15 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.14885 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 04:55:15 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0) v1
Oct 11 04:55:15 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1021474989' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 11 04:55:16 compute-0 ceph-mon[74243]: from='client.14879 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 04:55:16 compute-0 ceph-mon[74243]: from='client.14883 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 04:55:16 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/1046556030' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 11 04:55:16 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/1021474989' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 11 04:55:16 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0) v1
Oct 11 04:55:16 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2223956878' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct 11 04:55:16 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 11 04:55:16 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 62 pg[6.d( v 38'39 lc 34'13 (0'0,38'39] local-lis/les=61/62 n=1 ec=47/21 lis/c=61/53 les/c/f=62/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=38'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/Activating 0.012870 4 0.000708
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 62 pg[6.d( v 38'39 lc 34'13 (0'0,38'39] local-lis/les=61/62 n=1 ec=47/21 lis/c=61/53 les/c/f=62/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=38'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 62 pg[6.5( v 38'39 (0'0,38'39] local-lis/les=61/62 n=2 ec=47/21 lis/c=61/53 les/c/f=62/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=38'39 mlcod 38'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.069563 3 0.000130
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 62 pg[6.5( v 38'39 (0'0,38'39] local-lis/les=61/62 n=2 ec=47/21 lis/c=61/53 les/c/f=62/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=38'39 mlcod 38'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 62 pg[6.5( v 38'39 (0'0,38'39] local-lis/les=61/62 n=2 ec=47/21 lis/c=61/53 les/c/f=62/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=38'39 mlcod 38'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000021 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 62 pg[6.5( v 38'39 (0'0,38'39] local-lis/les=61/62 n=2 ec=47/21 lis/c=61/53 les/c/f=62/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=38'39 mlcod 38'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 62 pg[6.d( v 38'39 lc 34'13 (0'0,38'39] local-lis/les=61/62 n=1 ec=47/21 lis/c=61/53 les/c/f=62/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=38'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.061728 2 0.000134
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 62 pg[6.d( v 38'39 lc 34'13 (0'0,38'39] local-lis/les=61/62 n=1 ec=47/21 lis/c=61/53 les/c/f=62/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=38'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 62 pg[6.d( v 38'39 lc 34'13 (0'0,38'39] local-lis/les=61/62 n=1 ec=47/21 lis/c=61/53 les/c/f=62/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=38'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000010 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 62 pg[6.d( v 38'39 lc 34'13 (0'0,38'39] local-lis/les=61/62 n=1 ec=47/21 lis/c=61/53 les/c/f=62/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=38'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Recovering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 62 pg[6.d( v 38'39 (0'0,38'39] local-lis/les=61/62 n=1 ec=47/21 lis/c=61/53 les/c/f=62/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=38'39 mlcod 38'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.140097 1 0.000119
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 62 pg[6.d( v 38'39 (0'0,38'39] local-lis/les=61/62 n=1 ec=47/21 lis/c=61/53 les/c/f=62/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=38'39 mlcod 38'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 62 pg[6.d( v 38'39 (0'0,38'39] local-lis/les=61/62 n=1 ec=47/21 lis/c=61/53 les/c/f=62/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=38'39 mlcod 38'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000018 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 62 pg[6.d( v 38'39 (0'0,38'39] local-lis/les=61/62 n=1 ec=47/21 lis/c=61/53 les/c/f=62/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=38'39 mlcod 38'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:55.879274+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 647353 data_alloc: 218103808 data_used: 40960
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 66052096 unmapped: 999424 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:56.879543+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 1024000 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:57.879723+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 66035712 unmapped: 1015808 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 62 handle_osd_map epochs [63,64], i have 62, src has [1,64]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.985850334s of 10.225222588s, submitted: 98
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 64 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=57/58 n=7 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=57) [0] r=0 lpr=57 crt=41'581 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 10.214218 15 0.000070
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 64 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=57/58 n=7 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=57) [0] r=0 lpr=57 crt=41'581 mlcod 0'0 active mbc={}] exit Started/Primary/Active 10.223853 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 64 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=57/58 n=7 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=57) [0] r=0 lpr=57 crt=41'581 mlcod 0'0 active mbc={}] exit Started/Primary 11.237412 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 64 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=57) [0] r=0 lpr=57 crt=41'581 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 10.213833 15 0.000162
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 64 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=57) [0] r=0 lpr=57 crt=41'581 mlcod 0'0 active mbc={}] exit Started/Primary/Active 10.223698 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 64 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=57) [0] r=0 lpr=57 crt=41'581 mlcod 0'0 active mbc={}] exit Started/Primary 11.237629 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 64 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=57) [0] r=0 lpr=57 crt=41'581 mlcod 0'0 active mbc={}] exit Started 11.237663 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 64 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=57) [0] r=0 lpr=57 crt=41'581 mlcod 0'0 active mbc={}] enter Reset
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 64 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=57/58 n=7 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=57) [0] r=0 lpr=57 crt=41'581 mlcod 0'0 active mbc={}] exit Started 11.237467 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.17] failed. State was: unregistering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 64 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=57/58 n=7 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=57) [0] r=0 lpr=57 crt=41'581 mlcod 0'0 active mbc={}] enter Reset
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 64 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=57/58 n=7 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=57) [0] r=0 lpr=57 crt=41'581 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 10.214693 15 0.000069
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.7] failed. State was: unregistering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 64 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=57/58 n=7 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=57) [0] r=0 lpr=57 crt=41'581 mlcod 0'0 active mbc={}] exit Started/Primary/Active 10.223753 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 64 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=57/58 n=7 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=57) [0] r=0 lpr=57 crt=41'581 mlcod 0'0 active mbc={}] exit Started/Primary 11.237428 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 64 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=57/58 n=7 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=57) [0] r=0 lpr=57 crt=41'581 mlcod 0'0 active mbc={}] exit Started 11.237525 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 64 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=57/58 n=7 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=57) [0] r=0 lpr=57 crt=41'581 mlcod 0'0 active mbc={}] enter Reset
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 64 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=57/58 n=7 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=64 pruub=13.785065651s) [2] r=-1 lpr=64 pi=[57,64)/1 crt=41'581 mlcod 0'0 active pruub 111.256408691s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.f] failed. State was: unregistering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.7] failed. State was: unregistering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 64 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=57/58 n=7 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=64 pruub=13.785070419s) [2] r=-1 lpr=64 pi=[57,64)/1 crt=41'581 mlcod 0'0 active pruub 111.256507874s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 64 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=57/58 n=7 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=64 pruub=13.784944534s) [2] r=-1 lpr=64 pi=[57,64)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 111.256408691s@ mbc={}] exit Reset 0.000193 1 0.000731
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.f] failed. State was: unregistering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 64 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=57/58 n=7 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=64 pruub=13.784944534s) [2] r=-1 lpr=64 pi=[57,64)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 111.256408691s@ mbc={}] enter Started
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 64 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=57/58 n=7 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=64 pruub=13.784944534s) [2] r=-1 lpr=64 pi=[57,64)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 111.256408691s@ mbc={}] enter Start
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 64 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=57/58 n=7 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=64 pruub=13.784944534s) [2] r=-1 lpr=64 pi=[57,64)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 111.256408691s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 64 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=57/58 n=7 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=64 pruub=13.785003662s) [2] r=-1 lpr=64 pi=[57,64)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 111.256507874s@ mbc={}] exit Reset 0.000117 1 0.000200
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 64 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=57/58 n=7 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=64 pruub=13.784944534s) [2] r=-1 lpr=64 pi=[57,64)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 111.256408691s@ mbc={}] exit Start 0.000016 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 64 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=57/58 n=7 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=64 pruub=13.785003662s) [2] r=-1 lpr=64 pi=[57,64)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 111.256507874s@ mbc={}] enter Started
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 64 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=57/58 n=7 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=64 pruub=13.784944534s) [2] r=-1 lpr=64 pi=[57,64)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 111.256408691s@ mbc={}] enter Started/Stray
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 64 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=57/58 n=7 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=64 pruub=13.785003662s) [2] r=-1 lpr=64 pi=[57,64)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 111.256507874s@ mbc={}] enter Start
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 64 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=57/58 n=7 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=64 pruub=13.785003662s) [2] r=-1 lpr=64 pi=[57,64)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 111.256507874s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 64 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=57/58 n=7 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=64 pruub=13.785003662s) [2] r=-1 lpr=64 pi=[57,64)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 111.256507874s@ mbc={}] exit Start 0.000012 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 64 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=57/58 n=7 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=64 pruub=13.785003662s) [2] r=-1 lpr=64 pi=[57,64)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 111.256507874s@ mbc={}] enter Started/Stray
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 64 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=64 pruub=13.785189629s) [2] r=-1 lpr=64 pi=[57,64)/1 crt=41'581 mlcod 0'0 active pruub 111.256446838s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 64 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=57) [0] r=0 lpr=57 crt=41'581 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 10.214863 15 0.000279
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 64 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=57) [0] r=0 lpr=57 crt=41'581 mlcod 0'0 active mbc={}] exit Started/Primary/Active 10.223880 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 64 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=57) [0] r=0 lpr=57 crt=41'581 mlcod 0'0 active mbc={}] exit Started/Primary 11.237829 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 64 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=57) [0] r=0 lpr=57 crt=41'581 mlcod 0'0 active mbc={}] exit Started 11.237847 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 64 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=57) [0] r=0 lpr=57 crt=41'581 mlcod 0'0 active mbc={}] enter Reset
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 64 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=64 pruub=13.784698486s) [2] r=-1 lpr=64 pi=[57,64)/1 crt=41'581 mlcod 0'0 active pruub 111.256660461s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 64 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=64 pruub=13.784675598s) [2] r=-1 lpr=64 pi=[57,64)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 111.256660461s@ mbc={}] exit Reset 0.000046 1 0.000101
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 64 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=64 pruub=13.784675598s) [2] r=-1 lpr=64 pi=[57,64)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 111.256660461s@ mbc={}] enter Started
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 64 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=64 pruub=13.784675598s) [2] r=-1 lpr=64 pi=[57,64)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 111.256660461s@ mbc={}] enter Start
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 64 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=64 pruub=13.784675598s) [2] r=-1 lpr=64 pi=[57,64)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 111.256660461s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 64 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=64 pruub=13.784675598s) [2] r=-1 lpr=64 pi=[57,64)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 111.256660461s@ mbc={}] exit Start 0.000019 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 64 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=64 pruub=13.784675598s) [2] r=-1 lpr=64 pi=[57,64)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 111.256660461s@ mbc={}] enter Started/Stray
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.17] failed. State was: unregistering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 64 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=64 pruub=13.784265518s) [2] r=-1 lpr=64 pi=[57,64)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 111.256446838s@ mbc={}] exit Reset 0.000975 1 0.001339
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 64 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=64 pruub=13.784265518s) [2] r=-1 lpr=64 pi=[57,64)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 111.256446838s@ mbc={}] enter Started
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 64 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=64 pruub=13.784265518s) [2] r=-1 lpr=64 pi=[57,64)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 111.256446838s@ mbc={}] enter Start
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 64 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=64 pruub=13.784265518s) [2] r=-1 lpr=64 pi=[57,64)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 111.256446838s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 64 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=64 pruub=13.784265518s) [2] r=-1 lpr=64 pi=[57,64)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 111.256446838s@ mbc={}] exit Start 0.000157 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 64 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=64 pruub=13.784265518s) [2] r=-1 lpr=64 pi=[57,64)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 111.256446838s@ mbc={}] enter Started/Stray
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 64 handle_osd_map epochs [61,64], i have 64, src has [1,64]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.f] failed. State was: unregistering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.17] failed. State was: unregistering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.7] failed. State was: unregistering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe106000/0x0/0x4ffc00000, data 0x544a2/0xc6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:58.879957+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 65986560 unmapped: 1064960 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 64 handle_osd_map epochs [65,65], i have 64, src has [1,65]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 65 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=57/58 n=7 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=64) [2] r=-1 lpr=64 pi=[57,64)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.023777 3 0.000084
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 65 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=57/58 n=7 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=64) [2] r=-1 lpr=64 pi=[57,64)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.023824 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 65 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=57/58 n=7 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=64) [2] r=-1 lpr=64 pi=[57,64)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 65 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=64) [2] r=-1 lpr=64 pi=[57,64)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.022767 3 0.000492
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 65 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=57/58 n=7 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=64) [2] r=-1 lpr=64 pi=[57,64)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.023863 3 0.000925
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 65 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=64) [2] r=-1 lpr=64 pi=[57,64)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.023125 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 65 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=57/58 n=7 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=64) [2] r=-1 lpr=64 pi=[57,64)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.023911 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 65 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=64) [2] r=-1 lpr=64 pi=[57,64)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 65 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=57/58 n=7 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=64) [2] r=-1 lpr=64 pi=[57,64)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 65 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=57/58 n=7 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 65 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=64) [2] r=-1 lpr=64 pi=[57,64)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.023403 3 0.000161
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 65 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 65 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=64) [2] r=-1 lpr=64 pi=[57,64)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.023460 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 65 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=64) [2] r=-1 lpr=64 pi=[57,64)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 65 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=57/58 n=7 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 65 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=57/58 n=7 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 remapped mbc={}] exit Reset 0.000076 1 0.000095
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 65 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=57/58 n=7 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 remapped mbc={}] enter Started
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 65 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=57/58 n=7 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 remapped mbc={}] enter Start
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 65 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=57/58 n=7 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 65 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 65 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=57/58 n=7 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 remapped mbc={}] exit Start 0.000008 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 65 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=57/58 n=7 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 remapped mbc={}] enter Started/Primary
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 65 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=57/58 n=7 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 65 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=57/58 n=7 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 65 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 remapped mbc={}] exit Reset 0.000606 1 0.000642
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 65 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 remapped mbc={}] enter Started
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 65 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 remapped mbc={}] enter Start
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 65 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 65 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 remapped mbc={}] exit Start 0.000009 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 65 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 remapped mbc={}] enter Started/Primary
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 65 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 65 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=57/58 n=7 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000590 1 0.000603
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 65 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 65 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=57/58 n=7 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 remapped mbc={}] exit Reset 0.000761 1 0.000788
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 65 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=57/58 n=7 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 65 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=57/58 n=7 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 remapped mbc={}] enter Started
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 65 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=57/58 n=7 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 remapped mbc={}] enter Start
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 65 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=57/58 n=7 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 65 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=57/58 n=7 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 remapped mbc={}] exit Start 0.000007 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 65 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=57/58 n=7 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 remapped mbc={}] enter Started/Primary
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 65 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=57/58 n=7 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 65 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=57/58 n=7 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 65 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=57/58 n=7 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000026 1 0.000085
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 65 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=57/58 n=7 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 65 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 remapped mbc={}] exit Reset 0.000823 1 0.000855
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 65 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=57/58 n=7 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000126 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 65 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=57/58 n=7 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000022 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 65 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 remapped mbc={}] enter Started
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 65 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=57/58 n=7 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 65 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=57/58 n=7 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 65 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 remapped mbc={}] enter Start
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 65 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000138 1 0.000069
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 65 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=57/58 n=7 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 65 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=57/58 n=7 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000007 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 65 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 65 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 65 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=57/58 n=7 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 65 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=57/58 n=7 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 65 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 remapped mbc={}] exit Start 0.000014 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 65 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 remapped mbc={}] enter Started/Primary
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 65 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 65 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 65 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000035 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 65 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 65 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000065 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 65 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000079 1 0.000095
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 65 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 65 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 65 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000036 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 65 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 65 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000008 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 65 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:59.880197+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 1040384 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 65 handle_osd_map epochs [65,66], i have 65, src has [1,66]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 65 handle_osd_map epochs [66,66], i have 66, src has [1,66]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 66 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.999679 4 0.000728
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 66 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.000465 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 66 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.000365 4 0.000245
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 66 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.000660 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 66 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 66 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 66 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=57/58 n=7 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.000500 4 0.000103
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 66 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=57/58 n=7 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.000624 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 66 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=57/58 n=7 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 66 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 activating+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Activating
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 66 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 activating+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/Activating
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 66 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=57/58 n=7 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.000776 4 0.000184
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 66 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=57/58 n=7 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.001544 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 66 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=57/58 n=7 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 66 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 activating+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/Activating
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 66 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 activating+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Activating
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 66 pg[6.8( v 38'39 (0'0,38'39] local-lis/les=47/48 n=1 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=47) [0] r=0 lpr=47 crt=38'39 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 26.506730 53 0.000151
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 66 pg[6.8( v 38'39 (0'0,38'39] local-lis/les=47/48 n=1 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=47) [0] r=0 lpr=47 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 26.515971 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 66 pg[6.8( v 38'39 (0'0,38'39] local-lis/les=47/48 n=1 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=47) [0] r=0 lpr=47 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 26.516109 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 66 pg[6.8( v 38'39 (0'0,38'39] local-lis/les=47/48 n=1 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=47) [0] r=0 lpr=47 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 26.516171 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 66 pg[6.8( v 38'39 (0'0,38'39] local-lis/les=47/48 n=1 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=47) [0] r=0 lpr=47 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: unregistering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 66 pg[6.8( v 38'39 (0'0,38'39] local-lis/les=47/48 n=1 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=66 pruub=13.493541718s) [2] r=-1 lpr=66 pi=[47,66)/1 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 112.991004944s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: unregistering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 66 pg[6.8( v 38'39 (0'0,38'39] local-lis/les=47/48 n=1 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=66 pruub=13.493486404s) [2] r=-1 lpr=66 pi=[47,66)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 112.991004944s@ mbc={}] exit Reset 0.000096 1 0.000134
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 66 pg[6.8( v 38'39 (0'0,38'39] local-lis/les=47/48 n=1 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=66 pruub=13.493486404s) [2] r=-1 lpr=66 pi=[47,66)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 112.991004944s@ mbc={}] enter Started
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 66 pg[6.8( v 38'39 (0'0,38'39] local-lis/les=47/48 n=1 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=66 pruub=13.493486404s) [2] r=-1 lpr=66 pi=[47,66)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 112.991004944s@ mbc={}] enter Start
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 66 pg[6.8( v 38'39 (0'0,38'39] local-lis/les=47/48 n=1 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=66 pruub=13.493486404s) [2] r=-1 lpr=66 pi=[47,66)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 112.991004944s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 66 pg[6.8( v 38'39 (0'0,38'39] local-lis/les=47/48 n=1 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=66 pruub=13.493486404s) [2] r=-1 lpr=66 pi=[47,66)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 112.991004944s@ mbc={}] exit Start 0.000015 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 66 pg[6.8( v 38'39 (0'0,38'39] local-lis/les=47/48 n=1 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=66 pruub=13.493486404s) [2] r=-1 lpr=66 pi=[47,66)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 112.991004944s@ mbc={}] enter Started/Stray
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: unregistering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: unregistering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 66 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 66 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 66 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 66 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 66 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/Activating 0.026335 5 0.000628
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 66 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 66 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/Activating 0.026390 5 0.000255
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 66 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 66 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/Activating 0.026644 5 0.000284
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 66 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 66 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000107 1 0.000079
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 66 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 66 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/Activating 0.027257 5 0.000817
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 66 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 66 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.050553 1 0.000021
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 66 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Recovering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 66 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 41'581 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.057104 2 0.000166
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 66 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 41'581 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 66 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.107837 1 0.000058
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 66 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 66 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.047164 1 0.000091
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 66 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/Recovering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 66 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 41'581 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.050572 2 0.000167
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 66 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.205753 1 0.000035
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 66 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 66 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 41'581 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 66 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.053441 1 0.000060
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 66 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/Recovering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 66 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 41'581 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.049083 2 0.000186
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 66 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 41'581 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 66 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.307846 1 0.000067
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 66 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 66 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.052552 1 0.000075
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 66 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Recovering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 66 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 41'581 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.032892 2 0.000138
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 66 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 41'581 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:00.880453+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 659689 data_alloc: 218103808 data_used: 61440
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 66101248 unmapped: 950272 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 66 handle_osd_map epochs [67,67], i have 66, src has [1,67]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 67 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 41'581 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.790515 1 0.000247
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 67 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 41'581 active+remapped mbc={255={}}] exit Started/Primary/Active 1.022914 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 67 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 41'581 active+remapped mbc={255={}}] exit Started/Primary 2.024485 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 67 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 41'581 active+remapped mbc={255={}}] exit Started 2.024518 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 67 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 41'581 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.602332 1 0.000111
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 67 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 41'581 active+remapped mbc={255={}}] enter Reset
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 67 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 41'581 active+remapped mbc={255={}}] exit Started/Primary/Active 1.023269 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 67 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 41'581 active+remapped mbc={255={}}] exit Started/Primary 2.023766 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 67 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 41'581 active+remapped mbc={255={}}] exit Started 2.023811 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.7] failed. State was: unregistering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 67 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 41'581 active+remapped mbc={255={}}] enter Reset
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 67 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67 pruub=15.003335953s) [2] async=[2] r=-1 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 41'581 active pruub 115.523399353s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.17] failed. State was: unregistering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 67 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67 pruub=15.003097534s) [2] async=[2] r=-1 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 41'581 active pruub 115.523208618s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.7] failed. State was: unregistering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.17] failed. State was: unregistering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 67 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67 pruub=15.003224373s) [2] r=-1 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 115.523399353s@ mbc={}] exit Reset 0.000161 1 0.000239
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 67 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67 pruub=15.003224373s) [2] r=-1 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 115.523399353s@ mbc={}] enter Started
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 67 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67 pruub=15.003015518s) [2] r=-1 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 115.523208618s@ mbc={}] exit Reset 0.000129 1 0.000191
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 67 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67 pruub=15.003224373s) [2] r=-1 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 115.523399353s@ mbc={}] enter Start
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 67 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67 pruub=15.003015518s) [2] r=-1 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 115.523208618s@ mbc={}] enter Started
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 67 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67 pruub=15.003224373s) [2] r=-1 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 115.523399353s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 67 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67 pruub=15.003015518s) [2] r=-1 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 115.523208618s@ mbc={}] enter Start
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 67 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67 pruub=15.003015518s) [2] r=-1 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 115.523208618s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 67 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67 pruub=15.003224373s) [2] r=-1 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 115.523399353s@ mbc={}] exit Start 0.000020 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 67 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67 pruub=15.003015518s) [2] r=-1 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 115.523208618s@ mbc={}] exit Start 0.000014 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 67 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67 pruub=15.003224373s) [2] r=-1 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 115.523399353s@ mbc={}] enter Started/Stray
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 67 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67 pruub=15.003015518s) [2] r=-1 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 115.523208618s@ mbc={}] enter Started/Stray
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 67 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 41'581 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.890368 1 0.000244
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 67 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 41'581 active+remapped mbc={255={}}] exit Started/Primary/Active 1.025031 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 67 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 41'581 active+remapped mbc={255={}}] exit Started/Primary 2.025687 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 67 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 41'581 active+remapped mbc={255={}}] exit Started 2.025759 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 67 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 41'581 active+remapped mbc={255={}}] enter Reset
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.f] failed. State was: unregistering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 67 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67 pruub=15.001225471s) [2] async=[2] r=-1 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 41'581 active pruub 115.523338318s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.f] failed. State was: unregistering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 67 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 41'581 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.690147 1 0.000127
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 67 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 41'581 active+remapped mbc={255={}}] exit Started/Primary/Active 1.025440 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 67 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67 pruub=15.001125336s) [2] r=-1 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 115.523338318s@ mbc={}] exit Reset 0.000331 1 0.000404
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 67 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 41'581 active+remapped mbc={255={}}] exit Started/Primary 2.026126 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 67 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67 pruub=15.001125336s) [2] r=-1 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 115.523338318s@ mbc={}] enter Started
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 67 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 41'581 active+remapped mbc={255={}}] exit Started 2.026162 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 67 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67 pruub=15.001125336s) [2] r=-1 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 115.523338318s@ mbc={}] enter Start
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 67 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=41'581 mlcod 41'581 active+remapped mbc={255={}}] enter Reset
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 67 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67 pruub=15.001125336s) [2] r=-1 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 115.523338318s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 67 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67 pruub=15.001125336s) [2] r=-1 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 115.523338318s@ mbc={}] exit Start 0.000021 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 67 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67 pruub=15.001125336s) [2] r=-1 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 115.523338318s@ mbc={}] enter Started/Stray
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 67 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67 pruub=15.001211166s) [2] async=[2] r=-1 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 41'581 active pruub 115.523513794s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 67 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67 pruub=15.001120567s) [2] r=-1 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 115.523513794s@ mbc={}] exit Reset 0.000136 1 0.000195
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 67 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67 pruub=15.001120567s) [2] r=-1 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 115.523513794s@ mbc={}] enter Started
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 67 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67 pruub=15.001120567s) [2] r=-1 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 115.523513794s@ mbc={}] enter Start
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 67 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67 pruub=15.001120567s) [2] r=-1 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 115.523513794s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 67 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67 pruub=15.001120567s) [2] r=-1 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 115.523513794s@ mbc={}] exit Start 0.000019 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 67 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67 pruub=15.001120567s) [2] r=-1 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 115.523513794s@ mbc={}] enter Started/Stray
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.f] failed. State was: unregistering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.17] failed. State was: unregistering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 67 handle_osd_map epochs [67,67], i have 67, src has [1,67]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.17] failed. State was: unregistering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.7] failed. State was: unregistering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.7] failed. State was: unregistering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.f] failed. State was: unregistering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 67 pg[6.8( v 38'39 (0'0,38'39] local-lis/les=47/48 n=1 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=-1 lpr=66 pi=[47,66)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.039817 7 0.000155
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 67 pg[6.8( v 38'39 (0'0,38'39] local-lis/les=47/48 n=1 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=-1 lpr=66 pi=[47,66)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 67 pg[6.8( v 38'39 (0'0,38'39] local-lis/les=47/48 n=1 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=-1 lpr=66 pi=[47,66)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 67 pg[6.8( v 38'39 (0'0,38'39] local-lis/les=47/48 n=1 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=-1 lpr=66 pi=[47,66)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000110 1 0.000067
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 67 pg[6.8( v 38'39 (0'0,38'39] local-lis/les=47/48 n=1 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=-1 lpr=66 pi=[47,66)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: unregistering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 67 pg[6.8( v 38'39 (0'0,38'39] lb MIN local-lis/les=47/48 n=1 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=-1 lpr=66 DELETING pi=[47,66)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.004153 1 0.000155
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 67 pg[6.8( v 38'39 (0'0,38'39] lb MIN local-lis/les=47/48 n=1 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=-1 lpr=66 pi=[47,66)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.004356 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 67 pg[6.8( v 38'39 (0'0,38'39] lb MIN local-lis/les=47/48 n=1 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=-1 lpr=66 pi=[47,66)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.044244 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: unregistering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 4.1e deep-scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 4.1e deep-scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:01.880609+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 29 sent 27 num 2 unsent 2 sending 2
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:28:31.801241+0000 osd.0 (osd.0) 28 : cluster [DBG] 4.1e deep-scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:28:31.815375+0000 osd.0 (osd.0) 29 : cluster [DBG] 4.1e deep-scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 66240512 unmapped: 1859584 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 29) v1
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:28:31.801241+0000 osd.0 (osd.0) 28 : cluster [DBG] 4.1e deep-scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:28:31.815375+0000 osd.0 (osd.0) 29 : cluster [DBG] 4.1e deep-scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _renew_subs
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 67 handle_osd_map epochs [68,68], i have 67, src has [1,68]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 68 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=-1 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.408168 6 0.000247
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 68 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=-1 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.405977 6 0.000220
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 68 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=-1 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 68 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=-1 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 68 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=-1 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 68 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=-1 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 68 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=-1 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.408288 6 0.000214
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 68 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=-1 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 68 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=-1 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 68 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=-1 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.406120 6 0.000219
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 68 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=-1 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 68 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=-1 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 68 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=-1 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.001644 2 0.000083
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 68 pg[9.17( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=-1 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.17] failed. State was: not registered w/ OSD
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 68 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=-1 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.002055 2 0.000075
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 68 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=-1 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: not registered w/ OSD
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 68 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=-1 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.002083 2 0.000149
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 68 pg[9.7( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=-1 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.7] failed. State was: not registered w/ OSD
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 68 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=-1 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.001546 2 0.001247
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 68 pg[9.f( v 41'581 (0'0,41'581] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=-1 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.f] failed. State was: not registered w/ OSD
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 68 pg[9.17( v 41'581 (0'0,41'581] lb MIN local-lis/les=65/66 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=-1 lpr=67 DELETING pi=[57,67)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.066139 2 0.000462
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 68 pg[9.17( v 41'581 (0'0,41'581] lb MIN local-lis/les=65/66 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=-1 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.068055 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 68 pg[9.17( v 41'581 (0'0,41'581] lb MIN local-lis/les=65/66 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=-1 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.476297 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.17] failed. State was: not registered w/ OSD
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:02.880833+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 31 sent 29 num 2 unsent 2 sending 2
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:28:32.773188+0000 osd.0 (osd.0) 30 : cluster [DBG] 4.1f scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:28:32.787581+0000 osd.0 (osd.0) 31 : cluster [DBG] 4.1f scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 68 pg[9.1f( v 41'581 (0'0,41'581] lb MIN local-lis/les=65/66 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=-1 lpr=67 DELETING pi=[57,67)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.110492 2 0.000303
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 68 pg[9.1f( v 41'581 (0'0,41'581] lb MIN local-lis/les=65/66 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=-1 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.112654 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 68 pg[9.1f( v 41'581 (0'0,41'581] lb MIN local-lis/les=65/66 n=6 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=-1 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.518727 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: not registered w/ OSD
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 68 pg[9.7( v 41'581 (0'0,41'581] lb MIN local-lis/les=65/66 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=-1 lpr=67 DELETING pi=[57,67)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.154685 2 0.000187
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 68 pg[9.7( v 41'581 (0'0,41'581] lb MIN local-lis/les=65/66 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=-1 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.156887 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 68 pg[9.7( v 41'581 (0'0,41'581] lb MIN local-lis/les=65/66 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=-1 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.565296 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.7] failed. State was: not registered w/ OSD
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 68 pg[9.f( v 41'581 (0'0,41'581] lb MIN local-lis/les=65/66 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=-1 lpr=67 DELETING pi=[57,67)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.206503 2 0.000152
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 68 pg[9.f( v 41'581 (0'0,41'581] lb MIN local-lis/les=65/66 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=-1 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.208470 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 68 pg[9.f( v 41'581 (0'0,41'581] lb MIN local-lis/les=65/66 n=7 ec=49/34 lis/c=65/57 les/c/f=66/58/0 sis=67) [2] r=-1 lpr=67 pi=[57,67)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.615161 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.f] failed. State was: not registered w/ OSD
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe0f6000/0x0/0x4ffc00000, data 0x5d3d4/0xd5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 66355200 unmapped: 1744896 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 68 handle_osd_map epochs [68,69], i have 68, src has [1,69]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 31) v1
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:28:32.773188+0000 osd.0 (osd.0) 30 : cluster [DBG] 4.1f scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:28:32.787581+0000 osd.0 (osd.0) 31 : cluster [DBG] 4.1f scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:03.881042+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 66355200 unmapped: 1744896 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 69 handle_osd_map epochs [69,70], i have 69, src has [1,70]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:04.881274+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 33 sent 31 num 2 unsent 2 sending 2
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:28:34.831660+0000 osd.0 (osd.0) 32 : cluster [DBG] 5.1e scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:28:34.845734+0000 osd.0 (osd.0) 33 : cluster [DBG] 5.1e scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 66371584 unmapped: 1728512 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 33) v1
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:28:34.831660+0000 osd.0 (osd.0) 32 : cluster [DBG] 5.1e scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:28:34.845734+0000 osd.0 (osd.0) 33 : cluster [DBG] 5.1e scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 70 heartbeat osd_stat(store_statfs(0x4fe0f2000/0x0/0x4ffc00000, data 0x620ab/0xd9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:05.881573+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  log_queue is 1 last_log 34 sent 33 num 1 unsent 1 sending 1
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:28:35.871117+0000 osd.0 (osd.0) 34 : cluster [DBG] 2.19 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 631640 data_alloc: 218103808 data_used: 40960
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 66404352 unmapped: 1695744 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 34) v1
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:28:35.871117+0000 osd.0 (osd.0) 34 : cluster [DBG] 2.19 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:06.881793+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  log_queue is 1 last_log 35 sent 34 num 1 unsent 1 sending 1
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:28:35.885242+0000 osd.0 (osd.0) 35 : cluster [DBG] 2.19 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 2.18 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 66338816 unmapped: 1761280 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 35) v1
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:28:35.885242+0000 osd.0 (osd.0) 35 : cluster [DBG] 2.19 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:07.882011+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 37 sent 35 num 2 unsent 2 sending 2
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:28:36.891972+0000 osd.0 (osd.0) 36 : cluster [DBG] 2.18 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:28:36.906115+0000 osd.0 (osd.0) 37 : cluster [DBG] 2.18 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 66355200 unmapped: 1744896 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 37) v1
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:28:36.891972+0000 osd.0 (osd.0) 36 : cluster [DBG] 2.18 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:28:36.906115+0000 osd.0 (osd.0) 37 : cluster [DBG] 2.18 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:08.882243+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.414249420s of 10.618768692s, submitted: 61
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 66396160 unmapped: 1703936 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x55661220f400
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _renew_subs
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 70 handle_osd_map epochs [71,71], i have 70, src has [1,71]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 71 pg[6.9(unlocked)] enter Initial
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 71 pg[6.9( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=71) [0] r=0 lpr=0 pi=[53,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000082 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 71 pg[6.9( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=71) [0] r=0 lpr=0 pi=[53,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 71 pg[6.9( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=71) [0] r=0 lpr=71 pi=[53,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000016 1 0.000028
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 71 pg[6.9( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=71) [0] r=0 lpr=71 pi=[53,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 71 pg[6.9( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=71) [0] r=0 lpr=71 pi=[53,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 71 pg[6.9( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=71) [0] r=0 lpr=71 pi=[53,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 71 pg[6.9( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=71) [0] r=0 lpr=71 pi=[53,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000009 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 71 pg[6.9( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=71) [0] r=0 lpr=71 pi=[53,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 71 pg[6.9( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=71) [0] r=0 lpr=71 pi=[53,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 71 pg[6.9( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=71) [0] r=0 lpr=71 pi=[53,71)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 71 pg[6.9( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=71) [0] r=0 lpr=71 pi=[53,71)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000247 1 0.000063
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 71 pg[6.9( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=71) [0] r=0 lpr=71 pi=[53,71)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 71 handle_osd_map epochs [71,71], i have 71, src has [1,71]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 71 pg[6.9( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=53/54 n=1 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=71) [0] r=0 lpr=71 pi=[53,71)/1 crt=38'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001048 2 0.000090
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 71 pg[6.9( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=53/54 n=1 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=71) [0] r=0 lpr=71 pi=[53,71)/1 crt=38'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 71 pg[6.9( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=53/54 n=1 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=71) [0] r=0 lpr=71 pi=[53,71)/1 crt=38'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000018 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 71 pg[6.9( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=53/54 n=1 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=71) [0] r=0 lpr=71 pi=[53,71)/1 crt=38'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 71 heartbeat osd_stat(store_statfs(0x4fe0f5000/0x0/0x4ffc00000, data 0x620ab/0xd9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 71 ms_handle_reset con 0x55661220f400 session 0x556611d21e00
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:09.882414+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 39 sent 37 num 2 unsent 2 sending 2
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:28:38.930748+0000 osd.0 (osd.0) 38 : cluster [DBG] 10.9 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:28:38.948429+0000 osd.0 (osd.0) 39 : cluster [DBG] 10.9 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 10.8 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 10.8 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 66994176 unmapped: 1105920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x55661220ec00
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 39) v1
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:28:38.930748+0000 osd.0 (osd.0) 38 : cluster [DBG] 10.9 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:28:38.948429+0000 osd.0 (osd.0) 39 : cluster [DBG] 10.9 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 71 handle_osd_map epochs [71,72], i have 71, src has [1,72]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 71 handle_osd_map epochs [72,72], i have 72, src has [1,72]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 72 pg[6.9( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=53/54 n=1 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=71) [0] r=0 lpr=71 pi=[53,71)/1 crt=38'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.013511 2 0.000092
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 72 pg[6.9( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=53/54 n=1 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=71) [0] r=0 lpr=71 pi=[53,71)/1 crt=38'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.014910 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 72 pg[6.9( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=53/54 n=1 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=71) [0] r=0 lpr=71 pi=[53,71)/1 crt=38'39 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 72 pg[6.9( v 38'39 (0'0,38'39] local-lis/les=71/72 n=1 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=71) [0] r=0 lpr=71 pi=[53,71)/1 crt=38'39 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 72 pg[6.9( v 38'39 (0'0,38'39] local-lis/les=71/72 n=1 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=71) [0] r=0 lpr=71 pi=[53,71)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 72 pg[6.9( v 38'39 (0'0,38'39] local-lis/les=71/72 n=1 ec=47/21 lis/c=71/53 les/c/f=72/54/0 sis=71) [0] r=0 lpr=71 pi=[53,71)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002498 3 0.000119
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 72 pg[6.9( v 38'39 (0'0,38'39] local-lis/les=71/72 n=1 ec=47/21 lis/c=71/53 les/c/f=72/54/0 sis=71) [0] r=0 lpr=71 pi=[53,71)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 72 pg[6.9( v 38'39 (0'0,38'39] local-lis/les=71/72 n=1 ec=47/21 lis/c=71/53 les/c/f=72/54/0 sis=71) [0] r=0 lpr=71 pi=[53,71)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000026 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 72 pg[6.9( v 38'39 (0'0,38'39] local-lis/les=71/72 n=1 ec=47/21 lis/c=71/53 les/c/f=72/54/0 sis=71) [0] r=0 lpr=71 pi=[53,71)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 72 ms_handle_reset con 0x55661220ec00 session 0x55660ffca780
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 72 pg[6.a(unlocked)] enter Initial
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 72 pg[6.a( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=55/55 les/c/f=56/56/0 sis=72) [0] r=0 lpr=0 pi=[55,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000077 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 72 pg[6.a( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=55/55 les/c/f=56/56/0 sis=72) [0] r=0 lpr=0 pi=[55,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 72 pg[6.a( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=55/55 les/c/f=56/56/0 sis=72) [0] r=0 lpr=72 pi=[55,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000046 1 0.000071
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 72 pg[6.a( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=55/55 les/c/f=56/56/0 sis=72) [0] r=0 lpr=72 pi=[55,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 72 pg[6.a( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=55/55 les/c/f=56/56/0 sis=72) [0] r=0 lpr=72 pi=[55,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 72 pg[6.a( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=55/55 les/c/f=56/56/0 sis=72) [0] r=0 lpr=72 pi=[55,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 72 pg[6.a( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=55/55 les/c/f=56/56/0 sis=72) [0] r=0 lpr=72 pi=[55,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000101 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 72 pg[6.a( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=55/55 les/c/f=56/56/0 sis=72) [0] r=0 lpr=72 pi=[55,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 72 pg[6.a( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=55/55 les/c/f=56/56/0 sis=72) [0] r=0 lpr=72 pi=[55,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 72 pg[6.a( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=55/55 les/c/f=56/56/0 sis=72) [0] r=0 lpr=72 pi=[55,72)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 72 pg[6.a( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=55/55 les/c/f=56/56/0 sis=72) [0] r=0 lpr=72 pi=[55,72)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000130 1 0.000247
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 72 pg[6.a( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=55/55 les/c/f=56/56/0 sis=72) [0] r=0 lpr=72 pi=[55,72)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 72 pg[6.a( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=55/56 n=1 ec=47/21 lis/c=55/55 les/c/f=56/56/0 sis=72) [0] r=0 lpr=72 pi=[55,72)/1 crt=38'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000778 2 0.000062
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 72 pg[6.a( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=55/56 n=1 ec=47/21 lis/c=55/55 les/c/f=56/56/0 sis=72) [0] r=0 lpr=72 pi=[55,72)/1 crt=38'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 72 pg[6.a( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=55/56 n=1 ec=47/21 lis/c=55/55 les/c/f=56/56/0 sis=72) [0] r=0 lpr=72 pi=[55,72)/1 crt=38'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000016 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 72 pg[6.a( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=55/56 n=1 ec=47/21 lis/c=55/55 les/c/f=56/56/0 sis=72) [0] r=0 lpr=72 pi=[55,72)/1 crt=38'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:10.882631+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 41 sent 39 num 2 unsent 2 sending 2
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:28:39.910197+0000 osd.0 (osd.0) 40 : cluster [DBG] 10.8 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:28:39.924302+0000 osd.0 (osd.0) 41 : cluster [DBG] 10.8 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 649353 data_alloc: 218103808 data_used: 69632
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 67256320 unmapped: 843776 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 41) v1
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:28:39.910197+0000 osd.0 (osd.0) 40 : cluster [DBG] 10.8 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:28:39.924302+0000 osd.0 (osd.0) 41 : cluster [DBG] 10.8 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 72 handle_osd_map epochs [73,73], i have 72, src has [1,73]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 73 pg[6.a( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=55/56 n=1 ec=47/21 lis/c=55/55 les/c/f=56/56/0 sis=72) [0] r=0 lpr=72 pi=[55,72)/1 crt=38'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.777339 2 0.000113
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 73 pg[6.a( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=55/56 n=1 ec=47/21 lis/c=55/55 les/c/f=56/56/0 sis=72) [0] r=0 lpr=72 pi=[55,72)/1 crt=38'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.778373 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 73 pg[6.a( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=55/56 n=1 ec=47/21 lis/c=55/55 les/c/f=56/56/0 sis=72) [0] r=0 lpr=72 pi=[55,72)/1 crt=38'39 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 73 pg[6.a( v 38'39 (0'0,38'39] local-lis/les=72/73 n=1 ec=47/21 lis/c=55/55 les/c/f=56/56/0 sis=72) [0] r=0 lpr=72 pi=[55,72)/1 crt=38'39 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 73 pg[6.a( v 38'39 (0'0,38'39] local-lis/les=72/73 n=1 ec=47/21 lis/c=55/55 les/c/f=56/56/0 sis=72) [0] r=0 lpr=72 pi=[55,72)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 73 pg[6.a( v 38'39 (0'0,38'39] local-lis/les=72/73 n=1 ec=47/21 lis/c=72/55 les/c/f=73/56/0 sis=72) [0] r=0 lpr=72 pi=[55,72)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004708 3 0.000169
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 73 pg[6.a( v 38'39 (0'0,38'39] local-lis/les=72/73 n=1 ec=47/21 lis/c=72/55 les/c/f=73/56/0 sis=72) [0] r=0 lpr=72 pi=[55,72)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 73 pg[6.a( v 38'39 (0'0,38'39] local-lis/les=72/73 n=1 ec=47/21 lis/c=72/55 les/c/f=73/56/0 sis=72) [0] r=0 lpr=72 pi=[55,72)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000021 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 73 pg[6.a( v 38'39 (0'0,38'39] local-lis/les=72/73 n=1 ec=47/21 lis/c=72/55 les/c/f=73/56/0 sis=72) [0] r=0 lpr=72 pi=[55,72)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:11.882833+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 43 sent 41 num 2 unsent 2 sending 2
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:28:40.914700+0000 osd.0 (osd.0) 42 : cluster [DBG] 5.7 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:28:40.928794+0000 osd.0 (osd.0) 43 : cluster [DBG] 5.7 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 73 handle_osd_map epochs [73,73], i have 73, src has [1,73]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 67379200 unmapped: 720896 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 43) v1
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:28:40.914700+0000 osd.0 (osd.0) 42 : cluster [DBG] 5.7 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:28:40.928794+0000 osd.0 (osd.0) 43 : cluster [DBG] 5.7 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:12.883104+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 67387392 unmapped: 712704 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 73 heartbeat osd_stat(store_statfs(0x4fe0ea000/0x0/0x4ffc00000, data 0x67523/0xe3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:13.883286+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 67387392 unmapped: 712704 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 73 heartbeat osd_stat(store_statfs(0x4fe0ea000/0x0/0x4ffc00000, data 0x67523/0xe3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:14.883464+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 45 sent 43 num 2 unsent 2 sending 2
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:28:44.823841+0000 osd.0 (osd.0) 44 : cluster [DBG] 2.1d scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:28:44.837873+0000 osd.0 (osd.0) 45 : cluster [DBG] 2.1d scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 67395584 unmapped: 704512 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 45) v1
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:28:44.823841+0000 osd.0 (osd.0) 44 : cluster [DBG] 2.1d scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:28:44.837873+0000 osd.0 (osd.0) 45 : cluster [DBG] 2.1d scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:15.883668+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 651745 data_alloc: 218103808 data_used: 69632
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 67403776 unmapped: 696320 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:16.883810+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 67411968 unmapped: 688128 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:17.883950+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 67411968 unmapped: 688128 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 73 heartbeat osd_stat(store_statfs(0x4fe0eb000/0x0/0x4ffc00000, data 0x67523/0xe3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 73 handle_osd_map epochs [74,74], i have 73, src has [1,74]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 74 pg[6.b( v 38'39 (0'0,38'39] local-lis/les=57/58 n=1 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=57) [0] r=0 lpr=57 crt=38'39 mlcod 38'39 active+clean] exit Started/Primary/Active/Clean 30.689918 47 0.000223
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 74 pg[6.b( v 38'39 (0'0,38'39] local-lis/les=57/58 n=1 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=57) [0] r=0 lpr=57 crt=38'39 mlcod 38'39 active mbc={255={}}] exit Started/Primary/Active 30.785529 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 74 pg[6.b( v 38'39 (0'0,38'39] local-lis/les=57/58 n=1 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=57) [0] r=0 lpr=57 crt=38'39 mlcod 38'39 active mbc={255={}}] exit Started/Primary 31.794911 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 74 pg[6.b( v 38'39 (0'0,38'39] local-lis/les=57/58 n=1 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=57) [0] r=0 lpr=57 crt=38'39 mlcod 38'39 active mbc={255={}}] exit Started 31.794947 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 74 pg[6.b( v 38'39 (0'0,38'39] local-lis/les=57/58 n=1 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=57) [0] r=0 lpr=57 crt=38'39 mlcod 38'39 active mbc={255={}}] enter Reset
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: unregistering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 74 pg[6.b( v 38'39 (0'0,38'39] local-lis/les=57/58 n=1 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=74 pruub=9.223915100s) [1] r=-1 lpr=74 pi=[57,74)/1 crt=38'39 mlcod 38'39 active pruub 127.256706238s@ mbc={255={}}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: unregistering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 74 pg[6.b( v 38'39 (0'0,38'39] local-lis/les=57/58 n=1 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=74 pruub=9.223128319s) [1] r=-1 lpr=74 pi=[57,74)/1 crt=38'39 mlcod 0'0 unknown NOTIFY pruub 127.256706238s@ mbc={}] exit Reset 0.000838 1 0.000923
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:18.884076+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 74 pg[6.b( v 38'39 (0'0,38'39] local-lis/les=57/58 n=1 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=74 pruub=9.223128319s) [1] r=-1 lpr=74 pi=[57,74)/1 crt=38'39 mlcod 0'0 unknown NOTIFY pruub 127.256706238s@ mbc={}] enter Started
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 74 pg[6.b( v 38'39 (0'0,38'39] local-lis/les=57/58 n=1 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=74 pruub=9.223128319s) [1] r=-1 lpr=74 pi=[57,74)/1 crt=38'39 mlcod 0'0 unknown NOTIFY pruub 127.256706238s@ mbc={}] enter Start
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 74 pg[6.b( v 38'39 (0'0,38'39] local-lis/les=57/58 n=1 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=74 pruub=9.223128319s) [1] r=-1 lpr=74 pi=[57,74)/1 crt=38'39 mlcod 0'0 unknown NOTIFY pruub 127.256706238s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 74 pg[6.b( v 38'39 (0'0,38'39] local-lis/les=57/58 n=1 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=74 pruub=9.223128319s) [1] r=-1 lpr=74 pi=[57,74)/1 crt=38'39 mlcod 0'0 unknown NOTIFY pruub 127.256706238s@ mbc={}] exit Start 0.000139 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 74 pg[6.b( v 38'39 (0'0,38'39] local-lis/les=57/58 n=1 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=74 pruub=9.223128319s) [1] r=-1 lpr=74 pi=[57,74)/1 crt=38'39 mlcod 0'0 unknown NOTIFY pruub 127.256706238s@ mbc={}] enter Started/Stray
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: unregistering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 74 handle_osd_map epochs [74,74], i have 74, src has [1,74]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: unregistering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 67436544 unmapped: 663552 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 74 handle_osd_map epochs [75,75], i have 74, src has [1,75]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.461585045s of 10.672289848s, submitted: 49
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 75 pg[6.b( v 38'39 (0'0,38'39] local-lis/les=57/58 n=1 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=74) [1] r=-1 lpr=74 pi=[57,74)/1 crt=38'39 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.729748 6 0.000525
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 75 pg[6.b( v 38'39 (0'0,38'39] local-lis/les=57/58 n=1 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=74) [1] r=-1 lpr=74 pi=[57,74)/1 crt=38'39 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 75 pg[6.b( v 38'39 (0'0,38'39] local-lis/les=57/58 n=1 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=74) [1] r=-1 lpr=74 pi=[57,74)/1 crt=38'39 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 75 pg[6.b( v 38'39 (0'0,38'39] local-lis/les=57/58 n=1 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=74) [1] r=-1 lpr=74 pi=[57,74)/1 luod=0'0 crt=38'39 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.013013 3 0.000070
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 75 pg[6.b( v 38'39 (0'0,38'39] local-lis/les=57/58 n=1 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=74) [1] r=-1 lpr=74 pi=[57,74)/1 luod=0'0 crt=38'39 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.013060 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 75 pg[6.b( v 38'39 (0'0,38'39] local-lis/les=57/58 n=1 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=74) [1] r=-1 lpr=74 pi=[57,74)/1 luod=0'0 crt=38'39 mlcod 0'0 active mbc={}] enter Started/ToDelete
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 75 pg[6.b( v 38'39 (0'0,38'39] local-lis/les=57/58 n=1 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=74) [1] r=-1 lpr=74 pi=[57,74)/1 luod=0'0 crt=38'39 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReseved
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 75 pg[6.b( v 38'39 (0'0,38'39] local-lis/les=57/58 n=1 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=74) [1] r=-1 lpr=74 pi=[57,74)/1 luod=0'0 crt=38'39 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000092 1 0.000105
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 75 pg[6.b( v 38'39 (0'0,38'39] local-lis/les=57/58 n=1 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=74) [1] r=-1 lpr=74 pi=[57,74)/1 luod=0'0 crt=38'39 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: unregistering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 75 pg[6.b( v 38'39 (0'0,38'39] lb MIN local-lis/les=57/58 n=1 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=74) [1] r=-1 lpr=74 DELETING pi=[57,74)/1 luod=0'0 crt=38'39 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.011146 2 0.000226
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 75 pg[6.b( v 38'39 (0'0,38'39] lb MIN local-lis/les=57/58 n=1 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=74) [1] r=-1 lpr=74 pi=[57,74)/1 luod=0'0 crt=38'39 mlcod 0'0 active mbc={}] exit Started/ToDelete 0.011291 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 75 pg[6.b( v 38'39 (0'0,38'39] lb MIN local-lis/les=57/58 n=1 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=74) [1] r=-1 lpr=74 pi=[57,74)/1 luod=0'0 crt=38'39 mlcod 0'0 active mbc={}] exit Started 0.754368 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: unregistering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:19.884250+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 67461120 unmapped: 638976 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 75 handle_osd_map epochs [75,76], i have 75, src has [1,76]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:20.884449+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 660393 data_alloc: 218103808 data_used: 77824
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 67510272 unmapped: 589824 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:21.884614+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 10.4 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 10.4 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 565248 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:22.884756+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 47 sent 45 num 2 unsent 2 sending 2
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:28:51.892831+0000 osd.0 (osd.0) 46 : cluster [DBG] 10.4 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:28:51.906979+0000 osd.0 (osd.0) 47 : cluster [DBG] 10.4 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 76 heartbeat osd_stat(store_statfs(0x4fe0e1000/0x0/0x4ffc00000, data 0x6c986/0xec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 76 handle_osd_map epochs [77,78], i have 76, src has [1,78]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 76 handle_osd_map epochs [77,78], i have 78, src has [1,78]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 78 pg[6.d( v 38'39 (0'0,38'39] local-lis/les=61/62 n=1 ec=47/21 lis/c=61/61 les/c/f=62/62/0 sis=61) [0] r=0 lpr=61 crt=38'39 mlcod 38'39 active+clean] exit Started/Primary/Active/Clean 27.495733 45 0.000232
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 78 pg[6.d( v 38'39 (0'0,38'39] local-lis/les=61/62 n=1 ec=47/21 lis/c=61/61 les/c/f=62/62/0 sis=61) [0] r=0 lpr=61 crt=38'39 mlcod 38'39 active mbc={255={}}] exit Started/Primary/Active 27.710999 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 78 pg[6.d( v 38'39 (0'0,38'39] local-lis/les=61/62 n=1 ec=47/21 lis/c=61/61 les/c/f=62/62/0 sis=61) [0] r=0 lpr=61 crt=38'39 mlcod 38'39 active mbc={255={}}] exit Started/Primary 28.728876 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 78 pg[6.d( v 38'39 (0'0,38'39] local-lis/les=61/62 n=1 ec=47/21 lis/c=61/61 les/c/f=62/62/0 sis=61) [0] r=0 lpr=61 crt=38'39 mlcod 38'39 active mbc={255={}}] exit Started 28.729127 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 78 pg[6.d( v 38'39 (0'0,38'39] local-lis/les=61/62 n=1 ec=47/21 lis/c=61/61 les/c/f=62/62/0 sis=61) [0] r=0 lpr=61 crt=38'39 mlcod 38'39 active mbc={255={}}] enter Reset
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: unregistering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 78 pg[6.d( v 38'39 (0'0,38'39] local-lis/les=61/62 n=1 ec=47/21 lis/c=61/61 les/c/f=62/62/0 sis=78 pruub=12.302570343s) [1] r=-1 lpr=78 pi=[61,78)/1 crt=38'39 mlcod 38'39 active pruub 134.439956665s@ mbc={255={}}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: unregistering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 78 pg[6.d( v 38'39 (0'0,38'39] local-lis/les=61/62 n=1 ec=47/21 lis/c=61/61 les/c/f=62/62/0 sis=78 pruub=12.302479744s) [1] r=-1 lpr=78 pi=[61,78)/1 crt=38'39 mlcod 0'0 unknown NOTIFY pruub 134.439956665s@ mbc={}] exit Reset 0.000139 1 0.000201
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 78 pg[6.d( v 38'39 (0'0,38'39] local-lis/les=61/62 n=1 ec=47/21 lis/c=61/61 les/c/f=62/62/0 sis=78 pruub=12.302479744s) [1] r=-1 lpr=78 pi=[61,78)/1 crt=38'39 mlcod 0'0 unknown NOTIFY pruub 134.439956665s@ mbc={}] enter Started
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 78 pg[6.d( v 38'39 (0'0,38'39] local-lis/les=61/62 n=1 ec=47/21 lis/c=61/61 les/c/f=62/62/0 sis=78 pruub=12.302479744s) [1] r=-1 lpr=78 pi=[61,78)/1 crt=38'39 mlcod 0'0 unknown NOTIFY pruub 134.439956665s@ mbc={}] enter Start
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 78 pg[6.d( v 38'39 (0'0,38'39] local-lis/les=61/62 n=1 ec=47/21 lis/c=61/61 les/c/f=62/62/0 sis=78 pruub=12.302479744s) [1] r=-1 lpr=78 pi=[61,78)/1 crt=38'39 mlcod 0'0 unknown NOTIFY pruub 134.439956665s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 78 pg[6.d( v 38'39 (0'0,38'39] local-lis/les=61/62 n=1 ec=47/21 lis/c=61/61 les/c/f=62/62/0 sis=78 pruub=12.302479744s) [1] r=-1 lpr=78 pi=[61,78)/1 crt=38'39 mlcod 0'0 unknown NOTIFY pruub 134.439956665s@ mbc={}] exit Start 0.000015 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 78 pg[6.d( v 38'39 (0'0,38'39] local-lis/les=61/62 n=1 ec=47/21 lis/c=61/61 les/c/f=62/62/0 sis=78 pruub=12.302479744s) [1] r=-1 lpr=78 pi=[61,78)/1 crt=38'39 mlcod 0'0 unknown NOTIFY pruub 134.439956665s@ mbc={}] enter Started/Stray
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: unregistering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: unregistering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 67608576 unmapped: 491520 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 78 heartbeat osd_stat(store_statfs(0x4fe0e1000/0x0/0x4ffc00000, data 0x6c986/0xec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 78 handle_osd_map epochs [78,79], i have 78, src has [1,79]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 47) v1
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:28:51.892831+0000 osd.0 (osd.0) 46 : cluster [DBG] 10.4 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:28:51.906979+0000 osd.0 (osd.0) 47 : cluster [DBG] 10.4 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 79 pg[6.d( v 38'39 (0'0,38'39] local-lis/les=61/62 n=1 ec=47/21 lis/c=61/61 les/c/f=62/62/0 sis=78) [1] r=-1 lpr=78 pi=[61,78)/1 crt=38'39 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.684902 7 0.000195
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 79 pg[6.d( v 38'39 (0'0,38'39] local-lis/les=61/62 n=1 ec=47/21 lis/c=61/61 les/c/f=62/62/0 sis=78) [1] r=-1 lpr=78 pi=[61,78)/1 crt=38'39 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 79 pg[6.d( v 38'39 (0'0,38'39] local-lis/les=61/62 n=1 ec=47/21 lis/c=61/61 les/c/f=62/62/0 sis=78) [1] r=-1 lpr=78 pi=[61,78)/1 crt=38'39 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 79 handle_osd_map epochs [79,79], i have 79, src has [1,79]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 79 pg[6.d( v 38'39 (0'0,38'39] local-lis/les=61/62 n=1 ec=47/21 lis/c=61/61 les/c/f=62/62/0 sis=78) [1] r=-1 lpr=78 pi=[61,78)/1 luod=0'0 crt=38'39 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.076143 2 0.000057
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 79 pg[6.d( v 38'39 (0'0,38'39] local-lis/les=61/62 n=1 ec=47/21 lis/c=61/61 les/c/f=62/62/0 sis=78) [1] r=-1 lpr=78 pi=[61,78)/1 luod=0'0 crt=38'39 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.076183 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 79 pg[6.d( v 38'39 (0'0,38'39] local-lis/les=61/62 n=1 ec=47/21 lis/c=61/61 les/c/f=62/62/0 sis=78) [1] r=-1 lpr=78 pi=[61,78)/1 luod=0'0 crt=38'39 mlcod 0'0 active mbc={}] enter Started/ToDelete
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 79 pg[6.d( v 38'39 (0'0,38'39] local-lis/les=61/62 n=1 ec=47/21 lis/c=61/61 les/c/f=62/62/0 sis=78) [1] r=-1 lpr=78 pi=[61,78)/1 luod=0'0 crt=38'39 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReseved
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 79 pg[6.d( v 38'39 (0'0,38'39] local-lis/les=61/62 n=1 ec=47/21 lis/c=61/61 les/c/f=62/62/0 sis=78) [1] r=-1 lpr=78 pi=[61,78)/1 luod=0'0 crt=38'39 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000083 1 0.000099
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 79 pg[6.d( v 38'39 (0'0,38'39] local-lis/les=61/62 n=1 ec=47/21 lis/c=61/61 les/c/f=62/62/0 sis=78) [1] r=-1 lpr=78 pi=[61,78)/1 luod=0'0 crt=38'39 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: unregistering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 79 pg[6.d( v 38'39 (0'0,38'39] lb MIN local-lis/les=61/62 n=1 ec=47/21 lis/c=61/61 les/c/f=62/62/0 sis=78) [1] r=-1 lpr=78 DELETING pi=[61,78)/1 luod=0'0 crt=38'39 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.017933 2 0.000180
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 79 pg[6.d( v 38'39 (0'0,38'39] lb MIN local-lis/les=61/62 n=1 ec=47/21 lis/c=61/61 les/c/f=62/62/0 sis=78) [1] r=-1 lpr=78 pi=[61,78)/1 luod=0'0 crt=38'39 mlcod 0'0 active mbc={}] exit Started/ToDelete 0.018054 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 79 pg[6.d( v 38'39 (0'0,38'39] lb MIN local-lis/les=61/62 n=1 ec=47/21 lis/c=61/61 les/c/f=62/62/0 sis=78) [1] r=-1 lpr=78 pi=[61,78)/1 luod=0'0 crt=38'39 mlcod 0'0 active mbc={}] exit Started 0.779210 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: unregistering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:23.884976+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 67657728 unmapped: 442368 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 79 handle_osd_map epochs [79,80], i have 79, src has [1,80]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:24.885228+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 67674112 unmapped: 425984 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:25.885398+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 670553 data_alloc: 218103808 data_used: 77824
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 67674112 unmapped: 425984 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 80 handle_osd_map epochs [80,81], i have 80, src has [1,81]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 81 pg[6.f( v 38'39 (0'0,38'39] local-lis/les=57/58 n=1 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=57) [0] r=0 lpr=57 crt=38'39 mlcod 38'39 active+clean] exit Started/Primary/Active/Clean 37.508408 67 0.000255
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 81 pg[6.f( v 38'39 (0'0,38'39] local-lis/les=57/58 n=1 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=57) [0] r=0 lpr=57 crt=38'39 mlcod 38'39 active mbc={255={}}] exit Started/Primary/Active 38.622052 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 81 pg[6.f( v 38'39 (0'0,38'39] local-lis/les=57/58 n=1 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=57) [0] r=0 lpr=57 crt=38'39 mlcod 38'39 active mbc={255={}}] exit Started/Primary 39.632482 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 81 pg[6.f( v 38'39 (0'0,38'39] local-lis/les=57/58 n=1 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=57) [0] r=0 lpr=57 crt=38'39 mlcod 38'39 active mbc={255={}}] exit Started 39.632525 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 81 pg[6.f( v 38'39 (0'0,38'39] local-lis/les=57/58 n=1 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=57) [0] r=0 lpr=57 crt=38'39 mlcod 38'39 active mbc={255={}}] enter Reset
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: unregistering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 81 pg[6.f( v 38'39 (0'0,38'39] local-lis/les=57/58 n=1 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=81 pruub=9.386934280s) [2] r=-1 lpr=81 pi=[57,81)/1 crt=38'39 mlcod 38'39 active pruub 135.256958008s@ mbc={255={}}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: unregistering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 81 pg[6.f( v 38'39 (0'0,38'39] local-lis/les=57/58 n=1 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=81 pruub=9.386527061s) [2] r=-1 lpr=81 pi=[57,81)/1 crt=38'39 mlcod 0'0 unknown NOTIFY pruub 135.256958008s@ mbc={}] exit Reset 0.000469 1 0.000774
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 81 pg[6.f( v 38'39 (0'0,38'39] local-lis/les=57/58 n=1 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=81 pruub=9.386527061s) [2] r=-1 lpr=81 pi=[57,81)/1 crt=38'39 mlcod 0'0 unknown NOTIFY pruub 135.256958008s@ mbc={}] enter Started
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 81 pg[6.f( v 38'39 (0'0,38'39] local-lis/les=57/58 n=1 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=81 pruub=9.386527061s) [2] r=-1 lpr=81 pi=[57,81)/1 crt=38'39 mlcod 0'0 unknown NOTIFY pruub 135.256958008s@ mbc={}] enter Start
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 81 pg[6.f( v 38'39 (0'0,38'39] local-lis/les=57/58 n=1 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=81 pruub=9.386527061s) [2] r=-1 lpr=81 pi=[57,81)/1 crt=38'39 mlcod 0'0 unknown NOTIFY pruub 135.256958008s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 81 pg[6.f( v 38'39 (0'0,38'39] local-lis/les=57/58 n=1 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=81 pruub=9.386527061s) [2] r=-1 lpr=81 pi=[57,81)/1 crt=38'39 mlcod 0'0 unknown NOTIFY pruub 135.256958008s@ mbc={}] exit Start 0.000047 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 81 pg[6.f( v 38'39 (0'0,38'39] local-lis/les=57/58 n=1 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=81 pruub=9.386527061s) [2] r=-1 lpr=81 pi=[57,81)/1 crt=38'39 mlcod 0'0 unknown NOTIFY pruub 135.256958008s@ mbc={}] enter Started/Stray
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 81 handle_osd_map epochs [81,81], i have 81, src has [1,81]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: unregistering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: unregistering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:26.885535+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 2.1c deep-scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 2.1c deep-scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 67706880 unmapped: 393216 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 81 handle_osd_map epochs [81,82], i have 81, src has [1,82]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 82 handle_osd_map epochs [82,82], i have 82, src has [1,82]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 82 pg[6.f( v 38'39 (0'0,38'39] local-lis/les=57/58 n=1 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=81) [2] r=-1 lpr=81 pi=[57,81)/1 crt=38'39 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.007311 7 0.000277
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 82 pg[6.f( v 38'39 (0'0,38'39] local-lis/les=57/58 n=1 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=81) [2] r=-1 lpr=81 pi=[57,81)/1 crt=38'39 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 82 pg[6.f( v 38'39 (0'0,38'39] local-lis/les=57/58 n=1 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=81) [2] r=-1 lpr=81 pi=[57,81)/1 crt=38'39 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 82 pg[6.f( v 38'39 (0'0,38'39] local-lis/les=57/58 n=1 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=81) [2] r=-1 lpr=81 pi=[57,81)/1 luod=0'0 crt=38'39 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.135380 2 0.000150
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 82 pg[6.f( v 38'39 (0'0,38'39] local-lis/les=57/58 n=1 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=81) [2] r=-1 lpr=81 pi=[57,81)/1 luod=0'0 crt=38'39 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.135445 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 82 pg[6.f( v 38'39 (0'0,38'39] local-lis/les=57/58 n=1 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=81) [2] r=-1 lpr=81 pi=[57,81)/1 luod=0'0 crt=38'39 mlcod 0'0 active mbc={}] enter Started/ToDelete
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 82 pg[6.f( v 38'39 (0'0,38'39] local-lis/les=57/58 n=1 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=81) [2] r=-1 lpr=81 pi=[57,81)/1 luod=0'0 crt=38'39 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReseved
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 82 pg[6.f( v 38'39 (0'0,38'39] local-lis/les=57/58 n=1 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=81) [2] r=-1 lpr=81 pi=[57,81)/1 luod=0'0 crt=38'39 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000106 1 0.000094
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 82 pg[6.f( v 38'39 (0'0,38'39] local-lis/les=57/58 n=1 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=81) [2] r=-1 lpr=81 pi=[57,81)/1 luod=0'0 crt=38'39 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:27.885639+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 49 sent 47 num 2 unsent 2 sending 2
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:28:57.061720+0000 osd.0 (osd.0) 48 : cluster [DBG] 2.1c deep-scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:28:57.075807+0000 osd.0 (osd.0) 49 : cluster [DBG] 2.1c deep-scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 82 pg[6.f( v 38'39 (0'0,38'39] lb MIN local-lis/les=57/58 n=1 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=81) [2] r=-1 lpr=81 DELETING pi=[57,81)/1 luod=0'0 crt=38'39 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.026403 2 0.000269
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 82 pg[6.f( v 38'39 (0'0,38'39] lb MIN local-lis/les=57/58 n=1 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=81) [2] r=-1 lpr=81 pi=[57,81)/1 luod=0'0 crt=38'39 mlcod 0'0 active mbc={}] exit Started/ToDelete 0.026575 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 82 pg[6.f( v 38'39 (0'0,38'39] lb MIN local-lis/les=57/58 n=1 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=81) [2] r=-1 lpr=81 pi=[57,81)/1 luod=0'0 crt=38'39 mlcod 0'0 active mbc={}] exit Started 1.169555 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 67706880 unmapped: 393216 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 49) v1
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:28:57.061720+0000 osd.0 (osd.0) 48 : cluster [DBG] 2.1c deep-scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:28:57.075807+0000 osd.0 (osd.0) 49 : cluster [DBG] 2.1c deep-scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 82 heartbeat osd_stat(store_statfs(0x4fe0d1000/0x0/0x4ffc00000, data 0x76d95/0xfb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 82 handle_osd_map epochs [83,83], i have 82, src has [1,83]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:28.885849+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 67715072 unmapped: 385024 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:29.885977+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 67715072 unmapped: 385024 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 83 handle_osd_map epochs [83,84], i have 83, src has [1,84]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.051119804s of 11.178288460s, submitted: 28
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:30.886146+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 679227 data_alloc: 218103808 data_used: 86016
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 67764224 unmapped: 335872 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 84 heartbeat osd_stat(store_statfs(0x4fcf29000/0x0/0x4ffc00000, data 0x7a48f/0x101000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:31.886270+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 67788800 unmapped: 311296 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 84 heartbeat osd_stat(store_statfs(0x4fcf29000/0x0/0x4ffc00000, data 0x7a48f/0x101000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:32.886415+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 67788800 unmapped: 311296 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:33.886541+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 67796992 unmapped: 303104 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:34.886743+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 10.15 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 10.15 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 67796992 unmapped: 303104 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:35.887040+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 51 sent 49 num 2 unsent 2 sending 2
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:29:05.051202+0000 osd.0 (osd.0) 50 : cluster [DBG] 10.15 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:29:05.068993+0000 osd.0 (osd.0) 51 : cluster [DBG] 10.15 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 5.4 deep-scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 5.4 deep-scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 678627 data_alloc: 218103808 data_used: 86016
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 67796992 unmapped: 303104 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 84 handle_osd_map epochs [85,86], i have 84, src has [1,86]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 86 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=57) [0] r=0 lpr=57 crt=41'581 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 48.340186 83 0.000368
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 86 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=57) [0] r=0 lpr=57 crt=41'581 mlcod 0'0 active mbc={}] exit Started/Primary/Active 48.344818 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 86 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=57) [0] r=0 lpr=57 crt=41'581 mlcod 0'0 active mbc={}] exit Started/Primary 49.358865 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 86 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=57) [0] r=0 lpr=57 crt=41'581 mlcod 0'0 active mbc={}] exit Started 49.358975 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 86 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=57) [0] r=0 lpr=57 crt=41'581 mlcod 0'0 active mbc={}] enter Reset
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.13] failed. State was: unregistering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 86 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=86 pruub=15.660663605s) [2] r=-1 lpr=86 pi=[57,86)/1 crt=41'581 mlcod 0'0 active pruub 151.252334595s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.13] failed. State was: unregistering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 86 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=86 pruub=15.660349846s) [2] r=-1 lpr=86 pi=[57,86)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 151.252334595s@ mbc={}] exit Reset 0.000400 1 0.000621
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 86 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=86 pruub=15.660349846s) [2] r=-1 lpr=86 pi=[57,86)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 151.252334595s@ mbc={}] enter Started
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 86 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=86 pruub=15.660349846s) [2] r=-1 lpr=86 pi=[57,86)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 151.252334595s@ mbc={}] enter Start
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 86 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=86 pruub=15.660349846s) [2] r=-1 lpr=86 pi=[57,86)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 151.252334595s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 86 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=86 pruub=15.660349846s) [2] r=-1 lpr=86 pi=[57,86)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 151.252334595s@ mbc={}] exit Start 0.000109 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 86 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=86 pruub=15.660349846s) [2] r=-1 lpr=86 pi=[57,86)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 151.252334595s@ mbc={}] enter Started/Stray
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.13] failed. State was: unregistering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 86 handle_osd_map epochs [86,87], i have 86, src has [1,87]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 87 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=86) [2] r=-1 lpr=86 pi=[57,86)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.440530 3 0.000382
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 87 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=86) [2] r=-1 lpr=86 pi=[57,86)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.441100 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 87 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=86) [2] r=-1 lpr=86 pi=[57,86)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 87 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=87) [2]/[0] r=0 lpr=87 pi=[57,87)/1 crt=41'581 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 87 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=87) [2]/[0] r=0 lpr=87 pi=[57,87)/1 crt=41'581 mlcod 0'0 remapped mbc={}] exit Reset 0.000618 1 0.001017
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 87 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=87) [2]/[0] r=0 lpr=87 pi=[57,87)/1 crt=41'581 mlcod 0'0 remapped mbc={}] enter Started
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 87 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=87) [2]/[0] r=0 lpr=87 pi=[57,87)/1 crt=41'581 mlcod 0'0 remapped mbc={}] enter Start
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 87 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=87) [2]/[0] r=0 lpr=87 pi=[57,87)/1 crt=41'581 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 87 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=87) [2]/[0] r=0 lpr=87 pi=[57,87)/1 crt=41'581 mlcod 0'0 remapped mbc={}] exit Start 0.000027 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 87 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=87) [2]/[0] r=0 lpr=87 pi=[57,87)/1 crt=41'581 mlcod 0'0 remapped mbc={}] enter Started/Primary
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 87 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=87) [2]/[0] r=0 lpr=87 pi=[57,87)/1 crt=41'581 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 87 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=87) [2]/[0] r=0 lpr=87 pi=[57,87)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 51) v1
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:29:05.051202+0000 osd.0 (osd.0) 50 : cluster [DBG] 10.15 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:29:05.068993+0000 osd.0 (osd.0) 51 : cluster [DBG] 10.15 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 87 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=87) [2]/[0] r=0 lpr=87 pi=[57,87)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.002899 2 0.000120
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 87 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=87) [2]/[0] r=0 lpr=87 pi=[57,87)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 87 handle_osd_map epochs [87,87], i have 87, src has [1,87]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:36.887272+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 87 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=87) [2]/[0] async=[2] r=0 lpr=87 pi=[57,87)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000047 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 87 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=87) [2]/[0] async=[2] r=0 lpr=87 pi=[57,87)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 87 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=87) [2]/[0] async=[2] r=0 lpr=87 pi=[57,87)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000013 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 87 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=87) [2]/[0] async=[2] r=0 lpr=87 pi=[57,87)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 53 sent 51 num 2 unsent 2 sending 2
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:29:06.023847+0000 osd.0 (osd.0) 52 : cluster [DBG] 5.4 deep-scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:29:06.037895+0000 osd.0 (osd.0) 53 : cluster [DBG] 5.4 deep-scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 67854336 unmapped: 245760 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 87 handle_osd_map epochs [87,88], i have 87, src has [1,88]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 88 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=87) [2]/[0] async=[2] r=0 lpr=87 pi=[57,87)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.997458 3 0.000245
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 88 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=87) [2]/[0] async=[2] r=0 lpr=87 pi=[57,87)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.000615 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 88 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=87) [2]/[0] async=[2] r=0 lpr=87 pi=[57,87)/1 crt=41'581 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 88 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=87/88 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=87) [2]/[0] async=[2] r=0 lpr=87 pi=[57,87)/1 crt=41'581 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 88 handle_osd_map epochs [88,88], i have 88, src has [1,88]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:37.887542+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 88 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=87/88 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=87) [2]/[0] async=[2] r=0 lpr=87 pi=[57,87)/1 crt=41'581 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 88 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=87/88 n=6 ec=49/34 lis/c=87/57 les/c/f=88/58/0 sis=87) [2]/[0] async=[2] r=0 lpr=87 pi=[57,87)/1 crt=41'581 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.004712 5 0.000425
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 88 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=87/88 n=6 ec=49/34 lis/c=87/57 les/c/f=88/58/0 sis=87) [2]/[0] async=[2] r=0 lpr=87 pi=[57,87)/1 crt=41'581 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 88 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=87/88 n=6 ec=49/34 lis/c=87/57 les/c/f=88/58/0 sis=87) [2]/[0] async=[2] r=0 lpr=87 pi=[57,87)/1 crt=41'581 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000095 1 0.000083
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 88 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=87/88 n=6 ec=49/34 lis/c=87/57 les/c/f=88/58/0 sis=87) [2]/[0] async=[2] r=0 lpr=87 pi=[57,87)/1 crt=41'581 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 88 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=87/88 n=6 ec=49/34 lis/c=87/57 les/c/f=88/58/0 sis=87) [2]/[0] async=[2] r=0 lpr=87 pi=[57,87)/1 crt=41'581 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000290 1 0.000035
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 88 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=87/88 n=6 ec=49/34 lis/c=87/57 les/c/f=88/58/0 sis=87) [2]/[0] async=[2] r=0 lpr=87 pi=[57,87)/1 crt=41'581 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 53) v1
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:29:06.023847+0000 osd.0 (osd.0) 52 : cluster [DBG] 5.4 deep-scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:29:06.037895+0000 osd.0 (osd.0) 53 : cluster [DBG] 5.4 deep-scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 88 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=87/88 n=6 ec=49/34 lis/c=87/57 les/c/f=88/58/0 sis=87) [2]/[0] async=[2] r=0 lpr=87 pi=[57,87)/1 crt=41'581 mlcod 41'581 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.042712 2 0.000069
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 88 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=87/88 n=6 ec=49/34 lis/c=87/57 les/c/f=88/58/0 sis=87) [2]/[0] async=[2] r=0 lpr=87 pi=[57,87)/1 crt=41'581 mlcod 41'581 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 67870720 unmapped: 229376 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 88 heartbeat osd_stat(store_statfs(0x4fcf1e000/0x0/0x4ffc00000, data 0x8118c/0x10d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:38.887669+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 88 handle_osd_map epochs [89,89], i have 88, src has [1,89]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 89 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=87/88 n=6 ec=49/34 lis/c=87/57 les/c/f=88/58/0 sis=87) [2]/[0] async=[2] r=0 lpr=87 pi=[57,87)/1 crt=41'581 mlcod 41'581 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.989248 1 0.000222
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 89 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=87/88 n=6 ec=49/34 lis/c=87/57 les/c/f=88/58/0 sis=87) [2]/[0] async=[2] r=0 lpr=87 pi=[57,87)/1 crt=41'581 mlcod 41'581 active+remapped mbc={255={}}] exit Started/Primary/Active 1.037347 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 89 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=87/88 n=6 ec=49/34 lis/c=87/57 les/c/f=88/58/0 sis=87) [2]/[0] async=[2] r=0 lpr=87 pi=[57,87)/1 crt=41'581 mlcod 41'581 active+remapped mbc={255={}}] exit Started/Primary 2.038006 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 89 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=87/88 n=6 ec=49/34 lis/c=87/57 les/c/f=88/58/0 sis=87) [2]/[0] async=[2] r=0 lpr=87 pi=[57,87)/1 crt=41'581 mlcod 41'581 active+remapped mbc={255={}}] exit Started 2.038069 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 89 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=87/88 n=6 ec=49/34 lis/c=87/57 les/c/f=88/58/0 sis=87) [2]/[0] async=[2] r=0 lpr=87 pi=[57,87)/1 crt=41'581 mlcod 41'581 active+remapped mbc={255={}}] enter Reset
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.13] failed. State was: unregistering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 89 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=87/88 n=6 ec=49/34 lis/c=87/57 les/c/f=88/58/0 sis=89 pruub=14.967097282s) [2] async=[2] r=-1 lpr=89 pi=[57,89)/1 crt=41'581 mlcod 41'581 active pruub 153.039077759s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.13] failed. State was: unregistering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 89 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=87/88 n=6 ec=49/34 lis/c=87/57 les/c/f=88/58/0 sis=89 pruub=14.966977119s) [2] r=-1 lpr=89 pi=[57,89)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 153.039077759s@ mbc={}] exit Reset 0.000213 1 0.000304
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 89 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=87/88 n=6 ec=49/34 lis/c=87/57 les/c/f=88/58/0 sis=89 pruub=14.966977119s) [2] r=-1 lpr=89 pi=[57,89)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 153.039077759s@ mbc={}] enter Started
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 89 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=87/88 n=6 ec=49/34 lis/c=87/57 les/c/f=88/58/0 sis=89 pruub=14.966977119s) [2] r=-1 lpr=89 pi=[57,89)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 153.039077759s@ mbc={}] enter Start
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 89 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=87/88 n=6 ec=49/34 lis/c=87/57 les/c/f=88/58/0 sis=89 pruub=14.966977119s) [2] r=-1 lpr=89 pi=[57,89)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 153.039077759s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 89 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=87/88 n=6 ec=49/34 lis/c=87/57 les/c/f=88/58/0 sis=89 pruub=14.966977119s) [2] r=-1 lpr=89 pi=[57,89)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 153.039077759s@ mbc={}] exit Start 0.000183 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 89 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=87/88 n=6 ec=49/34 lis/c=87/57 les/c/f=88/58/0 sis=89 pruub=14.966977119s) [2] r=-1 lpr=89 pi=[57,89)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 153.039077759s@ mbc={}] enter Started/Stray
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 89 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=56/57 n=6 ec=49/34 lis/c=56/56 les/c/f=57/57/0 sis=56) [0] r=0 lpr=56 crt=41'581 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 51.828128 94 0.000424
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 89 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=56/57 n=6 ec=49/34 lis/c=56/56 les/c/f=57/57/0 sis=56) [0] r=0 lpr=56 crt=41'581 mlcod 0'0 active mbc={}] exit Started/Primary/Active 51.838902 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 89 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=56/57 n=6 ec=49/34 lis/c=56/56 les/c/f=57/57/0 sis=56) [0] r=0 lpr=56 crt=41'581 mlcod 0'0 active mbc={}] exit Started/Primary 52.833646 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 89 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=56/57 n=6 ec=49/34 lis/c=56/56 les/c/f=57/57/0 sis=56) [0] r=0 lpr=56 crt=41'581 mlcod 0'0 active mbc={}] exit Started 52.833910 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 89 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=56/57 n=6 ec=49/34 lis/c=56/56 les/c/f=57/57/0 sis=56) [0] r=0 lpr=56 crt=41'581 mlcod 0'0 active mbc={}] enter Reset
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.15] failed. State was: unregistering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 89 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=56/57 n=6 ec=49/34 lis/c=56/56 les/c/f=57/57/0 sis=89 pruub=12.173355103s) [1] r=-1 lpr=89 pi=[56,89)/1 crt=41'581 mlcod 0'0 active pruub 150.246643066s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.15] failed. State was: unregistering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 89 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=56/57 n=6 ec=49/34 lis/c=56/56 les/c/f=57/57/0 sis=89 pruub=12.173285484s) [1] r=-1 lpr=89 pi=[56,89)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 150.246643066s@ mbc={}] exit Reset 0.000118 1 0.000186
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 89 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=56/57 n=6 ec=49/34 lis/c=56/56 les/c/f=57/57/0 sis=89 pruub=12.173285484s) [1] r=-1 lpr=89 pi=[56,89)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 150.246643066s@ mbc={}] enter Started
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 89 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=56/57 n=6 ec=49/34 lis/c=56/56 les/c/f=57/57/0 sis=89 pruub=12.173285484s) [1] r=-1 lpr=89 pi=[56,89)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 150.246643066s@ mbc={}] enter Start
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 89 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=56/57 n=6 ec=49/34 lis/c=56/56 les/c/f=57/57/0 sis=89 pruub=12.173285484s) [1] r=-1 lpr=89 pi=[56,89)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 150.246643066s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 89 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=56/57 n=6 ec=49/34 lis/c=56/56 les/c/f=57/57/0 sis=89 pruub=12.173285484s) [1] r=-1 lpr=89 pi=[56,89)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 150.246643066s@ mbc={}] exit Start 0.000023 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 89 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=56/57 n=6 ec=49/34 lis/c=56/56 les/c/f=57/57/0 sis=89 pruub=12.173285484s) [1] r=-1 lpr=89 pi=[56,89)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 150.246643066s@ mbc={}] enter Started/Stray
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 89 handle_osd_map epochs [89,89], i have 89, src has [1,89]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.15] failed. State was: unregistering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 89 handle_osd_map epochs [89,89], i have 89, src has [1,89]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.13] failed. State was: unregistering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.13] failed. State was: unregistering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 67895296 unmapped: 1253376 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 89 handle_osd_map epochs [89,90], i have 89, src has [1,90]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 90 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=56/57 n=6 ec=49/34 lis/c=56/56 les/c/f=57/57/0 sis=89) [1] r=-1 lpr=89 pi=[56,89)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.889922 3 0.000166
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 90 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=56/57 n=6 ec=49/34 lis/c=56/56 les/c/f=57/57/0 sis=89) [1] r=-1 lpr=89 pi=[56,89)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.890049 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 90 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=56/57 n=6 ec=49/34 lis/c=56/56 les/c/f=57/57/0 sis=89) [1] r=-1 lpr=89 pi=[56,89)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 90 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=56/57 n=6 ec=49/34 lis/c=56/56 les/c/f=57/57/0 sis=90) [1]/[0] r=0 lpr=90 pi=[56,90)/1 crt=41'581 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 90 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=56/57 n=6 ec=49/34 lis/c=56/56 les/c/f=57/57/0 sis=90) [1]/[0] r=0 lpr=90 pi=[56,90)/1 crt=41'581 mlcod 0'0 remapped mbc={}] exit Reset 0.000264 1 0.000377
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 90 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=56/57 n=6 ec=49/34 lis/c=56/56 les/c/f=57/57/0 sis=90) [1]/[0] r=0 lpr=90 pi=[56,90)/1 crt=41'581 mlcod 0'0 remapped mbc={}] enter Started
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 90 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=56/57 n=6 ec=49/34 lis/c=56/56 les/c/f=57/57/0 sis=90) [1]/[0] r=0 lpr=90 pi=[56,90)/1 crt=41'581 mlcod 0'0 remapped mbc={}] enter Start
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 90 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=56/57 n=6 ec=49/34 lis/c=56/56 les/c/f=57/57/0 sis=90) [1]/[0] r=0 lpr=90 pi=[56,90)/1 crt=41'581 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 90 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=56/57 n=6 ec=49/34 lis/c=56/56 les/c/f=57/57/0 sis=90) [1]/[0] r=0 lpr=90 pi=[56,90)/1 crt=41'581 mlcod 0'0 remapped mbc={}] exit Start 0.000137 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 90 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=56/57 n=6 ec=49/34 lis/c=56/56 les/c/f=57/57/0 sis=90) [1]/[0] r=0 lpr=90 pi=[56,90)/1 crt=41'581 mlcod 0'0 remapped mbc={}] enter Started/Primary
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 90 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=56/57 n=6 ec=49/34 lis/c=56/56 les/c/f=57/57/0 sis=90) [1]/[0] r=0 lpr=90 pi=[56,90)/1 crt=41'581 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 90 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=56/57 n=6 ec=49/34 lis/c=56/56 les/c/f=57/57/0 sis=90) [1]/[0] r=0 lpr=90 pi=[56,90)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 90 handle_osd_map epochs [90,90], i have 90, src has [1,90]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 90 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=56/57 n=6 ec=49/34 lis/c=56/56 les/c/f=57/57/0 sis=90) [1]/[0] r=0 lpr=90 pi=[56,90)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.005580 2 0.000406
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 90 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=56/57 n=6 ec=49/34 lis/c=56/56 les/c/f=57/57/0 sis=90) [1]/[0] r=0 lpr=90 pi=[56,90)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 90 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=56/57 n=6 ec=49/34 lis/c=56/56 les/c/f=57/57/0 sis=90) [1]/[0] async=[1] r=0 lpr=90 pi=[56,90)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000052 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 90 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=56/57 n=6 ec=49/34 lis/c=56/56 les/c/f=57/57/0 sis=90) [1]/[0] async=[1] r=0 lpr=90 pi=[56,90)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 90 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=56/57 n=6 ec=49/34 lis/c=56/56 les/c/f=57/57/0 sis=90) [1]/[0] async=[1] r=0 lpr=90 pi=[56,90)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000007 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 90 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=56/57 n=6 ec=49/34 lis/c=56/56 les/c/f=57/57/0 sis=90) [1]/[0] async=[1] r=0 lpr=90 pi=[56,90)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 90 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=87/88 n=6 ec=49/34 lis/c=87/57 les/c/f=88/58/0 sis=89) [2] r=-1 lpr=89 pi=[57,89)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.914439 7 0.000462
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 90 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=87/88 n=6 ec=49/34 lis/c=87/57 les/c/f=88/58/0 sis=89) [2] r=-1 lpr=89 pi=[57,89)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 90 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=87/88 n=6 ec=49/34 lis/c=87/57 les/c/f=88/58/0 sis=89) [2] r=-1 lpr=89 pi=[57,89)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 90 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=87/88 n=6 ec=49/34 lis/c=87/57 les/c/f=88/58/0 sis=89) [2] r=-1 lpr=89 pi=[57,89)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000133 1 0.000073
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 90 pg[9.13( v 41'581 (0'0,41'581] local-lis/les=87/88 n=6 ec=49/34 lis/c=87/57 les/c/f=88/58/0 sis=89) [2] r=-1 lpr=89 pi=[57,89)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.13] failed. State was: unregistering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 90 pg[9.13( v 41'581 (0'0,41'581] lb MIN local-lis/les=87/88 n=6 ec=49/34 lis/c=87/57 les/c/f=88/58/0 sis=89) [2] r=-1 lpr=89 DELETING pi=[57,89)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.041568 2 0.000341
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 90 pg[9.13( v 41'581 (0'0,41'581] lb MIN local-lis/les=87/88 n=6 ec=49/34 lis/c=87/57 les/c/f=88/58/0 sis=89) [2] r=-1 lpr=89 pi=[57,89)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.041778 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 90 pg[9.13( v 41'581 (0'0,41'581] lb MIN local-lis/les=87/88 n=6 ec=49/34 lis/c=87/57 les/c/f=88/58/0 sis=89) [2] r=-1 lpr=89 pi=[57,89)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.956518 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.13] failed. State was: unregistering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:39.887830+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 67903488 unmapped: 1245184 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 90 handle_osd_map epochs [90,91], i have 90, src has [1,91]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.902430534s of 10.028028488s, submitted: 27
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 90 handle_osd_map epochs [90,91], i have 91, src has [1,91]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 91 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=56/57 n=6 ec=49/34 lis/c=56/56 les/c/f=57/57/0 sis=90) [1]/[0] async=[1] r=0 lpr=90 pi=[56,90)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.997288 3 0.000209
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 91 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=56/57 n=6 ec=49/34 lis/c=56/56 les/c/f=57/57/0 sis=90) [1]/[0] async=[1] r=0 lpr=90 pi=[56,90)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.003039 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 91 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=56/57 n=6 ec=49/34 lis/c=56/56 les/c/f=57/57/0 sis=90) [1]/[0] async=[1] r=0 lpr=90 pi=[56,90)/1 crt=41'581 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 91 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=90/91 n=6 ec=49/34 lis/c=56/56 les/c/f=57/57/0 sis=90) [1]/[0] async=[1] r=0 lpr=90 pi=[56,90)/1 crt=41'581 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:40.887971+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 91 pg[9.16(unlocked)] enter Initial
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 91 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=91) [0] r=0 lpr=0 pi=[67,91)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000076 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 91 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=91) [0] r=0 lpr=0 pi=[67,91)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 91 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=91) [0] r=0 lpr=91 pi=[67,91)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000025 1 0.000040
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 91 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=91) [0] r=0 lpr=91 pi=[67,91)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 91 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=91) [0] r=0 lpr=91 pi=[67,91)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 91 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=91) [0] r=0 lpr=91 pi=[67,91)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 91 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=91) [0] r=0 lpr=91 pi=[67,91)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 91 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=91) [0] r=0 lpr=91 pi=[67,91)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 91 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=91) [0] r=0 lpr=91 pi=[67,91)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 91 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=91) [0] r=0 lpr=91 pi=[67,91)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 91 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=91) [0] r=0 lpr=91 pi=[67,91)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000121 1 0.000055
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 91 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=91) [0] r=0 lpr=91 pi=[67,91)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 91 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=91) [0] r=0 lpr=91 pi=[67,91)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000025 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 91 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=91) [0] r=0 lpr=91 pi=[67,91)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000169 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 91 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=91) [0] r=0 lpr=91 pi=[67,91)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 2.f scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 2.f scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 91 handle_osd_map epochs [91,91], i have 91, src has [1,91]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 91 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=90/91 n=6 ec=49/34 lis/c=56/56 les/c/f=57/57/0 sis=90) [1]/[0] async=[1] r=0 lpr=90 pi=[56,90)/1 crt=41'581 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 91 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=90/91 n=6 ec=49/34 lis/c=90/56 les/c/f=91/57/0 sis=90) [1]/[0] async=[1] r=0 lpr=90 pi=[56,90)/1 crt=41'581 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.326277 5 0.000336
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 91 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=90/91 n=6 ec=49/34 lis/c=90/56 les/c/f=91/57/0 sis=90) [1]/[0] async=[1] r=0 lpr=90 pi=[56,90)/1 crt=41'581 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 91 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=90/91 n=6 ec=49/34 lis/c=90/56 les/c/f=91/57/0 sis=90) [1]/[0] async=[1] r=0 lpr=90 pi=[56,90)/1 crt=41'581 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000463 1 0.000095
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 91 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=90/91 n=6 ec=49/34 lis/c=90/56 les/c/f=91/57/0 sis=90) [1]/[0] async=[1] r=0 lpr=90 pi=[56,90)/1 crt=41'581 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 91 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=90/91 n=6 ec=49/34 lis/c=90/56 les/c/f=91/57/0 sis=90) [1]/[0] async=[1] r=0 lpr=90 pi=[56,90)/1 crt=41'581 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000835 1 0.000090
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 91 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=90/91 n=6 ec=49/34 lis/c=90/56 les/c/f=91/57/0 sis=90) [1]/[0] async=[1] r=0 lpr=90 pi=[56,90)/1 crt=41'581 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 91 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=90/91 n=6 ec=49/34 lis/c=90/56 les/c/f=91/57/0 sis=90) [1]/[0] async=[1] r=0 lpr=90 pi=[56,90)/1 crt=41'581 mlcod 41'581 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.043934 2 0.000138
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 91 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=90/91 n=6 ec=49/34 lis/c=90/56 les/c/f=91/57/0 sis=90) [1]/[0] async=[1] r=0 lpr=90 pi=[56,90)/1 crt=41'581 mlcod 41'581 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695565 data_alloc: 218103808 data_used: 98304
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 67928064 unmapped: 1220608 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:41.888201+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 55 sent 53 num 2 unsent 2 sending 2
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:29:11.077240+0000 osd.0 (osd.0) 54 : cluster [DBG] 2.f scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:29:11.091372+0000 osd.0 (osd.0) 55 : cluster [DBG] 2.f scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 91 handle_osd_map epochs [92,92], i have 91, src has [1,92]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 91 handle_osd_map epochs [92,92], i have 92, src has [1,92]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 55) v1
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:29:11.077240+0000 osd.0 (osd.0) 54 : cluster [DBG] 2.f scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:29:11.091372+0000 osd.0 (osd.0) 55 : cluster [DBG] 2.f scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 92 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=91) [0] r=0 lpr=91 pi=[67,91)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.908067 2 0.000060
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 92 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=91) [0] r=0 lpr=91 pi=[67,91)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.908278 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 92 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=91) [0] r=0 lpr=91 pi=[67,91)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.908308 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 92 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=91) [0] r=0 lpr=91 pi=[67,91)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 92 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=92) [0]/[2] r=-1 lpr=92 pi=[67,92)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 92 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=92) [0]/[2] r=-1 lpr=92 pi=[67,92)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000112 1 0.000172
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 92 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=92) [0]/[2] r=-1 lpr=92 pi=[67,92)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 92 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=92) [0]/[2] r=-1 lpr=92 pi=[67,92)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 92 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=92) [0]/[2] r=-1 lpr=92 pi=[67,92)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 92 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=92) [0]/[2] r=-1 lpr=92 pi=[67,92)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000012 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 92 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=92) [0]/[2] r=-1 lpr=92 pi=[67,92)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 92 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=90/91 n=6 ec=49/34 lis/c=90/56 les/c/f=91/57/0 sis=90) [1]/[0] async=[1] r=0 lpr=90 pi=[56,90)/1 crt=41'581 mlcod 41'581 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.758902 1 0.000102
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 92 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=90/91 n=6 ec=49/34 lis/c=90/56 les/c/f=91/57/0 sis=90) [1]/[0] async=[1] r=0 lpr=90 pi=[56,90)/1 crt=41'581 mlcod 41'581 active+remapped mbc={255={}}] exit Started/Primary/Active 1.130833 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 92 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=90/91 n=6 ec=49/34 lis/c=90/56 les/c/f=91/57/0 sis=90) [1]/[0] async=[1] r=0 lpr=90 pi=[56,90)/1 crt=41'581 mlcod 41'581 active+remapped mbc={255={}}] exit Started/Primary 2.133988 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 92 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=90/91 n=6 ec=49/34 lis/c=90/56 les/c/f=91/57/0 sis=90) [1]/[0] async=[1] r=0 lpr=90 pi=[56,90)/1 crt=41'581 mlcod 41'581 active+remapped mbc={255={}}] exit Started 2.134243 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 92 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=90/91 n=6 ec=49/34 lis/c=90/56 les/c/f=91/57/0 sis=90) [1]/[0] async=[1] r=0 lpr=90 pi=[56,90)/1 crt=41'581 mlcod 41'581 active+remapped mbc={255={}}] enter Reset
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.15] failed. State was: unregistering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 92 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=90/91 n=6 ec=49/34 lis/c=90/56 les/c/f=91/57/0 sis=92 pruub=15.195237160s) [1] async=[1] r=-1 lpr=92 pi=[56,92)/1 crt=41'581 mlcod 41'581 active pruub 156.293350220s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.15] failed. State was: unregistering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 92 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=90/91 n=6 ec=49/34 lis/c=90/56 les/c/f=91/57/0 sis=92 pruub=15.194863319s) [1] r=-1 lpr=92 pi=[56,92)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 156.293350220s@ mbc={}] exit Reset 0.000445 1 0.000657
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 92 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=90/91 n=6 ec=49/34 lis/c=90/56 les/c/f=91/57/0 sis=92 pruub=15.194863319s) [1] r=-1 lpr=92 pi=[56,92)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 156.293350220s@ mbc={}] enter Started
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 92 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=90/91 n=6 ec=49/34 lis/c=90/56 les/c/f=91/57/0 sis=92 pruub=15.194863319s) [1] r=-1 lpr=92 pi=[56,92)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 156.293350220s@ mbc={}] enter Start
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 92 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=90/91 n=6 ec=49/34 lis/c=90/56 les/c/f=91/57/0 sis=92 pruub=15.194863319s) [1] r=-1 lpr=92 pi=[56,92)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 156.293350220s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 92 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=90/91 n=6 ec=49/34 lis/c=90/56 les/c/f=91/57/0 sis=92 pruub=15.194863319s) [1] r=-1 lpr=92 pi=[56,92)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 156.293350220s@ mbc={}] exit Start 0.000281 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 92 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=90/91 n=6 ec=49/34 lis/c=90/56 les/c/f=91/57/0 sis=92 pruub=15.194863319s) [1] r=-1 lpr=92 pi=[56,92)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 156.293350220s@ mbc={}] enter Started/Stray
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.15] failed. State was: unregistering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 92 handle_osd_map epochs [92,92], i have 92, src has [1,92]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.15] failed. State was: unregistering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 67977216 unmapped: 1171456 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 92 heartbeat osd_stat(store_statfs(0x4fcf15000/0x0/0x4ffc00000, data 0x87d3d/0x118000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:42.888453+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _renew_subs
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 10.7 deep-scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 92 handle_osd_map epochs [93,93], i have 92, src has [1,93]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 93 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=90/91 n=6 ec=49/34 lis/c=90/56 les/c/f=91/57/0 sis=92) [1] r=-1 lpr=92 pi=[56,92)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.144721 6 0.000563
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 93 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=90/91 n=6 ec=49/34 lis/c=90/56 les/c/f=91/57/0 sis=92) [1] r=-1 lpr=92 pi=[56,92)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 93 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=90/91 n=6 ec=49/34 lis/c=90/56 les/c/f=91/57/0 sis=92) [1] r=-1 lpr=92 pi=[56,92)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Oct 11 04:55:16 compute-0 ceph-osd[87458]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 93 pg[9.16( v 41'581 lc 0'0 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=92) [0]/[2] r=-1 lpr=92 pi=[67,92)/1 crt=41'581 mlcod 0'0 remapped NOTIFY m=6 mbc={}] exit Started/Stray 1.147285 5 0.000074
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 93 pg[9.16( v 41'581 lc 0'0 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=92) [0]/[2] r=-1 lpr=92 pi=[67,92)/1 crt=41'581 mlcod 0'0 remapped NOTIFY m=6 mbc={}] enter Started/ReplicaActive
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 93 pg[9.16( v 41'581 lc 0'0 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=92) [0]/[2] r=-1 lpr=92 pi=[67,92)/1 crt=41'581 mlcod 0'0 remapped NOTIFY m=6 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 93 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=90/91 n=6 ec=49/34 lis/c=90/56 les/c/f=91/57/0 sis=92) [1] r=-1 lpr=92 pi=[56,92)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000134 1 0.000065
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 93 pg[9.15( v 41'581 (0'0,41'581] local-lis/les=90/91 n=6 ec=49/34 lis/c=90/56 les/c/f=91/57/0 sis=92) [1] r=-1 lpr=92 pi=[56,92)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.15] failed. State was: not registered w/ OSD
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: not registered w/ OSD
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 93 pg[9.16( v 41'581 lc 40'103 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=92/67 les/c/f=93/68/0 sis=92) [0]/[2] r=-1 lpr=92 pi=[67,92)/1 luod=0'0 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped m=6 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.008884 4 0.000144
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 93 pg[9.16( v 41'581 lc 40'103 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=92/67 les/c/f=93/68/0 sis=92) [0]/[2] r=-1 lpr=92 pi=[67,92)/1 luod=0'0 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped m=6 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 93 pg[9.16( v 41'581 lc 40'103 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=92/67 les/c/f=93/68/0 sis=92) [0]/[2] r=-1 lpr=92 pi=[67,92)/1 luod=0'0 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped m=6 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000042 1 0.000040
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 93 pg[9.16( v 41'581 lc 40'103 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=92/67 les/c/f=93/68/0 sis=92) [0]/[2] r=-1 lpr=92 pi=[67,92)/1 luod=0'0 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped m=6 mbc={}] enter Started/ReplicaActive/RepRecovering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 10.7 deep-scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 93 pg[9.15( v 41'581 (0'0,41'581] lb MIN local-lis/les=90/91 n=6 ec=49/34 lis/c=90/56 les/c/f=91/57/0 sis=92) [1] r=-1 lpr=92 DELETING pi=[56,92)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.057653 3 0.000225
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 93 pg[9.15( v 41'581 (0'0,41'581] lb MIN local-lis/les=90/91 n=6 ec=49/34 lis/c=90/56 les/c/f=91/57/0 sis=92) [1] r=-1 lpr=92 pi=[56,92)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.057842 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 93 pg[9.15( v 41'581 (0'0,41'581] lb MIN local-lis/les=90/91 n=6 ec=49/34 lis/c=90/56 les/c/f=91/57/0 sis=92) [1] r=-1 lpr=92 pi=[56,92)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.202969 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.15] failed. State was: not registered w/ OSD
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 93 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=92/67 les/c/f=93/68/0 sis=92) [0]/[2] r=-1 lpr=92 pi=[67,92)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.093747 1 0.000026
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 93 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=92/67 les/c/f=93/68/0 sis=92) [0]/[2] r=-1 lpr=92 pi=[67,92)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf15000/0x0/0x4ffc00000, data 0x87d3d/0x118000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 67862528 unmapped: 1286144 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:43.888587+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 57 sent 55 num 2 unsent 2 sending 2
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:29:13.089546+0000 osd.0 (osd.0) 56 : cluster [DBG] 10.7 deep-scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:29:13.103588+0000 osd.0 (osd.0) 57 : cluster [DBG] 10.7 deep-scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 93 handle_osd_map epochs [93,94], i have 93, src has [1,94]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 94 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=92/67 les/c/f=93/68/0 sis=92) [0]/[2] r=-1 lpr=92 pi=[67,92)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.781122 1 0.000084
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 94 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=92/67 les/c/f=93/68/0 sis=92) [0]/[2] r=-1 lpr=92 pi=[67,92)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 0.884017 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 94 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=92/67 les/c/f=93/68/0 sis=92) [0]/[2] r=-1 lpr=92 pi=[67,92)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] exit Started 2.031373 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 94 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=92/67 les/c/f=93/68/0 sis=92) [0]/[2] r=-1 lpr=92 pi=[67,92)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] enter Reset
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 94 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=92/67 les/c/f=93/68/0 sis=94) [0] r=0 lpr=94 pi=[67,94)/1 luod=0'0 crt=41'581 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 94 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=92/67 les/c/f=93/68/0 sis=94) [0] r=0 lpr=94 pi=[67,94)/1 crt=41'581 mlcod 0'0 unknown mbc={}] exit Reset 0.000511 1 0.000676
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 94 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=92/67 les/c/f=93/68/0 sis=94) [0] r=0 lpr=94 pi=[67,94)/1 crt=41'581 mlcod 0'0 unknown mbc={}] enter Started
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 94 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=92/67 les/c/f=93/68/0 sis=94) [0] r=0 lpr=94 pi=[67,94)/1 crt=41'581 mlcod 0'0 unknown mbc={}] enter Start
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 94 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=92/67 les/c/f=93/68/0 sis=94) [0] r=0 lpr=94 pi=[67,94)/1 crt=41'581 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 94 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=92/67 les/c/f=93/68/0 sis=94) [0] r=0 lpr=94 pi=[67,94)/1 crt=41'581 mlcod 0'0 unknown mbc={}] exit Start 0.000126 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 94 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=92/67 les/c/f=93/68/0 sis=94) [0] r=0 lpr=94 pi=[67,94)/1 crt=41'581 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 94 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=92/67 les/c/f=93/68/0 sis=94) [0] r=0 lpr=94 pi=[67,94)/1 crt=41'581 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 94 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=92/67 les/c/f=93/68/0 sis=94) [0] r=0 lpr=94 pi=[67,94)/1 crt=41'581 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 57) v1
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:29:13.089546+0000 osd.0 (osd.0) 56 : cluster [DBG] 10.7 deep-scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:29:13.103588+0000 osd.0 (osd.0) 57 : cluster [DBG] 10.7 deep-scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 94 handle_osd_map epochs [94,94], i have 94, src has [1,94]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 94 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=92/67 les/c/f=93/68/0 sis=94) [0] r=0 lpr=94 pi=[67,94)/1 crt=41'581 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.004072 2 0.000457
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 94 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=92/67 les/c/f=93/68/0 sis=94) [0] r=0 lpr=94 pi=[67,94)/1 crt=41'581 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 11 04:55:16 compute-0 ceph-osd[87458]: merge_log_dups log.dups.size()=0olog.dups.size()=21
Oct 11 04:55:16 compute-0 ceph-osd[87458]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=21
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 94 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=92/93 n=6 ec=49/34 lis/c=92/67 les/c/f=93/68/0 sis=94) [0] r=0 lpr=94 pi=[67,94)/1 crt=41'581 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001178 2 0.000093
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 94 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=92/93 n=6 ec=49/34 lis/c=92/67 les/c/f=93/68/0 sis=94) [0] r=0 lpr=94 pi=[67,94)/1 crt=41'581 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 94 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=92/93 n=6 ec=49/34 lis/c=92/67 les/c/f=93/68/0 sis=94) [0] r=0 lpr=94 pi=[67,94)/1 crt=41'581 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 94 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=92/93 n=6 ec=49/34 lis/c=92/67 les/c/f=93/68/0 sis=94) [0] r=0 lpr=94 pi=[67,94)/1 crt=41'581 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 67887104 unmapped: 1261568 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:44.888778+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 94 handle_osd_map epochs [94,95], i have 94, src has [1,95]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 94 heartbeat osd_stat(store_statfs(0x4fcf0d000/0x0/0x4ffc00000, data 0x8b1d9/0x11e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 94 handle_osd_map epochs [94,95], i have 95, src has [1,95]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 95 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=92/93 n=6 ec=49/34 lis/c=92/67 les/c/f=93/68/0 sis=94) [0] r=0 lpr=94 pi=[67,94)/1 crt=41'581 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.004982 2 0.000052
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 95 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=92/93 n=6 ec=49/34 lis/c=92/67 les/c/f=93/68/0 sis=94) [0] r=0 lpr=94 pi=[67,94)/1 crt=41'581 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.010347 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 95 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=92/93 n=6 ec=49/34 lis/c=92/67 les/c/f=93/68/0 sis=94) [0] r=0 lpr=94 pi=[67,94)/1 crt=41'581 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 95 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=94/95 n=6 ec=49/34 lis/c=92/67 les/c/f=93/68/0 sis=94) [0] r=0 lpr=94 pi=[67,94)/1 crt=41'581 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 95 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=94/95 n=6 ec=49/34 lis/c=92/67 les/c/f=93/68/0 sis=94) [0] r=0 lpr=94 pi=[67,94)/1 crt=41'581 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 95 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=94/95 n=6 ec=49/34 lis/c=94/67 les/c/f=95/68/0 sis=94) [0] r=0 lpr=94 pi=[67,94)/1 crt=41'581 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.013896 3 0.000420
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 95 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=94/95 n=6 ec=49/34 lis/c=94/67 les/c/f=95/68/0 sis=94) [0] r=0 lpr=94 pi=[67,94)/1 crt=41'581 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 95 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=94/95 n=6 ec=49/34 lis/c=94/67 les/c/f=95/68/0 sis=94) [0] r=0 lpr=94 pi=[67,94)/1 crt=41'581 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000020 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 95 pg[9.16( v 41'581 (0'0,41'581] local-lis/les=94/95 n=6 ec=49/34 lis/c=94/67 les/c/f=95/68/0 sis=94) [0] r=0 lpr=94 pi=[67,94)/1 crt=41'581 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 67977216 unmapped: 1171456 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 95 handle_osd_map epochs [95,95], i have 95, src has [1,95]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:45.888934+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 707225 data_alloc: 218103808 data_used: 94208
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 67977216 unmapped: 1171456 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:46.889083+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 1146880 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 95 heartbeat osd_stat(store_statfs(0x4fcf0c000/0x0/0x4ffc00000, data 0x8cc0c/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:47.889223+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 95 heartbeat osd_stat(store_statfs(0x4fcf0c000/0x0/0x4ffc00000, data 0x8cc0c/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 95 handle_osd_map epochs [96,96], i have 95, src has [1,96]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 1146880 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:48.889412+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 59 sent 57 num 2 unsent 2 sending 2
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:29:18.100700+0000 osd.0 (osd.0) 58 : cluster [DBG] 5.5 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:29:18.114847+0000 osd.0 (osd.0) 59 : cluster [DBG] 5.5 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 59) v1
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:29:18.100700+0000 osd.0 (osd.0) 58 : cluster [DBG] 5.5 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:29:18.114847+0000 osd.0 (osd.0) 59 : cluster [DBG] 5.5 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 68009984 unmapped: 1138688 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:49.889618+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 68009984 unmapped: 1138688 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:50.889792+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 96 handle_osd_map epochs [97,97], i have 96, src has [1,97]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.074181557s of 10.221228600s, submitted: 93
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 715784 data_alloc: 218103808 data_used: 102400
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 68026368 unmapped: 1122304 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:51.889944+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 61 sent 59 num 2 unsent 2 sending 2
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:29:21.127553+0000 osd.0 (osd.0) 60 : cluster [DBG] 2.1f scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:29:21.141172+0000 osd.0 (osd.0) 61 : cluster [DBG] 2.1f scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 61) v1
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:29:21.127553+0000 osd.0 (osd.0) 60 : cluster [DBG] 2.1f scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:29:21.141172+0000 osd.0 (osd.0) 61 : cluster [DBG] 2.1f scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 1114112 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:52.890176+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 97 heartbeat osd_stat(store_statfs(0x4fcf06000/0x0/0x4ffc00000, data 0x90306/0x127000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 97 handle_osd_map epochs [98,98], i have 97, src has [1,98]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 98 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=57) [0] r=0 lpr=57 crt=41'581 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 64.981704 120 0.000578
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 98 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=57) [0] r=0 lpr=57 crt=41'581 mlcod 0'0 active mbc={}] exit Started/Primary/Active 64.990804 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 98 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=57) [0] r=0 lpr=57 crt=41'581 mlcod 0'0 active mbc={}] exit Started/Primary 66.004638 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 98 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=57) [0] r=0 lpr=57 crt=41'581 mlcod 0'0 active mbc={}] exit Started 66.004864 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 98 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=57) [0] r=0 lpr=57 crt=41'581 mlcod 0'0 active mbc={}] enter Reset
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.19] failed. State was: unregistering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 98 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=98 pruub=15.018722534s) [2] r=-1 lpr=98 pi=[57,98)/1 crt=41'581 mlcod 0'0 active pruub 167.257934570s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.19] failed. State was: unregistering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 98 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=98 pruub=15.018464088s) [2] r=-1 lpr=98 pi=[57,98)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 167.257934570s@ mbc={}] exit Reset 0.000331 1 0.000720
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 98 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=98 pruub=15.018464088s) [2] r=-1 lpr=98 pi=[57,98)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 167.257934570s@ mbc={}] enter Started
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 98 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=98 pruub=15.018464088s) [2] r=-1 lpr=98 pi=[57,98)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 167.257934570s@ mbc={}] enter Start
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 98 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=98 pruub=15.018464088s) [2] r=-1 lpr=98 pi=[57,98)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 167.257934570s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 98 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=98 pruub=15.018464088s) [2] r=-1 lpr=98 pi=[57,98)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 167.257934570s@ mbc={}] exit Start 0.000097 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 98 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=98 pruub=15.018464088s) [2] r=-1 lpr=98 pi=[57,98)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 167.257934570s@ mbc={}] enter Started/Stray
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.19] failed. State was: unregistering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 98 handle_osd_map epochs [98,98], i have 98, src has [1,98]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 98 handle_osd_map epochs [98,99], i have 98, src has [1,99]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 99 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=98) [2] r=-1 lpr=98 pi=[57,98)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.124323 3 0.000307
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 99 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=98) [2] r=-1 lpr=98 pi=[57,98)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.124521 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 99 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=98) [2] r=-1 lpr=98 pi=[57,98)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 99 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=99) [2]/[0] r=0 lpr=99 pi=[57,99)/1 crt=41'581 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 99 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=99) [2]/[0] r=0 lpr=99 pi=[57,99)/1 crt=41'581 mlcod 0'0 remapped mbc={}] exit Reset 0.000103 1 0.000144
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 99 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=99) [2]/[0] r=0 lpr=99 pi=[57,99)/1 crt=41'581 mlcod 0'0 remapped mbc={}] enter Started
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 99 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=99) [2]/[0] r=0 lpr=99 pi=[57,99)/1 crt=41'581 mlcod 0'0 remapped mbc={}] enter Start
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 99 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=99) [2]/[0] r=0 lpr=99 pi=[57,99)/1 crt=41'581 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 99 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=99) [2]/[0] r=0 lpr=99 pi=[57,99)/1 crt=41'581 mlcod 0'0 remapped mbc={}] exit Start 0.000012 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 99 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=99) [2]/[0] r=0 lpr=99 pi=[57,99)/1 crt=41'581 mlcod 0'0 remapped mbc={}] enter Started/Primary
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 99 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=99) [2]/[0] r=0 lpr=99 pi=[57,99)/1 crt=41'581 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 99 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=99) [2]/[0] r=0 lpr=99 pi=[57,99)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 99 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=99) [2]/[0] r=0 lpr=99 pi=[57,99)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.003187 2 0.000066
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 99 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=99) [2]/[0] r=0 lpr=99 pi=[57,99)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 99 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=99) [2]/[0] async=[2] r=0 lpr=99 pi=[57,99)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000043 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 99 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=99) [2]/[0] async=[2] r=0 lpr=99 pi=[57,99)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 99 handle_osd_map epochs [99,99], i have 99, src has [1,99]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 99 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=99) [2]/[0] async=[2] r=0 lpr=99 pi=[57,99)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000019 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 99 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=99) [2]/[0] async=[2] r=0 lpr=99 pi=[57,99)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 49152 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:53.890379+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 63 sent 61 num 2 unsent 2 sending 2
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:29:23.123796+0000 osd.0 (osd.0) 62 : cluster [DBG] 5.2 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:29:23.137927+0000 osd.0 (osd.0) 63 : cluster [DBG] 5.2 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 10.17 deep-scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 10.17 deep-scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 99 handle_osd_map epochs [99,100], i have 99, src has [1,100]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 99 handle_osd_map epochs [99,100], i have 100, src has [1,100]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 100 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=99) [2]/[0] async=[2] r=0 lpr=99 pi=[57,99)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.005397 3 0.000138
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 100 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=99) [2]/[0] async=[2] r=0 lpr=99 pi=[57,99)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.008765 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 100 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=99) [2]/[0] async=[2] r=0 lpr=99 pi=[57,99)/1 crt=41'581 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 100 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=99/100 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=99) [2]/[0] async=[2] r=0 lpr=99 pi=[57,99)/1 crt=41'581 mlcod 0'0 activating+remapped mbc={255={(0+1)=11}}] enter Started/Primary/Active/Activating
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 63) v1
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:29:23.123796+0000 osd.0 (osd.0) 62 : cluster [DBG] 5.2 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:29:23.137927+0000 osd.0 (osd.0) 63 : cluster [DBG] 5.2 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 40960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 100 handle_osd_map epochs [100,100], i have 100, src has [1,100]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 100 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=99/100 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=99) [2]/[0] async=[2] r=0 lpr=99 pi=[57,99)/1 crt=41'581 mlcod 0'0 active+remapped mbc={255={(0+1)=11}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 100 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=99/100 n=6 ec=49/34 lis/c=99/57 les/c/f=100/58/0 sis=99) [2]/[0] async=[2] r=0 lpr=99 pi=[57,99)/1 crt=41'581 mlcod 0'0 active+remapped mbc={255={(0+1)=11}}] exit Started/Primary/Active/Activating 0.370202 5 0.000441
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 100 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=99/100 n=6 ec=49/34 lis/c=99/57 les/c/f=100/58/0 sis=99) [2]/[0] async=[2] r=0 lpr=99 pi=[57,99)/1 crt=41'581 mlcod 0'0 active+remapped mbc={255={(0+1)=11}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 100 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=99/100 n=6 ec=49/34 lis/c=99/57 les/c/f=100/58/0 sis=99) [2]/[0] async=[2] r=0 lpr=99 pi=[57,99)/1 crt=41'581 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=11}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000104 1 0.000034
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 100 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=99/100 n=6 ec=49/34 lis/c=99/57 les/c/f=100/58/0 sis=99) [2]/[0] async=[2] r=0 lpr=99 pi=[57,99)/1 crt=41'581 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=11}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 100 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=99/100 n=6 ec=49/34 lis/c=99/57 les/c/f=100/58/0 sis=99) [2]/[0] async=[2] r=0 lpr=99 pi=[57,99)/1 crt=41'581 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=11}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.001020 1 0.000144
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 100 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=99/100 n=6 ec=49/34 lis/c=99/57 les/c/f=100/58/0 sis=99) [2]/[0] async=[2] r=0 lpr=99 pi=[57,99)/1 crt=41'581 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=11}}] enter Started/Primary/Active/Recovering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 100 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=99/100 n=6 ec=49/34 lis/c=99/57 les/c/f=100/58/0 sis=99) [2]/[0] async=[2] r=0 lpr=99 pi=[57,99)/1 crt=41'581 mlcod 41'581 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.093551 2 0.000033
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 100 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=99/100 n=6 ec=49/34 lis/c=99/57 les/c/f=100/58/0 sis=99) [2]/[0] async=[2] r=0 lpr=99 pi=[57,99)/1 crt=41'581 mlcod 41'581 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 100 heartbeat osd_stat(store_statfs(0x4fcefe000/0x0/0x4ffc00000, data 0x938e8/0x12d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 100 handle_osd_map epochs [101,101], i have 100, src has [1,101]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 101 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=99/100 n=6 ec=49/34 lis/c=99/57 les/c/f=100/58/0 sis=99) [2]/[0] async=[2] r=0 lpr=99 pi=[57,99)/1 crt=41'581 mlcod 41'581 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.132056 1 0.000115
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 101 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=99/100 n=6 ec=49/34 lis/c=99/57 les/c/f=100/58/0 sis=99) [2]/[0] async=[2] r=0 lpr=99 pi=[57,99)/1 crt=41'581 mlcod 41'581 active+remapped mbc={255={}}] exit Started/Primary/Active 0.597315 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 101 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=99/100 n=6 ec=49/34 lis/c=99/57 les/c/f=100/58/0 sis=99) [2]/[0] async=[2] r=0 lpr=99 pi=[57,99)/1 crt=41'581 mlcod 41'581 active+remapped mbc={255={}}] exit Started/Primary 1.606107 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 101 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=99/100 n=6 ec=49/34 lis/c=99/57 les/c/f=100/58/0 sis=99) [2]/[0] async=[2] r=0 lpr=99 pi=[57,99)/1 crt=41'581 mlcod 41'581 active+remapped mbc={255={}}] exit Started 1.606143 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 101 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=99/100 n=6 ec=49/34 lis/c=99/57 les/c/f=100/58/0 sis=99) [2]/[0] async=[2] r=0 lpr=99 pi=[57,99)/1 crt=41'581 mlcod 41'581 active+remapped mbc={255={}}] enter Reset
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.19] failed. State was: unregistering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 101 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=99/100 n=6 ec=49/34 lis/c=99/57 les/c/f=100/58/0 sis=101 pruub=15.772900581s) [2] async=[2] r=-1 lpr=101 pi=[57,101)/1 crt=41'581 mlcod 41'581 active pruub 169.743255615s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.19] failed. State was: unregistering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 101 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=99/100 n=6 ec=49/34 lis/c=99/57 les/c/f=100/58/0 sis=101 pruub=15.772796631s) [2] r=-1 lpr=101 pi=[57,101)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 169.743255615s@ mbc={}] exit Reset 0.000153 1 0.000214
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 101 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=99/100 n=6 ec=49/34 lis/c=99/57 les/c/f=100/58/0 sis=101 pruub=15.772796631s) [2] r=-1 lpr=101 pi=[57,101)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 169.743255615s@ mbc={}] enter Started
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 101 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=99/100 n=6 ec=49/34 lis/c=99/57 les/c/f=100/58/0 sis=101 pruub=15.772796631s) [2] r=-1 lpr=101 pi=[57,101)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 169.743255615s@ mbc={}] enter Start
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 101 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=99/100 n=6 ec=49/34 lis/c=99/57 les/c/f=100/58/0 sis=101 pruub=15.772796631s) [2] r=-1 lpr=101 pi=[57,101)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 169.743255615s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 101 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=99/100 n=6 ec=49/34 lis/c=99/57 les/c/f=100/58/0 sis=101 pruub=15.772796631s) [2] r=-1 lpr=101 pi=[57,101)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 169.743255615s@ mbc={}] exit Start 0.000016 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 101 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=99/100 n=6 ec=49/34 lis/c=99/57 les/c/f=100/58/0 sis=101 pruub=15.772796631s) [2] r=-1 lpr=101 pi=[57,101)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 169.743255615s@ mbc={}] enter Started/Stray
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.19] failed. State was: unregistering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 101 handle_osd_map epochs [101,101], i have 101, src has [1,101]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.19] failed. State was: unregistering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:54.890593+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 65 sent 63 num 2 unsent 2 sending 2
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:29:24.122696+0000 osd.0 (osd.0) 64 : cluster [DBG] 10.17 deep-scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:29:24.136776+0000 osd.0 (osd.0) 65 : cluster [DBG] 10.17 deep-scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 65) v1
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:29:24.122696+0000 osd.0 (osd.0) 64 : cluster [DBG] 10.17 deep-scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:29:24.136776+0000 osd.0 (osd.0) 65 : cluster [DBG] 10.17 deep-scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69132288 unmapped: 16384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 101 handle_osd_map epochs [101,102], i have 101, src has [1,102]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 102 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=99/100 n=6 ec=49/34 lis/c=99/57 les/c/f=100/58/0 sis=101) [2] r=-1 lpr=101 pi=[57,101)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.017691 7 0.000132
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 102 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=99/100 n=6 ec=49/34 lis/c=99/57 les/c/f=100/58/0 sis=101) [2] r=-1 lpr=101 pi=[57,101)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 102 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=99/100 n=6 ec=49/34 lis/c=99/57 les/c/f=100/58/0 sis=101) [2] r=-1 lpr=101 pi=[57,101)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 102 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=99/100 n=6 ec=49/34 lis/c=99/57 les/c/f=100/58/0 sis=101) [2] r=-1 lpr=101 pi=[57,101)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000086 1 0.000045
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 102 pg[9.19( v 41'581 (0'0,41'581] local-lis/les=99/100 n=6 ec=49/34 lis/c=99/57 les/c/f=100/58/0 sis=101) [2] r=-1 lpr=101 pi=[57,101)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.19] failed. State was: unregistering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:55.890786+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 102 pg[9.19( v 41'581 (0'0,41'581] lb MIN local-lis/les=99/100 n=6 ec=49/34 lis/c=99/57 les/c/f=100/58/0 sis=101) [2] r=-1 lpr=101 DELETING pi=[57,101)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.084776 2 0.000213
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 102 pg[9.19( v 41'581 (0'0,41'581] lb MIN local-lis/les=99/100 n=6 ec=49/34 lis/c=99/57 les/c/f=100/58/0 sis=101) [2] r=-1 lpr=101 pi=[57,101)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.084924 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 102 pg[9.19( v 41'581 (0'0,41'581] lb MIN local-lis/les=99/100 n=6 ec=49/34 lis/c=99/57 les/c/f=100/58/0 sis=101) [2] r=-1 lpr=101 pi=[57,101)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.102671 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.19] failed. State was: unregistering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 10.d scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 10.d scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 717755 data_alloc: 218103808 data_used: 110592
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 8192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x55661220e800
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:56.905318+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 67 sent 65 num 2 unsent 2 sending 2
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:29:26.162166+0000 osd.0 (osd.0) 66 : cluster [DBG] 10.d scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:29:26.179848+0000 osd.0 (osd.0) 67 : cluster [DBG] 10.d scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 102 heartbeat osd_stat(store_statfs(0x4fcef6000/0x0/0x4ffc00000, data 0x98893/0x135000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 102 handle_osd_map epochs [103,103], i have 102, src has [1,103]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69173248 unmapped: 1024000 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 67) v1
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:29:26.162166+0000 osd.0 (osd.0) 66 : cluster [DBG] 10.d scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:29:26.179848+0000 osd.0 (osd.0) 67 : cluster [DBG] 10.d scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:57.905522+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 69 sent 67 num 2 unsent 2 sending 2
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:29:27.158749+0000 osd.0 (osd.0) 68 : cluster [DBG] 5.3 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:29:27.172933+0000 osd.0 (osd.0) 69 : cluster [DBG] 5.3 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 2.b scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 2.b scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 103 heartbeat osd_stat(store_statfs(0x4fcef5000/0x0/0x4ffc00000, data 0x9a410/0x138000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69189632 unmapped: 1007616 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 69) v1
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:29:27.158749+0000 osd.0 (osd.0) 68 : cluster [DBG] 5.3 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:29:27.172933+0000 osd.0 (osd.0) 69 : cluster [DBG] 5.3 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:58.905695+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 71 sent 69 num 2 unsent 2 sending 2
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:29:28.150835+0000 osd.0 (osd.0) 70 : cluster [DBG] 2.b scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:29:28.165009+0000 osd.0 (osd.0) 71 : cluster [DBG] 2.b scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69189632 unmapped: 1007616 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 103 handle_osd_map epochs [104,104], i have 103, src has [1,104]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 104 pg[9.1c(unlocked)] enter Initial
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 104 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=79/79 les/c/f=80/80/0 sis=104) [0] r=0 lpr=0 pi=[79,104)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000133 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 104 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=79/79 les/c/f=80/80/0 sis=104) [0] r=0 lpr=0 pi=[79,104)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 104 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=79/79 les/c/f=80/80/0 sis=104) [0] r=0 lpr=104 pi=[79,104)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000048 1 0.000095
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 104 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=79/79 les/c/f=80/80/0 sis=104) [0] r=0 lpr=104 pi=[79,104)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 104 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=79/79 les/c/f=80/80/0 sis=104) [0] r=0 lpr=104 pi=[79,104)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 104 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=79/79 les/c/f=80/80/0 sis=104) [0] r=0 lpr=104 pi=[79,104)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 104 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=79/79 les/c/f=80/80/0 sis=104) [0] r=0 lpr=104 pi=[79,104)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000174 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 104 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=79/79 les/c/f=80/80/0 sis=104) [0] r=0 lpr=104 pi=[79,104)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 104 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=79/79 les/c/f=80/80/0 sis=104) [0] r=0 lpr=104 pi=[79,104)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 104 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=79/79 les/c/f=80/80/0 sis=104) [0] r=0 lpr=104 pi=[79,104)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 104 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=79/79 les/c/f=80/80/0 sis=104) [0] r=0 lpr=104 pi=[79,104)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000214 1 0.000442
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 104 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=79/79 les/c/f=80/80/0 sis=104) [0] r=0 lpr=104 pi=[79,104)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 104 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=79/79 les/c/f=80/80/0 sis=104) [0] r=0 lpr=104 pi=[79,104)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000105 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 104 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=79/79 les/c/f=80/80/0 sis=104) [0] r=0 lpr=104 pi=[79,104)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000490 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 104 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=79/79 les/c/f=80/80/0 sis=104) [0] r=0 lpr=104 pi=[79,104)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 104 handle_osd_map epochs [104,105], i have 104, src has [1,105]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 105 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=79/79 les/c/f=80/80/0 sis=104) [0] r=0 lpr=104 pi=[79,104)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.071327 2 0.000352
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 105 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=79/79 les/c/f=80/80/0 sis=104) [0] r=0 lpr=104 pi=[79,104)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.071978 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 105 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=79/79 les/c/f=80/80/0 sis=104) [0] r=0 lpr=104 pi=[79,104)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.072302 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 105 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=79/79 les/c/f=80/80/0 sis=104) [0] r=0 lpr=104 pi=[79,104)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 105 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=79/79 les/c/f=80/80/0 sis=105) [0]/[2] r=-1 lpr=105 pi=[79,105)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 105 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=79/79 les/c/f=80/80/0 sis=105) [0]/[2] r=-1 lpr=105 pi=[79,105)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000471 1 0.000612
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 105 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=79/79 les/c/f=80/80/0 sis=105) [0]/[2] r=-1 lpr=105 pi=[79,105)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 105 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=79/79 les/c/f=80/80/0 sis=105) [0]/[2] r=-1 lpr=105 pi=[79,105)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 105 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=79/79 les/c/f=80/80/0 sis=105) [0]/[2] r=-1 lpr=105 pi=[79,105)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 105 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=79/79 les/c/f=80/80/0 sis=105) [0]/[2] r=-1 lpr=105 pi=[79,105)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000124 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 105 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=79/79 les/c/f=80/80/0 sis=105) [0]/[2] r=-1 lpr=105 pi=[79,105)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 105 handle_osd_map epochs [105,105], i have 105, src has [1,105]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 71) v1
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:29:28.150835+0000 osd.0 (osd.0) 70 : cluster [DBG] 2.b scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:29:28.165009+0000 osd.0 (osd.0) 71 : cluster [DBG] 2.b scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:59.905901+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69197824 unmapped: 999424 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 105 handle_osd_map epochs [105,106], i have 105, src has [1,106]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 106 pg[9.1c( v 41'581 lc 0'0 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=79/79 les/c/f=80/80/0 sis=105) [0]/[2] r=-1 lpr=105 pi=[79,105)/1 crt=41'581 mlcod 0'0 remapped NOTIFY m=7 mbc={}] exit Started/Stray 1.004045 6 0.000320
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 106 pg[9.1c( v 41'581 lc 0'0 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=79/79 les/c/f=80/80/0 sis=105) [0]/[2] r=-1 lpr=105 pi=[79,105)/1 crt=41'581 mlcod 0'0 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 106 pg[9.1c( v 41'581 lc 0'0 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=79/79 les/c/f=80/80/0 sis=105) [0]/[2] r=-1 lpr=105 pi=[79,105)/1 crt=41'581 mlcod 0'0 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 106 pg[9.1c( v 41'581 lc 40'221 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=105/79 les/c/f=106/80/0 sis=105) [0]/[2] r=-1 lpr=105 pi=[79,105)/1 luod=0'0 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.006140 3 0.000149
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 106 pg[9.1c( v 41'581 lc 40'221 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=105/79 les/c/f=106/80/0 sis=105) [0]/[2] r=-1 lpr=105 pi=[79,105)/1 luod=0'0 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 106 pg[9.1c( v 41'581 lc 40'221 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=105/79 les/c/f=106/80/0 sis=105) [0]/[2] r=-1 lpr=105 pi=[79,105)/1 luod=0'0 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000073 1 0.000044
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 106 pg[9.1c( v 41'581 lc 40'221 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=105/79 les/c/f=106/80/0 sis=105) [0]/[2] r=-1 lpr=105 pi=[79,105)/1 luod=0'0 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepRecovering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 106 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=105/79 les/c/f=106/80/0 sis=105) [0]/[2] r=-1 lpr=105 pi=[79,105)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.064274 1 0.000025
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 106 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=105/79 les/c/f=106/80/0 sis=105) [0]/[2] r=-1 lpr=105 pi=[79,105)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:00.906074+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.957366943s of 10.183434486s, submitted: 62
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 745369 data_alloc: 218103808 data_used: 110592
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69222400 unmapped: 974848 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 106 handle_osd_map epochs [106,107], i have 106, src has [1,107]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 107 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=105/79 les/c/f=106/80/0 sis=105) [0]/[2] r=-1 lpr=105 pi=[79,105)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.991599 1 0.000076
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 107 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=105/79 les/c/f=106/80/0 sis=105) [0]/[2] r=-1 lpr=105 pi=[79,105)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.062250 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 107 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=105/79 les/c/f=106/80/0 sis=105) [0]/[2] r=-1 lpr=105 pi=[79,105)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] exit Started 2.066520 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 107 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=105/79 les/c/f=106/80/0 sis=105) [0]/[2] r=-1 lpr=105 pi=[79,105)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] enter Reset
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 107 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=105/79 les/c/f=106/80/0 sis=107) [0] r=0 lpr=107 pi=[79,107)/1 luod=0'0 crt=41'581 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 107 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=105/79 les/c/f=106/80/0 sis=107) [0] r=0 lpr=107 pi=[79,107)/1 crt=41'581 mlcod 0'0 unknown mbc={}] exit Reset 0.000282 1 0.000437
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 107 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=105/79 les/c/f=106/80/0 sis=107) [0] r=0 lpr=107 pi=[79,107)/1 crt=41'581 mlcod 0'0 unknown mbc={}] enter Started
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 107 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=105/79 les/c/f=106/80/0 sis=107) [0] r=0 lpr=107 pi=[79,107)/1 crt=41'581 mlcod 0'0 unknown mbc={}] enter Start
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 107 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=105/79 les/c/f=106/80/0 sis=107) [0] r=0 lpr=107 pi=[79,107)/1 crt=41'581 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 107 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=105/79 les/c/f=106/80/0 sis=107) [0] r=0 lpr=107 pi=[79,107)/1 crt=41'581 mlcod 0'0 unknown mbc={}] exit Start 0.000099 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 107 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=105/79 les/c/f=106/80/0 sis=107) [0] r=0 lpr=107 pi=[79,107)/1 crt=41'581 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 107 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=105/79 les/c/f=106/80/0 sis=107) [0] r=0 lpr=107 pi=[79,107)/1 crt=41'581 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 107 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=105/79 les/c/f=106/80/0 sis=107) [0] r=0 lpr=107 pi=[79,107)/1 crt=41'581 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 107 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=105/79 les/c/f=106/80/0 sis=107) [0] r=0 lpr=107 pi=[79,107)/1 crt=41'581 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.002027 2 0.000312
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 107 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=105/79 les/c/f=106/80/0 sis=107) [0] r=0 lpr=107 pi=[79,107)/1 crt=41'581 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 107 handle_osd_map epochs [107,107], i have 107, src has [1,107]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: merge_log_dups log.dups.size()=0olog.dups.size()=15
Oct 11 04:55:16 compute-0 ceph-osd[87458]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=15
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 107 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=105/106 n=6 ec=49/34 lis/c=105/79 les/c/f=106/80/0 sis=107) [0] r=0 lpr=107 pi=[79,107)/1 crt=41'581 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000643 2 0.000131
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 107 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=105/106 n=6 ec=49/34 lis/c=105/79 les/c/f=106/80/0 sis=107) [0] r=0 lpr=107 pi=[79,107)/1 crt=41'581 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 107 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=105/106 n=6 ec=49/34 lis/c=105/79 les/c/f=106/80/0 sis=107) [0] r=0 lpr=107 pi=[79,107)/1 crt=41'581 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000014 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 107 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=105/106 n=6 ec=49/34 lis/c=105/79 les/c/f=106/80/0 sis=107) [0] r=0 lpr=107 pi=[79,107)/1 crt=41'581 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:01.906244+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 73 sent 71 num 2 unsent 2 sending 2
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:29:31.214037+0000 osd.0 (osd.0) 72 : cluster [DBG] 2.8 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:29:31.228132+0000 osd.0 (osd.0) 73 : cluster [DBG] 2.8 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 107 heartbeat osd_stat(store_statfs(0x4fcee5000/0x0/0x4ffc00000, data 0xa1046/0x145000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69246976 unmapped: 950272 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 107 handle_osd_map epochs [107,108], i have 107, src has [1,108]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 107 handle_osd_map epochs [108,108], i have 108, src has [1,108]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 108 pg[9.1e(unlocked)] enter Initial
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 108 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=108) [0] r=0 lpr=0 pi=[67,108)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000106 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 108 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=108) [0] r=0 lpr=0 pi=[67,108)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 108 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=108) [0] r=0 lpr=108 pi=[67,108)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000036 1 0.000062
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 108 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=108) [0] r=0 lpr=108 pi=[67,108)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 108 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=108) [0] r=0 lpr=108 pi=[67,108)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 108 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=108) [0] r=0 lpr=108 pi=[67,108)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 108 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=108) [0] r=0 lpr=108 pi=[67,108)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000013 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 108 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=108) [0] r=0 lpr=108 pi=[67,108)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 108 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=108) [0] r=0 lpr=108 pi=[67,108)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 108 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=108) [0] r=0 lpr=108 pi=[67,108)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 108 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=108) [0] r=0 lpr=108 pi=[67,108)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000179 1 0.000071
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 108 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=108) [0] r=0 lpr=108 pi=[67,108)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 108 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=108) [0] r=0 lpr=108 pi=[67,108)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000047 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 108 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=108) [0] r=0 lpr=108 pi=[67,108)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000263 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 108 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=108) [0] r=0 lpr=108 pi=[67,108)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 108 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=105/106 n=6 ec=49/34 lis/c=105/79 les/c/f=106/80/0 sis=107) [0] r=0 lpr=107 pi=[79,107)/1 crt=41'581 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.007542 2 0.000308
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 108 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=105/106 n=6 ec=49/34 lis/c=105/79 les/c/f=106/80/0 sis=107) [0] r=0 lpr=107 pi=[79,107)/1 crt=41'581 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.010608 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 108 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=105/106 n=6 ec=49/34 lis/c=105/79 les/c/f=106/80/0 sis=107) [0] r=0 lpr=107 pi=[79,107)/1 crt=41'581 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 108 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=107/108 n=6 ec=49/34 lis/c=105/79 les/c/f=106/80/0 sis=107) [0] r=0 lpr=107 pi=[79,107)/1 crt=41'581 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:02.906457+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 73) v1
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:29:31.214037+0000 osd.0 (osd.0) 72 : cluster [DBG] 2.8 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:29:31.228132+0000 osd.0 (osd.0) 73 : cluster [DBG] 2.8 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 108 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=107/108 n=6 ec=49/34 lis/c=105/79 les/c/f=106/80/0 sis=107) [0] r=0 lpr=107 pi=[79,107)/1 crt=41'581 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 108 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=107/108 n=6 ec=49/34 lis/c=107/79 les/c/f=108/80/0 sis=107) [0] r=0 lpr=107 pi=[79,107)/1 crt=41'581 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.015078 4 0.000273
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 108 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=107/108 n=6 ec=49/34 lis/c=107/79 les/c/f=108/80/0 sis=107) [0] r=0 lpr=107 pi=[79,107)/1 crt=41'581 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 108 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=107/108 n=6 ec=49/34 lis/c=107/79 les/c/f=108/80/0 sis=107) [0] r=0 lpr=107 pi=[79,107)/1 crt=41'581 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000043 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 108 pg[9.1c( v 41'581 (0'0,41'581] local-lis/les=107/108 n=6 ec=49/34 lis/c=107/79 les/c/f=108/80/0 sis=107) [0] r=0 lpr=107 pi=[79,107)/1 crt=41'581 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69328896 unmapped: 868352 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:03.906561+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 108 handle_osd_map epochs [108,109], i have 108, src has [1,109]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 108 handle_osd_map epochs [109,109], i have 109, src has [1,109]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 109 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=108) [0] r=0 lpr=108 pi=[67,108)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 1.014332 2 0.000102
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 109 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=108) [0] r=0 lpr=108 pi=[67,108)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 1.014714 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 109 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=108) [0] r=0 lpr=108 pi=[67,108)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 1.014755 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 109 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=108) [0] r=0 lpr=108 pi=[67,108)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 109 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=109) [0]/[2] r=-1 lpr=109 pi=[67,109)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 109 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=109) [0]/[2] r=-1 lpr=109 pi=[67,109)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000094 1 0.000222
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 109 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=109) [0]/[2] r=-1 lpr=109 pi=[67,109)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 109 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=109) [0]/[2] r=-1 lpr=109 pi=[67,109)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 109 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=109) [0]/[2] r=-1 lpr=109 pi=[67,109)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 109 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=109) [0]/[2] r=-1 lpr=109 pi=[67,109)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000010 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 109 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=109) [0]/[2] r=-1 lpr=109 pi=[67,109)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 109 handle_osd_map epochs [109,109], i have 109, src has [1,109]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69337088 unmapped: 1908736 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:04.906786+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _renew_subs
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 109 handle_osd_map epochs [110,110], i have 109, src has [1,110]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 110 pg[9.1e( v 41'581 lc 0'0 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=109) [0]/[2] r=-1 lpr=109 pi=[67,109)/1 crt=41'581 mlcod 0'0 remapped NOTIFY m=6 mbc={}] exit Started/Stray 1.278596 5 0.000104
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 110 pg[9.1e( v 41'581 lc 0'0 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=109) [0]/[2] r=-1 lpr=109 pi=[67,109)/1 crt=41'581 mlcod 0'0 remapped NOTIFY m=6 mbc={}] enter Started/ReplicaActive
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 110 pg[9.1e( v 41'581 lc 0'0 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=109) [0]/[2] r=-1 lpr=109 pi=[67,109)/1 crt=41'581 mlcod 0'0 remapped NOTIFY m=6 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 110 pg[9.1e( v 41'581 lc 40'232 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=109/67 les/c/f=110/68/0 sis=109) [0]/[2] r=-1 lpr=109 pi=[67,109)/1 luod=0'0 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped m=6 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.005757 4 0.000268
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 110 pg[9.1e( v 41'581 lc 40'232 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=109/67 les/c/f=110/68/0 sis=109) [0]/[2] r=-1 lpr=109 pi=[67,109)/1 luod=0'0 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped m=6 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 110 pg[9.1e( v 41'581 lc 40'232 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=109/67 les/c/f=110/68/0 sis=109) [0]/[2] r=-1 lpr=109 pi=[67,109)/1 luod=0'0 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped m=6 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000102 1 0.000051
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 110 pg[9.1e( v 41'581 lc 40'232 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=109/67 les/c/f=110/68/0 sis=109) [0]/[2] r=-1 lpr=109 pi=[67,109)/1 luod=0'0 crt=41'581 lcod 0'0 mlcod 0'0 active+remapped m=6 mbc={}] enter Started/ReplicaActive/RepRecovering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 110 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=109/67 les/c/f=110/68/0 sis=109) [0]/[2] r=-1 lpr=109 pi=[67,109)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.055064 1 0.000051
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 110 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=109/67 les/c/f=110/68/0 sis=109) [0]/[2] r=-1 lpr=109 pi=[67,109)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69279744 unmapped: 1966080 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:05.906911+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 110 handle_osd_map epochs [111,111], i have 110, src has [1,111]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 111 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=109/67 les/c/f=110/68/0 sis=109) [0]/[2] r=-1 lpr=109 pi=[67,109)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.722868 1 0.000056
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 111 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=109/67 les/c/f=110/68/0 sis=109) [0]/[2] r=-1 lpr=109 pi=[67,109)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 0.783921 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 111 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=109/67 les/c/f=110/68/0 sis=109) [0]/[2] r=-1 lpr=109 pi=[67,109)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] exit Started 2.062578 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 111 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=109/67 les/c/f=110/68/0 sis=109) [0]/[2] r=-1 lpr=109 pi=[67,109)/1 luod=0'0 crt=41'581 mlcod 0'0 active+remapped mbc={}] enter Reset
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 111 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=109/67 les/c/f=110/68/0 sis=111) [0] r=0 lpr=111 pi=[67,111)/1 luod=0'0 crt=41'581 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 111 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=109/67 les/c/f=110/68/0 sis=111) [0] r=0 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 0'0 unknown mbc={}] exit Reset 0.000048 1 0.000079
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 111 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=109/67 les/c/f=110/68/0 sis=111) [0] r=0 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 0'0 unknown mbc={}] enter Started
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 111 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=109/67 les/c/f=110/68/0 sis=111) [0] r=0 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 0'0 unknown mbc={}] enter Start
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 111 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=109/67 les/c/f=110/68/0 sis=111) [0] r=0 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 111 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=109/67 les/c/f=110/68/0 sis=111) [0] r=0 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 111 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=109/67 les/c/f=110/68/0 sis=111) [0] r=0 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 111 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=109/67 les/c/f=110/68/0 sis=111) [0] r=0 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 111 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=109/67 les/c/f=110/68/0 sis=111) [0] r=0 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 111 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=109/67 les/c/f=110/68/0 sis=111) [0] r=0 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000024 1 0.000033
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 111 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=0/0 n=6 ec=49/34 lis/c=109/67 les/c/f=110/68/0 sis=111) [0] r=0 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: merge_log_dups log.dups.size()=0 olog.dups.size()=16
Oct 11 04:55:16 compute-0 ceph-osd[87458]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=16
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 111 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=109/110 n=6 ec=49/34 lis/c=109/67 les/c/f=110/68/0 sis=111) [0] r=0 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000880 3 0.000035
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 111 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=109/110 n=6 ec=49/34 lis/c=109/67 les/c/f=110/68/0 sis=111) [0] r=0 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 111 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=109/110 n=6 ec=49/34 lis/c=109/67 les/c/f=110/68/0 sis=111) [0] r=0 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 111 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=109/110 n=6 ec=49/34 lis/c=109/67 les/c/f=110/68/0 sis=111) [0] r=0 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 111 heartbeat osd_stat(store_statfs(0x4fcacc000/0x0/0x4ffc00000, data 0xa7c50/0x151000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 769299 data_alloc: 218103808 data_used: 114688
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69296128 unmapped: 1949696 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:06.907803+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 111 handle_osd_map epochs [111,112], i have 111, src has [1,112]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 111 handle_osd_map epochs [112,112], i have 112, src has [1,112]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 112 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=109/110 n=6 ec=49/34 lis/c=109/67 les/c/f=110/68/0 sis=111) [0] r=0 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.977464 2 0.000052
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 112 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=109/110 n=6 ec=49/34 lis/c=109/67 les/c/f=110/68/0 sis=111) [0] r=0 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.978449 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 112 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=109/110 n=6 ec=49/34 lis/c=109/67 les/c/f=110/68/0 sis=111) [0] r=0 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 112 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=111/112 n=6 ec=49/34 lis/c=109/67 les/c/f=110/68/0 sis=111) [0] r=0 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 112 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=111/112 n=6 ec=49/34 lis/c=109/67 les/c/f=110/68/0 sis=111) [0] r=0 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 112 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=111/112 n=6 ec=49/34 lis/c=111/67 les/c/f=112/68/0 sis=111) [0] r=0 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006218 4 0.000420
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 112 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=111/112 n=6 ec=49/34 lis/c=111/67 les/c/f=112/68/0 sis=111) [0] r=0 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 112 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=111/112 n=6 ec=49/34 lis/c=111/67 les/c/f=112/68/0 sis=111) [0] r=0 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000014 0 0.000000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 pg_epoch: 112 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=111/112 n=6 ec=49/34 lis/c=111/67 les/c/f=112/68/0 sis=111) [0] r=0 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 10.1 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 10.1 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69287936 unmapped: 1957888 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:07.908682+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 75 sent 73 num 2 unsent 2 sending 2
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:29:37.260627+0000 osd.0 (osd.0) 74 : cluster [DBG] 10.1 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:29:37.274704+0000 osd.0 (osd.0) 75 : cluster [DBG] 10.1 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 75) v1
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:29:37.260627+0000 osd.0 (osd.0) 74 : cluster [DBG] 10.1 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:29:37.274704+0000 osd.0 (osd.0) 75 : cluster [DBG] 10.1 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69296128 unmapped: 1949696 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:08.909239+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 112 handle_osd_map epochs [113,114], i have 112, src has [1,114]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 10.1e scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 10.1e scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69328896 unmapped: 1916928 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:09.909893+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 77 sent 75 num 2 unsent 2 sending 2
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:29:39.304641+0000 osd.0 (osd.0) 76 : cluster [DBG] 10.1e scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:29:39.318544+0000 osd.0 (osd.0) 77 : cluster [DBG] 10.1e scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 77) v1
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:29:39.304641+0000 osd.0 (osd.0) 76 : cluster [DBG] 10.1e scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:29:39.318544+0000 osd.0 (osd.0) 77 : cluster [DBG] 10.1e scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69328896 unmapped: 1916928 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:10.910556+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.911430359s of 10.129508972s, submitted: 46
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 782194 data_alloc: 218103808 data_used: 114688
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69337088 unmapped: 1908736 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac3000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:11.911064+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 79 sent 77 num 2 unsent 2 sending 2
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:29:41.343923+0000 osd.0 (osd.0) 78 : cluster [DBG] 2.16 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:29:41.357988+0000 osd.0 (osd.0) 79 : cluster [DBG] 2.16 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 79) v1
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:29:41.343923+0000 osd.0 (osd.0) 78 : cluster [DBG] 2.16 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:29:41.357988+0000 osd.0 (osd.0) 79 : cluster [DBG] 2.16 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69353472 unmapped: 1892352 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:12.911319+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69353472 unmapped: 1892352 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:13.911484+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 81 sent 79 num 2 unsent 2 sending 2
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:29:43.376845+0000 osd.0 (osd.0) 80 : cluster [DBG] 5.15 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:29:43.390707+0000 osd.0 (osd.0) 81 : cluster [DBG] 5.15 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 81) v1
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:29:43.376845+0000 osd.0 (osd.0) 80 : cluster [DBG] 5.15 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:29:43.390707+0000 osd.0 (osd.0) 81 : cluster [DBG] 5.15 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69361664 unmapped: 1884160 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:14.911842+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69361664 unmapped: 1884160 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:15.912121+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 83 sent 81 num 2 unsent 2 sending 2
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:29:45.321651+0000 osd.0 (osd.0) 82 : cluster [DBG] 5.14 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:29:45.336029+0000 osd.0 (osd.0) 83 : cluster [DBG] 5.14 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 83) v1
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:29:45.321651+0000 osd.0 (osd.0) 82 : cluster [DBG] 5.14 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:29:45.336029+0000 osd.0 (osd.0) 83 : cluster [DBG] 5.14 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 2.13 deep-scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 2.13 deep-scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784390 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69369856 unmapped: 1875968 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:16.912358+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 85 sent 83 num 2 unsent 2 sending 2
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:29:46.367136+0000 osd.0 (osd.0) 84 : cluster [DBG] 2.13 deep-scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:29:46.381471+0000 osd.0 (osd.0) 85 : cluster [DBG] 2.13 deep-scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 85) v1
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:29:46.367136+0000 osd.0 (osd.0) 84 : cluster [DBG] 2.13 deep-scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:29:46.381471+0000 osd.0 (osd.0) 85 : cluster [DBG] 2.13 deep-scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69369856 unmapped: 1875968 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:17.912586+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 10.16 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 10.16 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69369856 unmapped: 1875968 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:18.912796+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 87 sent 85 num 2 unsent 2 sending 2
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:29:48.307172+0000 osd.0 (osd.0) 86 : cluster [DBG] 10.16 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:29:48.321246+0000 osd.0 (osd.0) 87 : cluster [DBG] 10.16 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 87) v1
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:29:48.307172+0000 osd.0 (osd.0) 86 : cluster [DBG] 10.16 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:29:48.321246+0000 osd.0 (osd.0) 87 : cluster [DBG] 10.16 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69378048 unmapped: 1867776 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:19.913044+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69378048 unmapped: 1867776 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:20.913191+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 785539 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69402624 unmapped: 1843200 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:21.913345+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.807960510s of 10.886000633s, submitted: 10
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69410816 unmapped: 1835008 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:22.913510+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 89 sent 87 num 2 unsent 2 sending 2
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:29:52.229910+0000 osd.0 (osd.0) 88 : cluster [DBG] 2.11 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:29:52.244009+0000 osd.0 (osd.0) 89 : cluster [DBG] 2.11 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 89) v1
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:29:52.229910+0000 osd.0 (osd.0) 88 : cluster [DBG] 2.11 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:29:52.244009+0000 osd.0 (osd.0) 89 : cluster [DBG] 2.11 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 10.e deep-scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 10.e deep-scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69435392 unmapped: 1810432 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:23.913781+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 91 sent 89 num 2 unsent 2 sending 2
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:29:53.260699+0000 osd.0 (osd.0) 90 : cluster [DBG] 10.e deep-scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:29:53.278380+0000 osd.0 (osd.0) 91 : cluster [DBG] 10.e deep-scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 91) v1
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:29:53.260699+0000 osd.0 (osd.0) 90 : cluster [DBG] 10.e deep-scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:29:53.278380+0000 osd.0 (osd.0) 91 : cluster [DBG] 10.e deep-scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69451776 unmapped: 1794048 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:24.914233+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 93 sent 91 num 2 unsent 2 sending 2
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:29:54.278144+0000 osd.0 (osd.0) 92 : cluster [DBG] 3.1f scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:29:54.292289+0000 osd.0 (osd.0) 93 : cluster [DBG] 3.1f scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 93) v1
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:29:54.278144+0000 osd.0 (osd.0) 92 : cluster [DBG] 3.1f scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:29:54.292289+0000 osd.0 (osd.0) 93 : cluster [DBG] 3.1f scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69459968 unmapped: 1785856 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:25.914429+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 7.1b deep-scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 7.1b deep-scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 790131 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69509120 unmapped: 1736704 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:26.914586+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 95 sent 93 num 2 unsent 2 sending 2
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:29:56.253689+0000 osd.0 (osd.0) 94 : cluster [DBG] 7.1b deep-scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:29:56.267757+0000 osd.0 (osd.0) 95 : cluster [DBG] 7.1b deep-scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 95) v1
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:29:56.253689+0000 osd.0 (osd.0) 94 : cluster [DBG] 7.1b deep-scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:29:56.267757+0000 osd.0 (osd.0) 95 : cluster [DBG] 7.1b deep-scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69492736 unmapped: 1753088 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:27.914813+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 11.17 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 11.17 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69500928 unmapped: 1744896 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:28.914971+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 97 sent 95 num 2 unsent 2 sending 2
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:29:58.211814+0000 osd.0 (osd.0) 96 : cluster [DBG] 11.17 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:29:58.225893+0000 osd.0 (osd.0) 97 : cluster [DBG] 11.17 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 97) v1
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:29:58.211814+0000 osd.0 (osd.0) 96 : cluster [DBG] 11.17 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:29:58.225893+0000 osd.0 (osd.0) 97 : cluster [DBG] 11.17 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69500928 unmapped: 1744896 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:29.915247+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69509120 unmapped: 1736704 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:30.915414+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 791280 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69509120 unmapped: 1736704 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:31.915596+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69517312 unmapped: 1728512 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:32.915811+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69517312 unmapped: 1728512 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:33.915997+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 11.14 deep-scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.908695221s of 11.969345093s, submitted: 10
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 11.14 deep-scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69525504 unmapped: 1720320 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:34.916188+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 99 sent 97 num 2 unsent 2 sending 2
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:30:04.199203+0000 osd.0 (osd.0) 98 : cluster [DBG] 11.14 deep-scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:30:04.213270+0000 osd.0 (osd.0) 99 : cluster [DBG] 11.14 deep-scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 7.18 deep-scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 7.18 deep-scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 99) v1
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:30:04.199203+0000 osd.0 (osd.0) 98 : cluster [DBG] 11.14 deep-scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:30:04.213270+0000 osd.0 (osd.0) 99 : cluster [DBG] 11.14 deep-scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69533696 unmapped: 1712128 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:35.929701+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 101 sent 99 num 2 unsent 2 sending 2
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:30:05.216111+0000 osd.0 (osd.0) 100 : cluster [DBG] 7.18 deep-scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:30:05.230182+0000 osd.0 (osd.0) 101 : cluster [DBG] 7.18 deep-scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 101) v1
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:30:05.216111+0000 osd.0 (osd.0) 100 : cluster [DBG] 7.18 deep-scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:30:05.230182+0000 osd.0 (osd.0) 101 : cluster [DBG] 7.18 deep-scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 793577 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69533696 unmapped: 1712128 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:36.929893+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69541888 unmapped: 1703936 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:37.930043+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69541888 unmapped: 1703936 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:38.930172+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 3.1b scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 3.1b scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69558272 unmapped: 1687552 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:39.930315+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 103 sent 101 num 2 unsent 2 sending 2
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:30:09.198404+0000 osd.0 (osd.0) 102 : cluster [DBG] 3.1b scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:30:09.212523+0000 osd.0 (osd.0) 103 : cluster [DBG] 3.1b scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 7.1f scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 7.1f scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 103) v1
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:30:09.198404+0000 osd.0 (osd.0) 102 : cluster [DBG] 3.1b scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:30:09.212523+0000 osd.0 (osd.0) 103 : cluster [DBG] 3.1b scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69566464 unmapped: 1679360 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:40.931180+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 105 sent 103 num 2 unsent 2 sending 2
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:30:10.238747+0000 osd.0 (osd.0) 104 : cluster [DBG] 7.1f scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:30:10.252864+0000 osd.0 (osd.0) 105 : cluster [DBG] 7.1f scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 105) v1
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:30:10.238747+0000 osd.0 (osd.0) 104 : cluster [DBG] 7.1f scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:30:10.252864+0000 osd.0 (osd.0) 105 : cluster [DBG] 7.1f scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 795873 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69582848 unmapped: 1662976 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:41.931431+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69591040 unmapped: 1654784 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:42.931684+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69591040 unmapped: 1654784 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:43.931859+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69599232 unmapped: 1646592 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:44.932013+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 8.10 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.077273369s of 11.106318474s, submitted: 8
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 8.10 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69599232 unmapped: 1646592 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:45.932153+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 107 sent 105 num 2 unsent 2 sending 2
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:30:15.305564+0000 osd.0 (osd.0) 106 : cluster [DBG] 8.10 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:30:15.319626+0000 osd.0 (osd.0) 107 : cluster [DBG] 8.10 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 107) v1
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:30:15.305564+0000 osd.0 (osd.0) 106 : cluster [DBG] 8.10 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:30:15.319626+0000 osd.0 (osd.0) 107 : cluster [DBG] 8.10 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 797021 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69607424 unmapped: 1638400 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:46.932373+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69615616 unmapped: 1630208 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:47.932496+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69623808 unmapped: 1622016 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:48.932637+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69632000 unmapped: 1613824 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:49.932757+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69632000 unmapped: 1613824 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:50.932876+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 797021 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69632000 unmapped: 1613824 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:51.933019+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 8.14 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 8.14 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69656576 unmapped: 1589248 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:52.933161+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 109 sent 107 num 2 unsent 2 sending 2
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:30:22.367465+0000 osd.0 (osd.0) 108 : cluster [DBG] 8.14 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:30:22.381712+0000 osd.0 (osd.0) 109 : cluster [DBG] 8.14 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 109) v1
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:30:22.367465+0000 osd.0 (osd.0) 108 : cluster [DBG] 8.14 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:30:22.381712+0000 osd.0 (osd.0) 109 : cluster [DBG] 8.14 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69664768 unmapped: 1581056 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:53.933353+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 11.10 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 11.10 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69672960 unmapped: 1572864 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:54.933531+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 111 sent 109 num 2 unsent 2 sending 2
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:30:24.387182+0000 osd.0 (osd.0) 110 : cluster [DBG] 11.10 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:30:24.401308+0000 osd.0 (osd.0) 111 : cluster [DBG] 11.10 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 111) v1
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:30:24.387182+0000 osd.0 (osd.0) 110 : cluster [DBG] 11.10 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:30:24.401308+0000 osd.0 (osd.0) 111 : cluster [DBG] 11.10 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69672960 unmapped: 1572864 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:55.933696+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 799318 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69697536 unmapped: 1548288 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:56.933837+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 1556480 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:57.934030+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.037799835s of 13.070416451s, submitted: 6
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 1556480 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:58.934252+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 113 sent 111 num 2 unsent 2 sending 2
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:30:28.376007+0000 osd.0 (osd.0) 112 : cluster [DBG] 7.3 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:30:28.390101+0000 osd.0 (osd.0) 113 : cluster [DBG] 7.3 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 113) v1
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:30:28.376007+0000 osd.0 (osd.0) 112 : cluster [DBG] 7.3 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:30:28.390101+0000 osd.0 (osd.0) 113 : cluster [DBG] 7.3 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69697536 unmapped: 1548288 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:59.934544+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69697536 unmapped: 1548288 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:00.934677+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 11.f scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 11.f scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 801613 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69722112 unmapped: 1523712 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:01.934762+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 115 sent 113 num 2 unsent 2 sending 2
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:30:31.385159+0000 osd.0 (osd.0) 114 : cluster [DBG] 11.f scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:30:31.399354+0000 osd.0 (osd.0) 115 : cluster [DBG] 11.f scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 115) v1
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:30:31.385159+0000 osd.0 (osd.0) 114 : cluster [DBG] 11.f scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:30:31.399354+0000 osd.0 (osd.0) 115 : cluster [DBG] 11.f scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69722112 unmapped: 1523712 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:02.934926+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 1515520 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:03.935115+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 8.c scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 8.c scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 1515520 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:04.935326+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 117 sent 115 num 2 unsent 2 sending 2
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:30:34.453083+0000 osd.0 (osd.0) 116 : cluster [DBG] 8.c scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:30:34.467174+0000 osd.0 (osd.0) 117 : cluster [DBG] 8.c scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 117) v1
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:30:34.453083+0000 osd.0 (osd.0) 116 : cluster [DBG] 8.c scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:30:34.467174+0000 osd.0 (osd.0) 117 : cluster [DBG] 8.c scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 1515520 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:05.935601+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 7.f deep-scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 7.f deep-scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 803907 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69746688 unmapped: 1499136 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:06.935728+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 119 sent 117 num 2 unsent 2 sending 2
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:30:36.464690+0000 osd.0 (osd.0) 118 : cluster [DBG] 7.f deep-scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:30:36.478775+0000 osd.0 (osd.0) 119 : cluster [DBG] 7.f deep-scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 119) v1
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:30:36.464690+0000 osd.0 (osd.0) 118 : cluster [DBG] 7.f deep-scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:30:36.478775+0000 osd.0 (osd.0) 119 : cluster [DBG] 7.f deep-scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69746688 unmapped: 1499136 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:07.935923+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69754880 unmapped: 1490944 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:08.936039+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 8.f scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.068527222s of 11.096608162s, submitted: 8
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 8.f scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 1482752 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:09.936194+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 121 sent 119 num 2 unsent 2 sending 2
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:30:39.472666+0000 osd.0 (osd.0) 120 : cluster [DBG] 8.f scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:30:39.493829+0000 osd.0 (osd.0) 121 : cluster [DBG] 8.f scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 121) v1
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:30:39.472666+0000 osd.0 (osd.0) 120 : cluster [DBG] 8.f scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:30:39.493829+0000 osd.0 (osd.0) 121 : cluster [DBG] 8.f scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 1482752 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:10.936381+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 805054 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 1474560 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:11.936523+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69779456 unmapped: 1466368 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:12.936640+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69787648 unmapped: 1458176 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:13.936790+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69787648 unmapped: 1458176 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:14.936948+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69787648 unmapped: 1458176 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:15.937101+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 3.a scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 3.a scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 806201 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69804032 unmapped: 1441792 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:16.937261+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 123 sent 121 num 2 unsent 2 sending 2
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:30:46.535968+0000 osd.0 (osd.0) 122 : cluster [DBG] 3.a scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:30:46.549970+0000 osd.0 (osd.0) 123 : cluster [DBG] 3.a scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 123) v1
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:30:46.535968+0000 osd.0 (osd.0) 122 : cluster [DBG] 3.a scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:30:46.549970+0000 osd.0 (osd.0) 123 : cluster [DBG] 3.a scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69804032 unmapped: 1441792 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:17.937437+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69812224 unmapped: 1433600 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:18.937591+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69812224 unmapped: 1433600 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:19.937740+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69812224 unmapped: 1433600 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:20.937942+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 806201 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 1409024 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:21.938115+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 1409024 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:22.938281+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69844992 unmapped: 1400832 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:23.938444+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69844992 unmapped: 1400832 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:24.938625+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.082592010s of 16.120431900s, submitted: 4
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69853184 unmapped: 1392640 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:25.938815+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 125 sent 123 num 2 unsent 2 sending 2
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:30:55.593107+0000 osd.0 (osd.0) 124 : cluster [DBG] 7.4 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:30:55.607214+0000 osd.0 (osd.0) 125 : cluster [DBG] 7.4 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 125) v1
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:30:55.593107+0000 osd.0 (osd.0) 124 : cluster [DBG] 7.4 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:30:55.607214+0000 osd.0 (osd.0) 125 : cluster [DBG] 7.4 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 807348 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69853184 unmapped: 1392640 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:26.939025+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69853184 unmapped: 1392640 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:27.939163+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69861376 unmapped: 1384448 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:28.939369+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69869568 unmapped: 1376256 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:29.939491+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69877760 unmapped: 1368064 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:30.939625+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 11.e scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 11.e scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 808496 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69869568 unmapped: 1376256 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:31.939928+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 127 sent 125 num 2 unsent 2 sending 2
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:01.552003+0000 osd.0 (osd.0) 126 : cluster [DBG] 11.e scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:01.566133+0000 osd.0 (osd.0) 127 : cluster [DBG] 11.e scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 8.b scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 127) v1
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:01.552003+0000 osd.0 (osd.0) 126 : cluster [DBG] 11.e scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:01.566133+0000 osd.0 (osd.0) 127 : cluster [DBG] 11.e scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 8.b scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69869568 unmapped: 1376256 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:32.940147+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 129 sent 127 num 2 unsent 2 sending 2
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:02.511383+0000 osd.0 (osd.0) 128 : cluster [DBG] 8.b scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:02.525386+0000 osd.0 (osd.0) 129 : cluster [DBG] 8.b scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 129) v1
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:02.511383+0000 osd.0 (osd.0) 128 : cluster [DBG] 8.b scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:02.525386+0000 osd.0 (osd.0) 129 : cluster [DBG] 8.b scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69877760 unmapped: 1368064 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:33.940307+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69877760 unmapped: 1368064 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:34.940474+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69885952 unmapped: 1359872 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:35.940624+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 131 sent 129 num 2 unsent 2 sending 2
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:05.487609+0000 osd.0 (osd.0) 130 : cluster [DBG] 7.6 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:05.501694+0000 osd.0 (osd.0) 131 : cluster [DBG] 7.6 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 131) v1
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:05.487609+0000 osd.0 (osd.0) 130 : cluster [DBG] 7.6 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:05.501694+0000 osd.0 (osd.0) 131 : cluster [DBG] 7.6 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810790 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69885952 unmapped: 1359872 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:36.940885+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69894144 unmapped: 1351680 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:37.941043+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69894144 unmapped: 1351680 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:38.941203+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69894144 unmapped: 1351680 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:39.941422+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 8.9 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.951458931s of 14.992031097s, submitted: 8
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 8.9 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69902336 unmapped: 1343488 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:40.941579+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 133 sent 131 num 2 unsent 2 sending 2
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:10.585225+0000 osd.0 (osd.0) 132 : cluster [DBG] 8.9 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:10.599380+0000 osd.0 (osd.0) 133 : cluster [DBG] 8.9 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 133) v1
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:10.585225+0000 osd.0 (osd.0) 132 : cluster [DBG] 8.9 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:10.599380+0000 osd.0 (osd.0) 133 : cluster [DBG] 8.9 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 8.e scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 8.e scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 813084 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 1474560 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:41.941757+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 135 sent 133 num 2 unsent 2 sending 2
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:11.628914+0000 osd.0 (osd.0) 134 : cluster [DBG] 8.e scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:11.643028+0000 osd.0 (osd.0) 135 : cluster [DBG] 8.e scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 135) v1
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:11.628914+0000 osd.0 (osd.0) 134 : cluster [DBG] 8.e scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:11.643028+0000 osd.0 (osd.0) 135 : cluster [DBG] 8.e scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 1474560 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:42.941955+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 1474560 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:43.942971+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 11.1 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 11.1 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 1474560 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:44.943830+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 137 sent 135 num 2 unsent 2 sending 2
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:14.693587+0000 osd.0 (osd.0) 136 : cluster [DBG] 11.1 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:14.707563+0000 osd.0 (osd.0) 137 : cluster [DBG] 11.1 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 137) v1
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:14.693587+0000 osd.0 (osd.0) 136 : cluster [DBG] 11.1 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:14.707563+0000 osd.0 (osd.0) 137 : cluster [DBG] 11.1 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69779456 unmapped: 1466368 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:45.944680+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 814232 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69779456 unmapped: 1466368 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:46.944972+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69779456 unmapped: 1466368 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:47.945452+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 11.4 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 11.4 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69787648 unmapped: 1458176 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:48.945880+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 139 sent 137 num 2 unsent 2 sending 2
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:18.647418+0000 osd.0 (osd.0) 138 : cluster [DBG] 11.4 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:18.665033+0000 osd.0 (osd.0) 139 : cluster [DBG] 11.4 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 139) v1
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:18.647418+0000 osd.0 (osd.0) 138 : cluster [DBG] 11.4 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:18.665033+0000 osd.0 (osd.0) 139 : cluster [DBG] 11.4 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 3.c scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 3.c scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69787648 unmapped: 1458176 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:49.946179+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 141 sent 139 num 2 unsent 2 sending 2
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:19.618737+0000 osd.0 (osd.0) 140 : cluster [DBG] 3.c scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:19.632829+0000 osd.0 (osd.0) 141 : cluster [DBG] 3.c scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 141) v1
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:19.618737+0000 osd.0 (osd.0) 140 : cluster [DBG] 3.c scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:19.632829+0000 osd.0 (osd.0) 141 : cluster [DBG] 3.c scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.969653130s of 10.008518219s, submitted: 10
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69795840 unmapped: 1449984 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:50.946923+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 143 sent 141 num 2 unsent 2 sending 2
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:20.593658+0000 osd.0 (osd.0) 142 : cluster [DBG] 7.9 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:20.607683+0000 osd.0 (osd.0) 143 : cluster [DBG] 7.9 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 143) v1
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:20.593658+0000 osd.0 (osd.0) 142 : cluster [DBG] 7.9 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:20.607683+0000 osd.0 (osd.0) 143 : cluster [DBG] 7.9 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 8.6 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 8.6 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 818821 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69804032 unmapped: 1441792 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:51.947424+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 145 sent 143 num 2 unsent 2 sending 2
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:21.591742+0000 osd.0 (osd.0) 144 : cluster [DBG] 8.6 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:21.605843+0000 osd.0 (osd.0) 145 : cluster [DBG] 8.6 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 145) v1
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:21.591742+0000 osd.0 (osd.0) 144 : cluster [DBG] 8.6 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:21.605843+0000 osd.0 (osd.0) 145 : cluster [DBG] 8.6 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69812224 unmapped: 1433600 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:52.947733+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69812224 unmapped: 1433600 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:53.947954+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69820416 unmapped: 1425408 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:54.948253+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 11.6 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 11.6 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 1417216 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:55.948416+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 147 sent 145 num 2 unsent 2 sending 2
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:25.508199+0000 osd.0 (osd.0) 146 : cluster [DBG] 11.6 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:25.522400+0000 osd.0 (osd.0) 147 : cluster [DBG] 11.6 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 147) v1
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:25.508199+0000 osd.0 (osd.0) 146 : cluster [DBG] 11.6 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:25.522400+0000 osd.0 (osd.0) 147 : cluster [DBG] 11.6 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 3.f scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 3.f scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 821116 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 1409024 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:56.949437+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 149 sent 147 num 2 unsent 2 sending 2
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:26.479455+0000 osd.0 (osd.0) 148 : cluster [DBG] 3.f scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:26.493404+0000 osd.0 (osd.0) 149 : cluster [DBG] 3.f scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 149) v1
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:26.479455+0000 osd.0 (osd.0) 148 : cluster [DBG] 3.f scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:26.493404+0000 osd.0 (osd.0) 149 : cluster [DBG] 3.f scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69844992 unmapped: 1400832 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:57.949627+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69844992 unmapped: 1400832 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:58.949789+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:59.949957+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69853184 unmapped: 1392640 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:00.950132+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69853184 unmapped: 1392640 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 821116 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:01.950254+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69853184 unmapped: 1392640 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:02.950469+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69861376 unmapped: 1384448 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 8.1a scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.036251068s of 13.068947792s, submitted: 8
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 8.1a scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:03.950634+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 151 sent 149 num 2 unsent 2 sending 2
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:33.662724+0000 osd.0 (osd.0) 150 : cluster [DBG] 8.1a scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:33.680385+0000 osd.0 (osd.0) 151 : cluster [DBG] 8.1a scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69861376 unmapped: 1384448 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 151) v1
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:33.662724+0000 osd.0 (osd.0) 150 : cluster [DBG] 8.1a scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:33.680385+0000 osd.0 (osd.0) 151 : cluster [DBG] 8.1a scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:04.950936+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69869568 unmapped: 1376256 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:05.951096+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69869568 unmapped: 1376256 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 823412 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:06.951245+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 153 sent 151 num 2 unsent 2 sending 2
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:36.742184+0000 osd.0 (osd.0) 152 : cluster [DBG] 3.12 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:36.756435+0000 osd.0 (osd.0) 153 : cluster [DBG] 3.12 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69877760 unmapped: 1368064 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 153) v1
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:36.742184+0000 osd.0 (osd.0) 152 : cluster [DBG] 3.12 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:36.756435+0000 osd.0 (osd.0) 153 : cluster [DBG] 3.12 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 8.1f scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 8.1f scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:07.951425+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 155 sent 153 num 2 unsent 2 sending 2
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:37.708599+0000 osd.0 (osd.0) 154 : cluster [DBG] 8.1f scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:37.722692+0000 osd.0 (osd.0) 155 : cluster [DBG] 8.1f scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69877760 unmapped: 1368064 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 155) v1
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:37.708599+0000 osd.0 (osd.0) 154 : cluster [DBG] 8.1f scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:37.722692+0000 osd.0 (osd.0) 155 : cluster [DBG] 8.1f scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:08.951617+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69885952 unmapped: 1359872 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:09.951816+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 157 sent 155 num 2 unsent 2 sending 2
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:39.762498+0000 osd.0 (osd.0) 156 : cluster [DBG] 3.15 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:39.776459+0000 osd.0 (osd.0) 157 : cluster [DBG] 3.15 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69885952 unmapped: 1359872 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 157) v1
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:39.762498+0000 osd.0 (osd.0) 156 : cluster [DBG] 3.15 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:39.776459+0000 osd.0 (osd.0) 157 : cluster [DBG] 3.15 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 11.19 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 11.19 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:10.951973+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 159 sent 157 num 2 unsent 2 sending 2
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:40.741830+0000 osd.0 (osd.0) 158 : cluster [DBG] 11.19 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:40.755862+0000 osd.0 (osd.0) 159 : cluster [DBG] 11.19 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69885952 unmapped: 1359872 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 159) v1
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:40.741830+0000 osd.0 (osd.0) 158 : cluster [DBG] 11.19 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:40.755862+0000 osd.0 (osd.0) 159 : cluster [DBG] 11.19 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 3.9 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 3.9 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828004 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:11.952207+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 161 sent 159 num 2 unsent 2 sending 2
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:41.755124+0000 osd.0 (osd.0) 160 : cluster [DBG] 3.9 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:41.769274+0000 osd.0 (osd.0) 161 : cluster [DBG] 3.9 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69902336 unmapped: 1343488 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 161) v1
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:41.755124+0000 osd.0 (osd.0) 160 : cluster [DBG] 3.9 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:41.769274+0000 osd.0 (osd.0) 161 : cluster [DBG] 3.9 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:12.952444+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69902336 unmapped: 1343488 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:13.952665+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69910528 unmapped: 1335296 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.007108688s of 11.065402985s, submitted: 12
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:14.952910+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 163 sent 161 num 2 unsent 2 sending 2
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:44.728265+0000 osd.0 (osd.0) 162 : cluster [DBG] 7.13 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:44.742248+0000 osd.0 (osd.0) 163 : cluster [DBG] 7.13 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69918720 unmapped: 1327104 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 163) v1
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:44.728265+0000 osd.0 (osd.0) 162 : cluster [DBG] 7.13 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:44.742248+0000 osd.0 (osd.0) 163 : cluster [DBG] 7.13 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:15.953133+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 165 sent 163 num 2 unsent 2 sending 2
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:45.727201+0000 osd.0 (osd.0) 164 : cluster [DBG] 3.17 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:45.741165+0000 osd.0 (osd.0) 165 : cluster [DBG] 3.17 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69918720 unmapped: 1327104 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 165) v1
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:45.727201+0000 osd.0 (osd.0) 164 : cluster [DBG] 3.17 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:45.741165+0000 osd.0 (osd.0) 165 : cluster [DBG] 3.17 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 830300 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:16.953387+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69926912 unmapped: 1318912 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:17.953570+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69926912 unmapped: 1318912 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 8.18 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 8.18 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:18.953710+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 167 sent 165 num 2 unsent 2 sending 2
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:48.680080+0000 osd.0 (osd.0) 166 : cluster [DBG] 8.18 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:48.694189+0000 osd.0 (osd.0) 167 : cluster [DBG] 8.18 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69926912 unmapped: 1318912 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 167) v1
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:48.680080+0000 osd.0 (osd.0) 166 : cluster [DBG] 8.18 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:48.694189+0000 osd.0 (osd.0) 167 : cluster [DBG] 8.18 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:19.953904+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69935104 unmapped: 1310720 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 8.1d scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 8.1d scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:20.954085+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 169 sent 167 num 2 unsent 2 sending 2
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:50.651767+0000 osd.0 (osd.0) 168 : cluster [DBG] 8.1d scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:50.665845+0000 osd.0 (osd.0) 169 : cluster [DBG] 8.1d scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69951488 unmapped: 1294336 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 169) v1
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:50.651767+0000 osd.0 (osd.0) 168 : cluster [DBG] 8.1d scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:50.665845+0000 osd.0 (osd.0) 169 : cluster [DBG] 8.1d scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 832596 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:21.954267+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69951488 unmapped: 1294336 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:22.954426+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69951488 unmapped: 1294336 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 9.1b scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 9.1b scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:23.954589+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 171 sent 169 num 2 unsent 2 sending 2
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:53.667612+0000 osd.0 (osd.0) 170 : cluster [DBG] 9.1b scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:53.692299+0000 osd.0 (osd.0) 171 : cluster [DBG] 9.1b scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69959680 unmapped: 1286144 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 171) v1
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:53.667612+0000 osd.0 (osd.0) 170 : cluster [DBG] 9.1b scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:53.692299+0000 osd.0 (osd.0) 171 : cluster [DBG] 9.1b scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:24.954831+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69959680 unmapped: 1286144 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:25.954990+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69967872 unmapped: 1277952 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 9.3 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.920932770s of 11.958658218s, submitted: 10
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 9.3 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 834891 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:26.955147+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 173 sent 171 num 2 unsent 2 sending 2
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:56.686998+0000 osd.0 (osd.0) 172 : cluster [DBG] 9.3 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:56.732889+0000 osd.0 (osd.0) 173 : cluster [DBG] 9.3 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69976064 unmapped: 1269760 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 173) v1
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:56.686998+0000 osd.0 (osd.0) 172 : cluster [DBG] 9.3 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:56.732889+0000 osd.0 (osd.0) 173 : cluster [DBG] 9.3 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 9.d scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 9.d scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:27.955352+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 175 sent 173 num 2 unsent 2 sending 2
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:57.720617+0000 osd.0 (osd.0) 174 : cluster [DBG] 9.d scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:57.763077+0000 osd.0 (osd.0) 175 : cluster [DBG] 9.d scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69976064 unmapped: 1269760 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 175) v1
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:57.720617+0000 osd.0 (osd.0) 174 : cluster [DBG] 9.d scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:57.763077+0000 osd.0 (osd.0) 175 : cluster [DBG] 9.d scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:28.955572+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69984256 unmapped: 1261568 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:29.955732+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69984256 unmapped: 1261568 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:30.955918+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69992448 unmapped: 1253376 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 9.5 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 9.5 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 837185 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:31.956092+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 177 sent 175 num 2 unsent 2 sending 2
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:32:01.725453+0000 osd.0 (osd.0) 176 : cluster [DBG] 9.5 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:32:01.764279+0000 osd.0 (osd.0) 177 : cluster [DBG] 9.5 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69984256 unmapped: 1261568 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 177) v1
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:32:01.725453+0000 osd.0 (osd.0) 176 : cluster [DBG] 9.5 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:32:01.764279+0000 osd.0 (osd.0) 177 : cluster [DBG] 9.5 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:32.956392+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69992448 unmapped: 1253376 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 9.b scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 9.b scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:33.956574+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 179 sent 177 num 2 unsent 2 sending 2
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:32:03.642088+0000 osd.0 (osd.0) 178 : cluster [DBG] 9.b scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:32:03.673851+0000 osd.0 (osd.0) 179 : cluster [DBG] 9.b scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69992448 unmapped: 1253376 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 179) v1
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:32:03.642088+0000 osd.0 (osd.0) 178 : cluster [DBG] 9.b scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:32:03.673851+0000 osd.0 (osd.0) 179 : cluster [DBG] 9.b scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:34.956733+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69992448 unmapped: 1253376 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 9.11 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 9.11 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:35.956894+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 181 sent 179 num 2 unsent 2 sending 2
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:32:05.672094+0000 osd.0 (osd.0) 180 : cluster [DBG] 9.11 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:32:05.707386+0000 osd.0 (osd.0) 181 : cluster [DBG] 9.11 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70008832 unmapped: 1236992 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 181) v1
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:32:05.672094+0000 osd.0 (osd.0) 180 : cluster [DBG] 9.11 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:32:05.707386+0000 osd.0 (osd.0) 181 : cluster [DBG] 9.11 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 839480 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:36.957104+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70000640 unmapped: 1245184 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:37.957251+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70008832 unmapped: 1236992 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:38.957451+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70017024 unmapped: 1228800 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.931448936s of 12.969204903s, submitted: 10
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:39.957627+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 183 sent 181 num 2 unsent 2 sending 2
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:32:09.656146+0000 osd.0 (osd.0) 182 : cluster [DBG] 6.3 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:32:09.677376+0000 osd.0 (osd.0) 183 : cluster [DBG] 6.3 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70025216 unmapped: 1220608 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 183) v1
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:32:09.656146+0000 osd.0 (osd.0) 182 : cluster [DBG] 6.3 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:32:09.677376+0000 osd.0 (osd.0) 183 : cluster [DBG] 6.3 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:40.957805+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70033408 unmapped: 1212416 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 9.9 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 9.9 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841774 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:41.957954+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 185 sent 183 num 2 unsent 2 sending 2
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:32:11.632244+0000 osd.0 (osd.0) 184 : cluster [DBG] 9.9 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:32:11.670988+0000 osd.0 (osd.0) 185 : cluster [DBG] 9.9 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70033408 unmapped: 1212416 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 185) v1
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:32:11.632244+0000 osd.0 (osd.0) 184 : cluster [DBG] 9.9 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:32:11.670988+0000 osd.0 (osd.0) 185 : cluster [DBG] 9.9 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:42.958276+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 187 sent 185 num 2 unsent 2 sending 2
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:32:12.662375+0000 osd.0 (osd.0) 186 : cluster [DBG] 6.7 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:32:12.680030+0000 osd.0 (osd.0) 187 : cluster [DBG] 6.7 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70041600 unmapped: 1204224 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 187) v1
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:32:12.662375+0000 osd.0 (osd.0) 186 : cluster [DBG] 6.7 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:32:12.680030+0000 osd.0 (osd.0) 187 : cluster [DBG] 6.7 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 9.1d scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 9.1d scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:43.958577+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 189 sent 187 num 2 unsent 2 sending 2
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:32:13.702674+0000 osd.0 (osd.0) 188 : cluster [DBG] 9.1d scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:32:13.737700+0000 osd.0 (osd.0) 189 : cluster [DBG] 9.1d scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70049792 unmapped: 1196032 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 189) v1
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:32:13.702674+0000 osd.0 (osd.0) 188 : cluster [DBG] 9.1d scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:32:13.737700+0000 osd.0 (osd.0) 189 : cluster [DBG] 9.1d scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:44.958836+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70057984 unmapped: 1187840 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:45.959000+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70057984 unmapped: 1187840 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 844069 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:46.959158+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70057984 unmapped: 1187840 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:47.959296+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70066176 unmapped: 1179648 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:48.959515+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70066176 unmapped: 1179648 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:49.959643+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70074368 unmapped: 1171456 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:50.959764+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70074368 unmapped: 1171456 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 844069 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:51.959932+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 1163264 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:52.960076+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 1163264 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:53.960204+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 1163264 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:54.960415+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70090752 unmapped: 1155072 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 9.1 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.010532379s of 16.039787292s, submitted: 8
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 9.1 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:55.960648+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 191 sent 189 num 2 unsent 2 sending 2
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:32:25.695836+0000 osd.0 (osd.0) 190 : cluster [DBG] 9.1 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:32:25.738294+0000 osd.0 (osd.0) 191 : cluster [DBG] 9.1 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70090752 unmapped: 1155072 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 191) v1
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:32:25.695836+0000 osd.0 (osd.0) 190 : cluster [DBG] 9.1 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:32:25.738294+0000 osd.0 (osd.0) 191 : cluster [DBG] 9.1 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:56.961009+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 193 sent 191 num 2 unsent 2 sending 2
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:32:26.737775+0000 osd.0 (osd.0) 192 : cluster [DBG] 6.5 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:32:26.758951+0000 osd.0 (osd.0) 193 : cluster [DBG] 6.5 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 846363 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70115328 unmapped: 1130496 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 193) v1
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:32:26.737775+0000 osd.0 (osd.0) 192 : cluster [DBG] 6.5 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:32:26.758951+0000 osd.0 (osd.0) 193 : cluster [DBG] 6.5 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:57.961234+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70115328 unmapped: 1130496 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:58.961373+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 195 sent 193 num 2 unsent 2 sending 2
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:32:28.742344+0000 osd.0 (osd.0) 194 : cluster [DBG] 6.9 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:32:28.756372+0000 osd.0 (osd.0) 195 : cluster [DBG] 6.9 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70123520 unmapped: 1122304 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 195) v1
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:32:28.742344+0000 osd.0 (osd.0) 194 : cluster [DBG] 6.9 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:32:28.756372+0000 osd.0 (osd.0) 195 : cluster [DBG] 6.9 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:59.961630+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70123520 unmapped: 1122304 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:00.961773+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70131712 unmapped: 1114112 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:01.961982+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847510 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70123520 unmapped: 1122304 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:02.962147+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70123520 unmapped: 1122304 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:03.962385+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70131712 unmapped: 1114112 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:04.962557+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70131712 unmapped: 1114112 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 6.a deep-scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 6.a deep-scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:05.962719+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 197 sent 195 num 2 unsent 2 sending 2
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:32:35.681245+0000 osd.0 (osd.0) 196 : cluster [DBG] 6.a deep-scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:32:35.695355+0000 osd.0 (osd.0) 197 : cluster [DBG] 6.a deep-scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70148096 unmapped: 1097728 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 197) v1
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:32:35.681245+0000 osd.0 (osd.0) 196 : cluster [DBG] 6.a deep-scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:32:35.695355+0000 osd.0 (osd.0) 197 : cluster [DBG] 6.a deep-scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:06.962907+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848657 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70148096 unmapped: 1097728 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 9.16 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.949511528s of 11.978895187s, submitted: 8
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 9.16 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:07.963064+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 199 sent 197 num 2 unsent 2 sending 2
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:32:37.675314+0000 osd.0 (osd.0) 198 : cluster [DBG] 9.16 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:32:37.710192+0000 osd.0 (osd.0) 199 : cluster [DBG] 9.16 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 199) v1
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:32:37.675314+0000 osd.0 (osd.0) 198 : cluster [DBG] 9.16 scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:32:37.710192+0000 osd.0 (osd.0) 199 : cluster [DBG] 9.16 scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:08.963210+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 201 sent 199 num 2 unsent 2 sending 2
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:32:38.655554+0000 osd.0 (osd.0) 200 : cluster [DBG] 9.1c scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:32:38.694373+0000 osd.0 (osd.0) 201 : cluster [DBG] 9.1c scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 201) v1
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:32:38.655554+0000 osd.0 (osd.0) 200 : cluster [DBG] 9.1c scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:32:38.694373+0000 osd.0 (osd.0) 201 : cluster [DBG] 9.1c scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 9.1e scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 9.1e scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:09.963705+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 203 sent 201 num 2 unsent 2 sending 2
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:32:39.654701+0000 osd.0 (osd.0) 202 : cluster [DBG] 9.1e scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:32:39.690057+0000 osd.0 (osd.0) 203 : cluster [DBG] 9.1e scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 203) v1
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:32:39.654701+0000 osd.0 (osd.0) 202 : cluster [DBG] 9.1e scrub starts
Oct 11 04:55:16 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:32:39.690057+0000 osd.0 (osd.0) 203 : cluster [DBG] 9.1e scrub ok
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:10.963898+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70172672 unmapped: 1073152 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:11.964077+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70172672 unmapped: 1073152 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:12.964266+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 1064960 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:13.964431+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 1064960 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:14.964646+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 1064960 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:15.964794+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 1056768 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:16.964982+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 1056768 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:17.965141+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70197248 unmapped: 1048576 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:18.965312+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70197248 unmapped: 1048576 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:19.965460+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70205440 unmapped: 1040384 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:20.965645+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70205440 unmapped: 1040384 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:21.965843+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70205440 unmapped: 1040384 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:22.965983+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70213632 unmapped: 1032192 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:23.966112+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70213632 unmapped: 1032192 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:24.966262+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70221824 unmapped: 1024000 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:25.966448+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70246400 unmapped: 999424 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:26.966583+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:27.966711+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70262784 unmapped: 983040 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:28.966853+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70262784 unmapped: 983040 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:29.967065+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:30.967211+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:31.967365+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:32.967577+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:33.967790+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:34.968033+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:35.968240+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:36.968483+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:37.968682+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:38.969547+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 942080 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:39.969717+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 942080 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:40.969924+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 942080 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:41.970134+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 933888 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:42.970315+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 933888 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:43.970526+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70320128 unmapped: 925696 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:44.970790+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70320128 unmapped: 925696 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:45.970977+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70320128 unmapped: 925696 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:46.971124+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 901120 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:47.971271+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 901120 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:48.971432+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70352896 unmapped: 892928 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:49.971659+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70352896 unmapped: 892928 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:50.971816+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70361088 unmapped: 884736 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:51.972018+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70369280 unmapped: 876544 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:52.972196+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70369280 unmapped: 876544 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:53.972401+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70377472 unmapped: 868352 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:54.972735+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70377472 unmapped: 868352 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:55.972920+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 860160 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:56.973057+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70393856 unmapped: 851968 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:57.973223+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70393856 unmapped: 851968 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:58.973425+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70402048 unmapped: 843776 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:59.973607+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70402048 unmapped: 843776 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:00.973788+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 835584 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:01.973961+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 827392 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:02.974148+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70426624 unmapped: 819200 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:03.974412+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70426624 unmapped: 819200 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:04.974587+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70426624 unmapped: 819200 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:05.974748+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70434816 unmapped: 811008 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:06.974904+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70434816 unmapped: 811008 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:07.975046+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70443008 unmapped: 802816 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:08.975168+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70443008 unmapped: 802816 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:09.975311+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70443008 unmapped: 802816 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:10.975471+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70451200 unmapped: 794624 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:11.975615+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70475776 unmapped: 770048 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:12.975738+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70475776 unmapped: 770048 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:13.975897+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70475776 unmapped: 770048 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:14.976080+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70483968 unmapped: 761856 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:15.976243+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70483968 unmapped: 761856 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:16.976408+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70483968 unmapped: 761856 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:17.976518+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70492160 unmapped: 753664 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:18.976705+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70492160 unmapped: 753664 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:19.976882+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70500352 unmapped: 745472 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:20.977018+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70500352 unmapped: 745472 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:21.977187+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70516736 unmapped: 729088 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:22.977366+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70516736 unmapped: 729088 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:23.977493+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70516736 unmapped: 729088 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:24.977675+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70516736 unmapped: 729088 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:25.977827+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70516736 unmapped: 729088 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:26.978008+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70516736 unmapped: 729088 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:27.978158+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70524928 unmapped: 720896 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:28.978282+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70524928 unmapped: 720896 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:29.978398+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70533120 unmapped: 712704 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:30.978542+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70533120 unmapped: 712704 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:31.978698+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70557696 unmapped: 688128 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:32.978861+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70557696 unmapped: 688128 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:33.978987+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70557696 unmapped: 688128 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:34.979747+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70565888 unmapped: 679936 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:35.979889+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70565888 unmapped: 679936 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:36.980036+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70565888 unmapped: 679936 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:37.980205+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70574080 unmapped: 671744 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:38.980353+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70574080 unmapped: 671744 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:39.980482+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70582272 unmapped: 663552 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:40.980614+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70582272 unmapped: 663552 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:41.980763+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70590464 unmapped: 655360 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:42.980929+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70590464 unmapped: 655360 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:43.981111+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70590464 unmapped: 655360 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:44.981424+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70598656 unmapped: 647168 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:45.981575+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70598656 unmapped: 647168 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:46.981719+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70606848 unmapped: 638976 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:47.981866+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70615040 unmapped: 630784 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:48.982002+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70615040 unmapped: 630784 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:49.982180+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70615040 unmapped: 630784 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:50.982359+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70623232 unmapped: 622592 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:51.982516+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70631424 unmapped: 614400 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:52.982709+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70631424 unmapped: 614400 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:53.982845+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70631424 unmapped: 614400 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:54.982997+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 606208 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:55.983144+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 606208 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:56.983321+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70647808 unmapped: 598016 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:57.983504+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70656000 unmapped: 589824 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:58.983774+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70656000 unmapped: 589824 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:59.983935+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70664192 unmapped: 581632 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:00.984136+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70664192 unmapped: 581632 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:01.984296+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70664192 unmapped: 581632 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:02.984377+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70672384 unmapped: 573440 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:03.984553+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70672384 unmapped: 573440 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:04.984744+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70672384 unmapped: 573440 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:05.984875+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70680576 unmapped: 565248 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:06.985040+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 548864 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:07.985320+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70705152 unmapped: 540672 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:08.985608+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70705152 unmapped: 540672 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:09.985769+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70705152 unmapped: 540672 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:10.985970+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70713344 unmapped: 532480 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:11.986182+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70713344 unmapped: 532480 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:12.986347+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70721536 unmapped: 524288 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:13.986562+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70721536 unmapped: 524288 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:14.986787+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70721536 unmapped: 524288 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:15.986966+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70737920 unmapped: 507904 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:16.987150+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70746112 unmapped: 499712 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:17.987313+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70754304 unmapped: 491520 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:18.987498+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70754304 unmapped: 491520 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:19.987670+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70754304 unmapped: 491520 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:20.987914+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70762496 unmapped: 483328 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:21.988042+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70762496 unmapped: 483328 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:22.988169+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70770688 unmapped: 475136 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:23.988318+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70770688 unmapped: 475136 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:24.988565+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70778880 unmapped: 466944 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:25.988735+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70795264 unmapped: 450560 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:26.988870+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70795264 unmapped: 450560 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:27.988985+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70803456 unmapped: 442368 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:28.989108+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70803456 unmapped: 442368 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:29.989244+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70803456 unmapped: 442368 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:30.989402+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70811648 unmapped: 434176 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:31.989552+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70811648 unmapped: 434176 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:32.989694+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70819840 unmapped: 425984 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:33.989836+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70819840 unmapped: 425984 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:34.989994+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70819840 unmapped: 425984 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:35.990143+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70828032 unmapped: 417792 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:36.990308+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70828032 unmapped: 417792 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:37.990428+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70828032 unmapped: 417792 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:38.990580+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70836224 unmapped: 409600 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:39.990743+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70836224 unmapped: 409600 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:40.990861+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70860800 unmapped: 385024 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:41.991045+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70852608 unmapped: 393216 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:42.991224+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70852608 unmapped: 393216 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:43.991393+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70860800 unmapped: 385024 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:44.991561+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70860800 unmapped: 385024 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:45.991736+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70860800 unmapped: 385024 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:46.991874+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70868992 unmapped: 376832 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:47.992015+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70868992 unmapped: 376832 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:48.992192+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70868992 unmapped: 376832 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:49.992380+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70877184 unmapped: 368640 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:50.992562+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70877184 unmapped: 368640 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:51.992709+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70885376 unmapped: 360448 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:52.992845+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70885376 unmapped: 360448 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:53.993017+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70885376 unmapped: 360448 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:54.993198+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70893568 unmapped: 352256 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:55.993341+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70893568 unmapped: 352256 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:56.993508+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70901760 unmapped: 344064 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:57.993692+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70901760 unmapped: 344064 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:58.993851+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70901760 unmapped: 344064 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:59.994032+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70909952 unmapped: 335872 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:00.994220+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70909952 unmapped: 335872 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:01.994436+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70918144 unmapped: 327680 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:02.994600+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70918144 unmapped: 327680 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:03.994733+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70918144 unmapped: 327680 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:04.994894+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70926336 unmapped: 319488 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:05.995031+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70926336 unmapped: 319488 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:06.995233+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70926336 unmapped: 319488 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:07.995374+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70934528 unmapped: 311296 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:08.995627+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70934528 unmapped: 311296 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:09.995797+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70942720 unmapped: 303104 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:10.995957+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70942720 unmapped: 303104 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:11.996108+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70942720 unmapped: 303104 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:12.996254+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70950912 unmapped: 294912 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:13.996414+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70950912 unmapped: 294912 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:14.996574+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70950912 unmapped: 294912 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:15.996747+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70959104 unmapped: 286720 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:16.996923+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70959104 unmapped: 286720 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:17.997048+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70967296 unmapped: 278528 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:18.997189+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70967296 unmapped: 278528 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:19.997421+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70967296 unmapped: 278528 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:20.997595+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70967296 unmapped: 278528 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:21.997925+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70967296 unmapped: 278528 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:22.998198+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70967296 unmapped: 278528 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:23.998489+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70975488 unmapped: 270336 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:24.998738+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70975488 unmapped: 270336 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:25.998935+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70983680 unmapped: 262144 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:26.999129+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70983680 unmapped: 262144 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:27.999286+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70983680 unmapped: 262144 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:28.999484+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70991872 unmapped: 253952 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:29.999625+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70991872 unmapped: 253952 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:30.999789+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71000064 unmapped: 245760 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:32.000069+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71000064 unmapped: 245760 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:33.000477+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71008256 unmapped: 237568 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:34.000723+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71008256 unmapped: 237568 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:35.000956+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71008256 unmapped: 237568 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:36.001143+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71016448 unmapped: 229376 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:37.001367+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71016448 unmapped: 229376 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:38.001488+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71016448 unmapped: 229376 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:39.001636+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71024640 unmapped: 221184 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:40.001806+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71024640 unmapped: 221184 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:41.001970+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71024640 unmapped: 221184 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:42.002095+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71032832 unmapped: 212992 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:43.002273+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71032832 unmapped: 212992 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:44.002373+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71041024 unmapped: 204800 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:45.002521+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71041024 unmapped: 204800 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:46.002622+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71049216 unmapped: 196608 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:47.002794+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71049216 unmapped: 196608 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:48.002971+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71049216 unmapped: 196608 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:49.003102+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71057408 unmapped: 188416 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:50.003234+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71057408 unmapped: 188416 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:51.003401+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71073792 unmapped: 172032 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:52.003556+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71073792 unmapped: 172032 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:53.003757+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71081984 unmapped: 163840 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:54.005700+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71081984 unmapped: 163840 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:55.005883+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71081984 unmapped: 163840 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:56.006015+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71081984 unmapped: 163840 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:57.006163+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71081984 unmapped: 163840 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:58.006390+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71090176 unmapped: 155648 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:59.006607+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71090176 unmapped: 155648 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:00.006805+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71098368 unmapped: 147456 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:01.007102+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71106560 unmapped: 139264 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:02.007243+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71106560 unmapped: 139264 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:03.007385+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71114752 unmapped: 131072 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:04.007538+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71114752 unmapped: 131072 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:05.007670+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71122944 unmapped: 122880 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:06.007793+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71122944 unmapped: 122880 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:07.007945+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71131136 unmapped: 114688 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:08.008097+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71131136 unmapped: 114688 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:09.008255+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71131136 unmapped: 114688 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:10.008419+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71139328 unmapped: 106496 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:11.008536+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71139328 unmapped: 106496 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:12.008759+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 98304 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:13.009109+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 98304 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:14.009290+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 98304 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:15.009507+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71155712 unmapped: 90112 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:16.009668+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71155712 unmapped: 90112 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:17.009819+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71155712 unmapped: 90112 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:18.009979+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71163904 unmapped: 81920 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:19.010133+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71163904 unmapped: 81920 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:20.010295+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 5531 writes, 23K keys, 5531 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 5531 writes, 838 syncs, 6.60 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5531 writes, 23K keys, 5531 commit groups, 1.0 writes per commit group, ingest: 18.48 MB, 0.03 MB/s
                                           Interval WAL: 5531 writes, 838 syncs, 6.60 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55660e36f4b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55660e36f4b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55660e36f4b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55660e36f4b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55660e36f4b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55660e36f4b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55660e36f4b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55660e36f350#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55660e36f350#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55660e36f350#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55660e36f4b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55660e36f4b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
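The stats dump above is RocksDB's periodic report for this OSD (Uptime 600.1 s total, 600.0 s interval), and its "DB Stats" header carries the headline write numbers: 5531 writes covering 23K keys, 5531 commit groups, 838 WAL syncs, roughly 18.5 MB ingested over the interval, and zero stalls. A minimal extraction sketch follows; it is a hypothetical script that assumes this exact dump format and simply pulls those cumulative counters out of each dump in a saved journal file.

    #!/usr/bin/env python3
    # Hypothetical extractor, not a Ceph/RocksDB tool: pull the headline
    # "DB Stats" counters out of each periodic dump in a saved journal file.
    import re
    import sys

    WRITES = re.compile(r"Cumulative writes: (\d+) writes, (\S+) keys")
    WAL = re.compile(r"Cumulative WAL: (\d+) writes, (\d+) syncs")
    STALL = re.compile(r"Cumulative stall: \S+ H:M:S, ([\d.]+) percent")

    def dumps(path):
        cur = {}
        for line in open(path, errors="replace"):
            for key, rx in (("writes", WRITES), ("wal", WAL), ("stall", STALL)):
                m = rx.search(line)
                if m:
                    cur[key] = m.groups() if len(m.groups()) > 1 else m.group(1)
            # The stall line is the last of the three in each dump, so once
            # all three are present we have one complete record.
            if {"writes", "wal", "stall"} <= cur.keys():
                yield cur
                cur = {}

    if __name__ == "__main__":
        for i, rec in enumerate(dumps(sys.argv[1]), 1):
            w, k = rec["writes"]
            wal_w, syncs = rec["wal"]
            print(f"dump {i}: {w} writes ({k} keys), "
                  f"{wal_w} WAL writes / {syncs} syncs, stall {rec['stall']}%")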
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 24576 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:21.010458+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 16384 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:22.010655+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 16384 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:23.010849+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71237632 unmapped: 8192 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:24.011102+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71237632 unmapped: 8192 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:25.011286+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71237632 unmapped: 8192 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:26.011412+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71245824 unmapped: 0 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:27.011538+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71245824 unmapped: 0 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:28.011660+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71245824 unmapped: 0 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:29.011769+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71254016 unmapped: 1040384 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:30.011888+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71254016 unmapped: 1040384 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:31.011989+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71254016 unmapped: 1040384 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:32.012115+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71262208 unmapped: 1032192 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:33.012243+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:34.012379+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71262208 unmapped: 1032192 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:35.012513+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71270400 unmapped: 1024000 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:36.012638+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71270400 unmapped: 1024000 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:37.012801+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71270400 unmapped: 1024000 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:38.012914+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71278592 unmapped: 1015808 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:39.018382+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71278592 unmapped: 1015808 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:40.018524+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71286784 unmapped: 1007616 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:41.018624+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71286784 unmapped: 1007616 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:42.018765+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71286784 unmapped: 1007616 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:43.018918+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71294976 unmapped: 999424 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:44.019049+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71294976 unmapped: 999424 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:45.019251+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71303168 unmapped: 991232 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:46.019415+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71303168 unmapped: 991232 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:47.019575+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71303168 unmapped: 991232 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:48.019744+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71311360 unmapped: 983040 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:49.019951+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71311360 unmapped: 983040 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:50.020114+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71319552 unmapped: 974848 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:51.020262+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71319552 unmapped: 974848 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:52.020458+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71335936 unmapped: 958464 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:53.020605+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71335936 unmapped: 958464 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:54.020763+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71335936 unmapped: 958464 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:55.021058+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71344128 unmapped: 950272 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:56.021231+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71344128 unmapped: 950272 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:57.021404+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71344128 unmapped: 950272 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:58.021532+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 942080 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:59.021771+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 942080 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:00.021961+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 942080 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:01.022728+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 933888 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:02.022893+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 933888 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:03.023062+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 933888 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:04.023252+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 925696 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:05.023450+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 925696 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:06.023646+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 925696 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:07.023807+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 917504 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:08.023925+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 917504 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:09.024094+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71385088 unmapped: 909312 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:10.024276+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71385088 unmapped: 909312 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:11.024491+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71385088 unmapped: 909312 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:12.024668+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 901120 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:13.024825+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 901120 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:14.024973+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 901120 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:15.025190+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71401472 unmapped: 892928 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:16.025402+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71401472 unmapped: 892928 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:17.025553+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71401472 unmapped: 892928 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:18.025704+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71409664 unmapped: 884736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:19.025844+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71409664 unmapped: 884736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:20.026004+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71417856 unmapped: 876544 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:21.026148+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71417856 unmapped: 876544 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:22.026269+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71417856 unmapped: 876544 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:23.026456+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71426048 unmapped: 868352 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:24.026594+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71426048 unmapped: 868352 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:25.026764+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71434240 unmapped: 860160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:26.027073+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 318.438812256s of 318.458435059s, submitted: 6
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71458816 unmapped: 835584 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:27.027250+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71426048 unmapped: 1916928 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:28.027455+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72622080 unmapped: 720896 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:29.027724+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72622080 unmapped: 720896 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:30.027881+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72622080 unmapped: 720896 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:31.028024+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72622080 unmapped: 720896 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:32.028238+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72622080 unmapped: 720896 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:33.029082+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72622080 unmapped: 720896 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:34.029241+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72622080 unmapped: 720896 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:35.029441+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72622080 unmapped: 720896 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:36.029620+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72622080 unmapped: 720896 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:37.030562+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72622080 unmapped: 720896 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:38.030705+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72630272 unmapped: 712704 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:39.030918+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72630272 unmapped: 712704 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:40.031124+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 704512 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:41.031294+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 704512 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:42.031465+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 704512 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:43.031605+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 696320 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:44.031813+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 696320 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:45.032024+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 688128 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:46.032179+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 688128 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:47.032307+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 688128 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:48.032369+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 679936 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:49.032554+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 679936 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:50.032678+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72671232 unmapped: 671744 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:51.032899+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72671232 unmapped: 671744 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:52.033012+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 655360 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:53.033154+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 655360 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:54.033304+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 655360 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:55.033499+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72695808 unmapped: 647168 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:56.033615+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72695808 unmapped: 647168 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:57.033718+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 638976 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:58.033903+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 638976 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:59.034068+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 638976 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:00.034295+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 630784 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:01.034479+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 630784 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:02.034685+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72728576 unmapped: 614400 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:03.034886+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72728576 unmapped: 614400 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:04.035126+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72736768 unmapped: 606208 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:05.035323+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72736768 unmapped: 606208 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:06.035522+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72736768 unmapped: 606208 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:07.035652+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72744960 unmapped: 598016 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:08.035763+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72744960 unmapped: 598016 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:09.035908+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72753152 unmapped: 589824 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:10.036035+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72753152 unmapped: 589824 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:11.036195+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72753152 unmapped: 589824 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:12.036398+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72761344 unmapped: 581632 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:13.036567+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72761344 unmapped: 581632 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:14.036697+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72769536 unmapped: 573440 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:15.036889+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72769536 unmapped: 573440 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:16.037058+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72769536 unmapped: 573440 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:17.037225+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72777728 unmapped: 565248 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:18.037416+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72777728 unmapped: 565248 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:19.037594+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 557056 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:20.037738+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 557056 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:21.037840+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 548864 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:22.037993+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 548864 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:23.038130+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 548864 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:24.038275+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 548864 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:25.038417+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 548864 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:26.038607+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 548864 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:27.038767+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 548864 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:28.038923+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 548864 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:29.039043+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 548864 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:30.039202+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 548864 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:31.039565+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 548864 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:32.039725+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 548864 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:33.039898+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 548864 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:34.040061+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 548864 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:35.040238+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 548864 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:36.040387+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 548864 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:37.040516+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72802304 unmapped: 540672 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:38.040698+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72802304 unmapped: 540672 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:39.040858+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72802304 unmapped: 540672 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:40.040989+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72802304 unmapped: 540672 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:41.041118+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72802304 unmapped: 540672 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:42.041272+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72818688 unmapped: 524288 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:43.041402+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72818688 unmapped: 524288 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:44.041512+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72818688 unmapped: 524288 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:45.041730+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72826880 unmapped: 516096 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:46.041893+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72826880 unmapped: 516096 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:47.042041+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72826880 unmapped: 516096 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:48.042212+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72826880 unmapped: 516096 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:49.042420+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72826880 unmapped: 516096 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:50.042596+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72826880 unmapped: 516096 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:51.042799+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72826880 unmapped: 516096 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:52.042975+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 507904 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:53.043141+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 507904 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:54.043268+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 507904 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:55.043383+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 507904 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:56.043497+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 507904 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:57.043648+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 507904 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:58.043775+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 507904 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:59.043922+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 507904 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:00.044044+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 507904 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:01.044182+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 507904 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:02.044368+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72851456 unmapped: 491520 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:03.044505+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72851456 unmapped: 491520 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:04.044711+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72851456 unmapped: 491520 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:05.044987+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72851456 unmapped: 491520 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:06.045136+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72851456 unmapped: 491520 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:07.045282+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72859648 unmapped: 483328 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:08.045464+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72859648 unmapped: 483328 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:09.045629+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72859648 unmapped: 483328 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:10.045789+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72859648 unmapped: 483328 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:11.045967+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72859648 unmapped: 483328 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:12.046496+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 475136 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:13.046643+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 475136 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:14.046773+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 475136 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:15.046943+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 475136 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:16.047063+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 475136 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:17.047217+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 475136 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:18.047617+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 475136 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:19.047737+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 475136 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:20.047887+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 475136 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:21.048050+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72892416 unmapped: 450560 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:22.048199+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72892416 unmapped: 450560 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:23.048357+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72892416 unmapped: 450560 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:24.048462+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72892416 unmapped: 450560 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:25.048659+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72892416 unmapped: 450560 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:26.048776+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72892416 unmapped: 450560 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:27.048903+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72892416 unmapped: 450560 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:28.049049+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72892416 unmapped: 450560 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:29.049292+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72892416 unmapped: 450560 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:30.049473+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72892416 unmapped: 450560 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:31.049678+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72892416 unmapped: 450560 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:32.049793+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72892416 unmapped: 450560 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:33.050002+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 442368 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:34.050213+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 442368 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:35.050418+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 442368 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:36.050555+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 442368 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:37.050871+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 442368 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:38.051166+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 442368 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:39.051308+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 442368 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:40.051496+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 442368 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:41.051666+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 425984 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:42.051785+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 425984 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:43.052048+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 425984 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:44.052213+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 425984 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:45.052434+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 425984 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:46.052600+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 425984 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:47.052749+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 425984 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:48.052866+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 425984 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:49.052988+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 425984 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:50.053423+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 425984 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:51.053621+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 425984 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:52.053820+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 425984 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:53.054034+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 425984 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:54.054247+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 425984 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:55.054435+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 425984 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:56.054706+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 425984 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:57.055047+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 425984 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:58.055207+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 425984 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:59.055405+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 425984 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:00.055603+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 425984 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:01.055761+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 409600 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:02.056003+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 409600 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:03.056177+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 409600 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:04.056387+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 409600 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:05.056592+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 409600 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:06.056721+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 401408 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:07.056857+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 401408 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:08.057023+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 401408 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:09.057181+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 401408 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:10.057385+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 401408 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:11.057556+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72957952 unmapped: 385024 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:12.057763+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72957952 unmapped: 385024 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:13.057974+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72957952 unmapped: 385024 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:14.058098+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72957952 unmapped: 385024 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:15.058290+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72957952 unmapped: 385024 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:16.058416+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72957952 unmapped: 385024 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:17.058539+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72957952 unmapped: 385024 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:18.058686+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72957952 unmapped: 385024 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:19.058830+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72957952 unmapped: 385024 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:20.058991+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72957952 unmapped: 385024 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:21.059159+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72974336 unmapped: 368640 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:22.059314+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72974336 unmapped: 368640 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:23.059480+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72974336 unmapped: 368640 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:24.059635+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72974336 unmapped: 368640 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:25.059820+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72974336 unmapped: 368640 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:26.059949+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72974336 unmapped: 368640 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:27.060096+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72974336 unmapped: 368640 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:28.060243+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72974336 unmapped: 368640 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:29.060395+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72974336 unmapped: 368640 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:30.060518+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72974336 unmapped: 368640 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:31.060660+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72974336 unmapped: 368640 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:32.060801+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72974336 unmapped: 368640 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:33.060936+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72974336 unmapped: 368640 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:34.061078+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72974336 unmapped: 368640 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:35.061285+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72974336 unmapped: 368640 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:36.061427+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72974336 unmapped: 368640 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:37.061605+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72974336 unmapped: 368640 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:38.061744+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72974336 unmapped: 368640 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:39.061919+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72974336 unmapped: 368640 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:40.062074+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72974336 unmapped: 368640 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:41.062242+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72990720 unmapped: 352256 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:42.062552+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72990720 unmapped: 352256 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:43.062870+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72990720 unmapped: 352256 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:44.063144+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72990720 unmapped: 352256 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:45.063401+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72990720 unmapped: 352256 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:46.063551+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 434176 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:47.063684+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 434176 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:48.063803+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 434176 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:49.063935+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 434176 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:50.064045+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 434176 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:51.064205+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 425984 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:52.064462+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 425984 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:53.064679+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 425984 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:54.064960+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 425984 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:55.065156+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 425984 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:56.065483+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 425984 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:57.065689+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 425984 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:58.065822+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 425984 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:59.066028+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:00.066250+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 425984 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:01.066544+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 425984 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:02.067064+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 409600 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:03.067355+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 409600 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:04.067479+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 409600 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:05.067779+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 409600 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:06.067956+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 409600 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:07.068152+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 409600 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:08.068424+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 409600 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:09.068613+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 409600 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:10.068789+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 409600 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:11.068957+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 409600 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:12.069166+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 409600 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:13.069427+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 409600 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:14.069627+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 409600 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:15.069999+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 409600 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:16.070134+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 409600 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:17.070275+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 409600 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:18.070394+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 409600 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:19.070645+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 409600 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:20.070947+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 409600 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:21.071235+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 409600 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:22.071480+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72957952 unmapped: 385024 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:23.071664+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72957952 unmapped: 385024 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:24.071843+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72957952 unmapped: 385024 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:25.072085+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72957952 unmapped: 385024 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:26.072232+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72957952 unmapped: 385024 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: mgrc ms_handle_reset ms_handle_reset con 0x55660f1bfc00
Oct 11 04:55:16 compute-0 ceph-osd[87458]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1439243141
Oct 11 04:55:16 compute-0 ceph-osd[87458]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1439243141,v1:192.168.122.100:6801/1439243141]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: get_auth_request con 0x5566106b7c00 auth_method 0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: mgrc handle_mgr_configure stats_period=5
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:27.072396+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:28.072537+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:29.072660+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:30.072801+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:31.072953+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:32.073085+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 ms_handle_reset con 0x55660fbb4000 session 0x556610b88000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x55661220f000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:33.073241+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:34.073437+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:35.073624+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:36.073812+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:37.073964+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:38.074092+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:39.074239+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:40.074419+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:41.074549+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:42.074735+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:43.074934+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:44.075097+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:45.075281+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:46.075389+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:47.075486+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:48.075603+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:49.075740+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:50.075917+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:51.076087+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:52.076278+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:53.076434+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:54.076557+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:55.076724+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:56.076862+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:57.077038+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:58.077247+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:59.077402+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:00.077686+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:01.077895+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:02.078067+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:03.078244+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:04.078390+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:05.078553+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:06.078765+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:07.079075+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:08.079238+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:09.079683+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:10.079984+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:11.080180+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:12.080363+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:13.080594+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:14.080746+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump"} v 0) v1
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:15.081063+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3989414188' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:16.081213+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:17.081412+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:18.081583+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:19.081893+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:20.082225+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:21.082513+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:22.082714+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:23.082970+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:24.083185+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:25.083390+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:26.083616+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:27.083774+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:28.083942+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:29.084126+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:30.084310+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:31.084534+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:32.084689+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:33.084855+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:34.084985+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:35.085167+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:36.085282+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:37.085500+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:38.085637+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:39.085787+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:40.085952+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:41.086073+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:42.086218+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:43.086440+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:44.086552+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:45.086693+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:46.086894+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:47.087008+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:48.087126+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:49.087398+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:50.087575+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:51.087766+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:52.087917+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:53.088085+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:54.088218+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:55.088417+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:56.088603+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:57.088760+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:58.088889+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:59.089041+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:00.089170+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:01.089310+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:02.089478+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:03.089679+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:04.089862+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:05.090043+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:06.090208+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:07.090415+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 57344 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:08.090593+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 57344 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:09.090752+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 57344 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:10.090942+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 57344 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:11.091101+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 57344 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:12.091258+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 57344 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:13.091473+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 57344 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:14.091595+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 57344 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:15.091744+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 57344 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:16.091896+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 57344 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:17.092039+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 57344 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:18.092182+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 57344 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:19.092366+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 57344 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:20.092503+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 57344 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:21.092734+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 57344 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:22.092865+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 57344 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:23.092992+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 57344 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:24.093135+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 57344 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:25.093398+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 57344 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:26.093587+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:27.093732+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:28.093852+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:29.094035+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:30.094156+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:31.094280+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:32.094424+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:33.094582+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:34.094727+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:35.094914+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:36.095002+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:37.095165+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:38.095324+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:39.095550+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:40.095710+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:41.095878+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:42.096036+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:43.096255+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:44.096419+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:45.096655+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:46.096825+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:47.097022+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:48.097171+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:49.097471+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:50.097614+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:51.097780+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:52.097921+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:53.098079+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:54.098242+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:55.098435+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:56.098589+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:57.098725+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:58.098878+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:59.099021+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:00.099168+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:01.099322+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:02.099499+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:03.099687+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:04.099876+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:05.100069+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:06.100220+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:07.100430+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:08.100570+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:09.100764+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:10.100901+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:11.101078+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:12.101245+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:13.101394+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:14.101552+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:15.101712+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:16.101832+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:17.101977+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:18.102108+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:19.102241+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:20.102396+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:21.102595+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:22.102752+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:23.102910+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:24.103018+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:25.103228+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:26.103406+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:27.103555+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:28.103796+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:29.104007+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 16384 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:30.104182+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 16384 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:31.104415+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 16384 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:32.104535+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 16384 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:33.104700+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 16384 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:34.104892+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 16384 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:35.105092+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 16384 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:36.105258+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 16384 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:37.105423+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 16384 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:38.105603+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 16384 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:39.105756+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 16384 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:40.105871+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 16384 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:41.106001+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73334784 unmapped: 8192 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:42.106162+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73334784 unmapped: 8192 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:43.106302+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73334784 unmapped: 8192 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:44.106451+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73334784 unmapped: 8192 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:45.106614+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73334784 unmapped: 8192 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:46.106746+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 0 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:47.106917+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 0 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:48.107062+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 0 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:49.107229+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 0 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:50.107392+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 0 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:51.107610+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 0 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:52.107729+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 0 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:53.107876+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 0 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:54.108045+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 0 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:55.108170+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 0 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:56.108311+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 0 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:57.108523+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 0 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:58.108657+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 0 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:59.108791+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 0 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:00.108938+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 0 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:01.109090+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 0 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:02.109217+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 0 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:03.109397+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 0 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:04.109521+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 0 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:05.109689+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 0 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:06.109835+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 1032192 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:07.109967+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 1032192 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:08.110123+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 1032192 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:09.110293+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 1032192 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:10.110410+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 1032192 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:11.110535+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 1032192 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:12.110736+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 1032192 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:13.111407+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 1032192 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:14.111515+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 1032192 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:15.112456+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 1032192 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:16.112694+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 1032192 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:17.112857+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 1032192 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:18.113558+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 1032192 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:19.114092+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 1032192 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:20.114202+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 1032192 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:21.114314+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 1032192 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:22.114453+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 1032192 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:23.114604+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 1032192 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:24.114775+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 1032192 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:25.115253+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 1032192 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:26.115389+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 999424 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:27.115615+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 999424 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:28.115819+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 999424 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:29.115957+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 999424 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:30.116104+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 999424 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:31.116414+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 999424 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:32.116547+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 999424 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:33.116687+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 999424 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:34.116866+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 999424 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:35.117132+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 999424 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:36.117301+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 999424 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:37.117481+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 999424 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:38.117722+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 999424 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:39.117893+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 999424 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:40.118042+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 999424 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:41.118180+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 999424 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:42.118294+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 999424 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:43.118381+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 999424 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:44.118750+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 999424 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:45.119365+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 999424 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:46.119522+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 999424 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:47.119719+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 999424 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:48.120101+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 999424 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:49.120233+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 999424 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:50.120491+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 999424 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:51.120683+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 991232 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:52.120899+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 991232 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:53.121232+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 991232 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:54.121566+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 991232 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:55.121745+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 991232 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:56.121970+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 991232 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:57.122157+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 991232 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:58.122380+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 991232 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:59.122636+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:00.122848+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 991232 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:01.123014+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 991232 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:02.123262+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 991232 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:03.123507+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 991232 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:04.123671+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 991232 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:05.123876+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 991232 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:06.124067+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 991232 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:07.124280+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 991232 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:08.124489+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 991232 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:09.124725+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 991232 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:10.124891+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 991232 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:11.125067+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 991232 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:12.125245+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 991232 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:13.125408+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 991232 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:14.125543+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 991232 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:15.125795+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 991232 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:16.125954+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 991232 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:17.126125+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 974848 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:18.126273+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 974848 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:19.126535+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 974848 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:20.126682+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 974848 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 5743 writes, 24K keys, 5743 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5743 writes, 944 syncs, 6.08 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 212 writes, 318 keys, 212 commit groups, 1.0 writes per commit group, ingest: 0.11 MB, 0.00 MB/s
                                           Interval WAL: 212 writes, 106 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55660e36f4b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55660e36f4b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55660e36f4b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55660e36f4b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55660e36f4b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55660e36f4b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55660e36f4b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55660e36f350#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55660e36f350#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55660e36f350#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55660e36f4b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55660e36f4b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:21.126816+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 974848 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:22.126968+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 974848 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:23.127167+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 974848 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:24.127365+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 974848 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:25.127634+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 974848 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:26.127823+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 974848 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:27.128006+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 974848 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:28.128162+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 974848 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:29.128378+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 974848 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:30.128553+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 974848 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:31.128737+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 974848 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:32.128950+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 974848 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:33.129129+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 974848 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:34.129307+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 974848 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:35.129542+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 974848 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:36.129688+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 974848 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:37.129821+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 950272 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:38.129980+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 950272 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:39.130131+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 950272 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:40.130234+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 950272 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:41.130379+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 950272 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:42.130510+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 950272 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:43.130731+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 950272 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:44.130905+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 950272 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:45.131187+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 950272 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:46.131400+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 950272 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:47.131596+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 950272 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:48.131717+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 950272 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:49.131897+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 950272 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:50.132066+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 950272 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:51.132174+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 950272 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:52.132364+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 950272 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:53.132526+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 950272 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:54.132664+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 950272 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:55.133310+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 950272 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:56.133461+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 950272 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:57.133663+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 950272 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:58.133796+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 950272 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:59.133969+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 950272 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:00.134148+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 950272 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:01.134312+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 950272 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:02.134537+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 950272 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:03.134712+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 950272 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:04.134868+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 950272 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:05.135084+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 950272 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:06.135246+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 950272 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:07.135409+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 950272 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:08.135551+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 950272 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:09.135753+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 950272 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:10.135919+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 950272 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:11.136096+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 950272 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:12.136270+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 950272 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:13.136429+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 950272 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:14.136577+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 950272 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:15.136744+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 950272 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:16.136934+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 950272 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:17.137136+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 950272 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:18.137293+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 950272 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:19.137521+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 950272 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:20.137722+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 950272 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:21.137915+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 950272 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:22.138065+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 950272 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:23.138217+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 950272 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:24.138369+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 950272 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:25.138641+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 950272 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:26.138812+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 599.728332520s of 600.150024414s, submitted: 106
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 917504 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:27.139036+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1064960 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:28.139216+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73367552 unmapped: 1024000 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:29.139464+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73367552 unmapped: 1024000 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:30.139626+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73367552 unmapped: 1024000 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:31.139816+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73367552 unmapped: 1024000 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:32.139962+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73383936 unmapped: 1007616 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:33.140167+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73383936 unmapped: 1007616 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:34.140367+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73383936 unmapped: 1007616 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:35.140553+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73383936 unmapped: 1007616 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:36.140733+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73383936 unmapped: 1007616 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:37.140915+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73383936 unmapped: 1007616 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:38.141103+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73383936 unmapped: 1007616 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:39.141290+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73383936 unmapped: 1007616 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:40.141534+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73383936 unmapped: 1007616 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:41.141731+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73383936 unmapped: 1007616 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:42.141902+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73383936 unmapped: 1007616 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:43.142033+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73383936 unmapped: 1007616 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:44.142191+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73383936 unmapped: 1007616 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:45.142463+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73383936 unmapped: 1007616 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:46.142645+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 991232 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:47.142780+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 991232 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:48.142957+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 991232 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:49.143131+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 991232 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:50.143298+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 991232 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:51.143453+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 991232 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:52.143610+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 991232 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:53.143803+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 991232 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:54.144067+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 991232 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:55.144285+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 991232 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:56.144542+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 991232 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:57.144875+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 991232 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:58.145042+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 991232 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:59.145167+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 991232 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:00.145310+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 991232 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:01.145543+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 991232 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:02.145667+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 991232 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:03.145877+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 991232 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:04.146011+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 991232 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:05.146158+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 991232 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:06.146323+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 974848 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:07.146538+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 974848 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:08.146755+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 974848 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:09.146949+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 974848 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:10.147087+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1031: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 974848 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:11.147220+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 974848 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:12.147407+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 974848 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:13.147700+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 974848 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:14.147897+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 974848 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:15.148098+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 974848 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:16.148261+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 974848 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:17.148458+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 974848 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:18.148675+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 974848 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:19.148859+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 974848 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:20.149034+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 974848 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:21.149219+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 974848 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:22.149433+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 974848 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:23.149569+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 974848 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:24.149711+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 974848 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:25.149912+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 974848 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:26.150046+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 958464 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:27.150209+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 958464 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:28.150344+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 958464 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:29.150548+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 958464 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:30.150711+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 958464 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:31.150876+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 958464 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:32.151032+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 958464 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:33.151187+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 958464 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:34.151370+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 958464 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:35.151534+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 958464 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:36.151629+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 958464 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:37.151777+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 958464 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:38.151932+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 958464 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:39.152159+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 958464 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:40.152287+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 958464 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:41.152482+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 958464 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:42.152677+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 958464 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:43.152819+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 958464 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:44.152970+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 958464 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:45.153148+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 958464 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:46.153324+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 942080 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:47.153517+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 942080 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:48.153657+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 942080 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:49.153837+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 942080 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:50.153986+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 942080 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:51.154144+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 942080 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:52.154451+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 942080 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:53.154649+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 942080 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:54.154827+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 942080 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:55.155009+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 942080 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:56.155138+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 942080 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:57.155749+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 942080 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:58.156687+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 942080 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:59.157015+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 942080 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:00.157389+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 942080 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:01.158929+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 942080 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:02.159436+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 942080 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:03.159716+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 942080 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:04.160452+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 942080 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:05.163136+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 942080 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:06.163315+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73465856 unmapped: 925696 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:07.163765+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73465856 unmapped: 925696 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:08.163903+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73465856 unmapped: 925696 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:09.164054+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73465856 unmapped: 925696 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:10.164499+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73465856 unmapped: 925696 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:11.164686+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73465856 unmapped: 925696 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:12.164836+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73465856 unmapped: 925696 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:13.165066+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73465856 unmapped: 925696 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:14.165246+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73465856 unmapped: 925696 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:15.165476+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73465856 unmapped: 925696 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:16.165668+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73465856 unmapped: 925696 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:17.165823+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73465856 unmapped: 925696 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:18.165996+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73465856 unmapped: 925696 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:19.166237+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73465856 unmapped: 925696 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:20.166450+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73465856 unmapped: 925696 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:21.166798+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73465856 unmapped: 925696 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:22.166957+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73465856 unmapped: 925696 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:23.167175+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73465856 unmapped: 925696 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:24.167377+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73465856 unmapped: 925696 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:25.167777+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73465856 unmapped: 925696 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:26.167957+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 884736 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:27.168108+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 884736 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:28.168428+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 884736 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:29.168589+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 884736 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:30.168812+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 884736 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:31.169030+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 884736 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:32.169213+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 884736 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:33.169435+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 884736 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:34.169579+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 884736 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:35.169787+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 884736 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:36.169941+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:37.170058+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 884736 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:38.170256+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 884736 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:39.170432+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 884736 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:40.170611+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 884736 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:41.170786+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 884736 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:42.170929+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 884736 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:43.171098+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 884736 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:44.171269+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 884736 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:45.171529+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 884736 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:46.171720+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 884736 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:47.171872+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 868352 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:48.172035+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 868352 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:49.172231+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 868352 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:50.172433+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 868352 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:51.172663+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 868352 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:52.172850+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 868352 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:53.173044+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 868352 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:54.173244+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 868352 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:55.173642+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 868352 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:56.173841+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 868352 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:57.174060+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 868352 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:58.174250+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 868352 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:59.174460+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 868352 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:00.174620+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 868352 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:01.174807+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 868352 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:02.175603+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 868352 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:03.176722+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 868352 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:04.177416+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 868352 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:05.177970+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 868352 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:06.178712+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 868352 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:07.179409+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 851968 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:08.180071+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 851968 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:09.180724+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 851968 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:10.181226+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 851968 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:11.181622+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 851968 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:12.182009+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 851968 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:13.182431+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 851968 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:14.182815+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 851968 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:15.183194+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 851968 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:16.183344+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 851968 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:17.183630+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 851968 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:18.183824+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 851968 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:19.184108+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 851968 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:20.184401+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 851968 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:21.184620+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 851968 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:22.184809+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 851968 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:23.185022+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 851968 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:24.185241+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 851968 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:25.185489+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 851968 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:26.185706+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73555968 unmapped: 835584 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:27.185919+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73572352 unmapped: 819200 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:28.186156+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73572352 unmapped: 819200 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:29.186390+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73572352 unmapped: 819200 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:30.186597+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73572352 unmapped: 819200 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:31.186829+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73572352 unmapped: 819200 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:32.187035+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73572352 unmapped: 819200 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:33.187230+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73572352 unmapped: 819200 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:34.187392+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73572352 unmapped: 819200 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:35.187618+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73572352 unmapped: 819200 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:36.187825+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73572352 unmapped: 819200 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:37.188010+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73572352 unmapped: 819200 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:38.188180+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73572352 unmapped: 819200 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:39.188402+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73572352 unmapped: 819200 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:40.188600+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73572352 unmapped: 819200 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:41.188838+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73572352 unmapped: 819200 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:42.189013+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73572352 unmapped: 819200 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:43.189196+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73572352 unmapped: 819200 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:44.189411+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73572352 unmapped: 819200 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:45.189676+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73572352 unmapped: 819200 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:46.189828+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73572352 unmapped: 819200 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:47.190015+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73588736 unmapped: 802816 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:48.190189+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73588736 unmapped: 802816 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:49.190426+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73588736 unmapped: 802816 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:50.190615+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73588736 unmapped: 802816 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:51.190751+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73588736 unmapped: 802816 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:52.190905+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73588736 unmapped: 802816 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:53.191110+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73588736 unmapped: 802816 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:54.191290+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73588736 unmapped: 802816 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:55.191534+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73588736 unmapped: 802816 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:56.191662+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73588736 unmapped: 802816 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:57.191827+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73588736 unmapped: 802816 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:58.191995+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73588736 unmapped: 802816 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:59.192234+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73588736 unmapped: 802816 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:00.192394+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73588736 unmapped: 802816 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:01.192627+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73588736 unmapped: 802816 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:02.192904+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73588736 unmapped: 802816 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:03.193136+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73588736 unmapped: 802816 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:04.193317+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73588736 unmapped: 802816 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:05.193588+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73588736 unmapped: 802816 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:06.193750+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73588736 unmapped: 802816 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:07.193948+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73605120 unmapped: 786432 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:08.194129+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73605120 unmapped: 786432 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:09.194448+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73605120 unmapped: 786432 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:10.194720+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73605120 unmapped: 786432 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:11.194975+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73605120 unmapped: 786432 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:12.195169+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73605120 unmapped: 786432 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:13.195412+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73605120 unmapped: 786432 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:14.195584+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73605120 unmapped: 786432 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:15.195824+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73605120 unmapped: 786432 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:16.196011+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73605120 unmapped: 786432 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:17.196203+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73605120 unmapped: 786432 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:18.196451+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73605120 unmapped: 786432 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:19.196683+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73605120 unmapped: 786432 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:20.196882+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73605120 unmapped: 786432 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:21.197056+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73605120 unmapped: 786432 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:22.197253+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73605120 unmapped: 786432 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:23.197436+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73605120 unmapped: 786432 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:24.197579+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73605120 unmapped: 786432 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:25.197806+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73605120 unmapped: 786432 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:26.197924+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _renew_subs
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 114 handle_osd_map epochs [115,115], i have 114, src has [1,115]
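This is the first map update in the capture: handle_osd_map epochs [115,115], i have 114, src has [1,115] means the monitor delivered OSDMap epoch 115, the OSD was still on epoch 114, and the sender holds the full history 1..115. Over the following lines the OSD steps forward through epochs 115-119 the same way. A tiny sketch, with the format taken from the line itself, that reports how far behind the OSD is:

    import re

    line = "osd.0 114 handle_osd_map epochs [115,115], i have 114, src has [1,115]"
    m = re.search(r"epochs \[(\d+),(\d+)\], i have (\d+), src has \[(\d+),(\d+)\]", line)
    first, last, have, src_first, src_last = map(int, m.groups())

    print(f"received epochs {first}..{last}; local epoch {have}; "
          f"sender holds {src_first}..{src_last}")
    print(f"OSD is {src_last - have} epoch(s) behind the sender")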
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 239.658248901s of 240.028640747s, submitted: 106
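The _kv_sync_thread utilization line summarizes how busy BlueStore's RocksDB sync thread was over its reporting interval: idle for 239.66 s of 240.03 s while committing 106 transactions, i.e. well under 1% busy on this mostly idle OSD. The arithmetic, spelled out:

    idle_s = 239.658248901      # idle time reported for the interval
    interval_s = 240.028640747  # length of the reporting interval
    submitted = 106             # transactions submitted during the interval

    busy_s = interval_s - idle_s
    print(f"busy {busy_s:.3f} s of {interval_s:.3f} s ({busy_s / interval_s:.3%}); "
          f"{submitted / interval_s:.2f} commits/s on average")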
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac0000/0x0/0x4ffc00000, data 0xae69c/0x15d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73662464 unmapped: 729088 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac0000/0x0/0x4ffc00000, data 0xae69c/0x15d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x5566127b5400
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _renew_subs
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 115 handle_osd_map epochs [116,116], i have 115, src has [1,116]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:27.198081+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 74850304 unmapped: 589824 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 116 handle_osd_map epochs [116,117], i have 116, src has [1,117]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 117 ms_handle_reset con 0x5566127b5400 session 0x55661291a3c0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:28.198218+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x55660fd70800
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 1490944 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:29.198404+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 74113024 unmapped: 18112512 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 117 handle_osd_map epochs [117,118], i have 117, src has [1,118]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 118 ms_handle_reset con 0x55660fd70800 session 0x55661291a960
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:30.198599+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 74129408 unmapped: 18096128 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 929021 data_alloc: 218103808 data_used: 135168
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:31.198817+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 74129408 unmapped: 18096128 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fc2b2000/0x0/0x4ffc00000, data 0x8b3a3b/0x96b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:32.198986+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 74145792 unmapped: 18079744 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:33.199185+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 74145792 unmapped: 18079744 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:34.199393+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 74145792 unmapped: 18079744 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:35.199657+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 74145792 unmapped: 18079744 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 929181 data_alloc: 218103808 data_used: 139264
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:36.199857+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fc2b2000/0x0/0x4ffc00000, data 0x8b3a3b/0x96b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 118 handle_osd_map epochs [119,119], i have 118, src has [1,119]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.908299446s of 10.049924850s, submitted: 37
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 74153984 unmapped: 18071552 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:37.200067+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 74153984 unmapped: 18071552 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:38.200319+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 74153984 unmapped: 18071552 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:39.200559+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 119 heartbeat osd_stat(store_statfs(0x4fc2af000/0x0/0x4ffc00000, data 0x8b549e/0x96e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 74153984 unmapped: 18071552 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:40.200750+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 74153984 unmapped: 18071552 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 931979 data_alloc: 218103808 data_used: 147456
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:41.200897+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 74153984 unmapped: 18071552 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:42.201002+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 74153984 unmapped: 18071552 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:43.201191+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 74153984 unmapped: 18071552 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 119 heartbeat osd_stat(store_statfs(0x4fc2af000/0x0/0x4ffc00000, data 0x8b549e/0x96e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:44.201394+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 74153984 unmapped: 18071552 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:45.201628+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 74153984 unmapped: 18071552 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:46.201792+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 931979 data_alloc: 218103808 data_used: 147456
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 18055168 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 119 heartbeat osd_stat(store_statfs(0x4fc2af000/0x0/0x4ffc00000, data 0x8b549e/0x96e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:47.201963+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 18055168 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:48.202112+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 18055168 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:49.202268+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 18055168 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:50.202412+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 18055168 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:51.202579+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 931979 data_alloc: 218103808 data_used: 147456
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 119 heartbeat osd_stat(store_statfs(0x4fc2af000/0x0/0x4ffc00000, data 0x8b549e/0x96e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 18055168 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:52.202710+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 74178560 unmapped: 18046976 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:53.202877+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 74178560 unmapped: 18046976 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:54.203064+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 74178560 unmapped: 18046976 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:55.203266+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 119 heartbeat osd_stat(store_statfs(0x4fc2af000/0x0/0x4ffc00000, data 0x8b549e/0x96e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 74178560 unmapped: 18046976 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:56.203418+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 931979 data_alloc: 218103808 data_used: 147456
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 74178560 unmapped: 18046976 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:57.203604+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 74178560 unmapped: 18046976 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:58.203756+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 119 heartbeat osd_stat(store_statfs(0x4fc2af000/0x0/0x4ffc00000, data 0x8b549e/0x96e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 74162176 unmapped: 18063360 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:59.203949+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 74162176 unmapped: 18063360 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:00.204073+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 74162176 unmapped: 18063360 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:01.204268+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 931979 data_alloc: 218103808 data_used: 147456
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 74162176 unmapped: 18063360 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:02.204422+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 74162176 unmapped: 18063360 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:03.204510+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 119 heartbeat osd_stat(store_statfs(0x4fc2af000/0x0/0x4ffc00000, data 0x8b549e/0x96e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 74162176 unmapped: 18063360 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:04.204668+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x55661296e000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 119 ms_handle_reset con 0x55661296e000 session 0x55661291b680
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 74186752 unmapped: 18038784 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x55660fd70800
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _renew_subs
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 119 handle_osd_map epochs [120,120], i have 119, src has [1,120]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 28.743133545s of 28.755020142s, submitted: 9
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:05.204854+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 120 ms_handle_reset con 0x55660fd70800 session 0x55661291b860
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 74194944 unmapped: 18030592 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x55660fd71400
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _renew_subs
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 120 handle_osd_map epochs [121,121], i have 120, src has [1,121]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:06.205017+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942124 data_alloc: 218103808 data_used: 147456
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _renew_subs
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 121 handle_osd_map epochs [122,122], i have 121, src has [1,122]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 122 ms_handle_reset con 0x55660fd71400 session 0x55661291ba40
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 17940480 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:07.205168+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 17940480 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 122 handle_osd_map epochs [123,123], i have 122, src has [1,123]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 123 heartbeat osd_stat(store_statfs(0x4fc2a3000/0x0/0x4ffc00000, data 0x8bac96/0x978000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:08.205278+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x5566129a5c00
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 123 ms_handle_reset con 0x5566129a5c00 session 0x5566129c8b40
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 74317824 unmapped: 17907712 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 123 handle_osd_map epochs [124,124], i have 123, src has [1,124]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:09.205451+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 124 handle_osd_map epochs [124,125], i have 124, src has [1,125]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 17899520 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:10.205569+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 17899520 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:11.205706+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956266 data_alloc: 218103808 data_used: 151552
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 17899520 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fc299000/0x0/0x4ffc00000, data 0x8bff1d/0x982000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:12.205966+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 17899520 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x5566129a5800
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:13.206187+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x5566129a5400
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 17784832 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _renew_subs
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 125 handle_osd_map epochs [126,126], i have 125, src has [1,126]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 126 ms_handle_reset con 0x5566129a5800 session 0x5566129c8f00
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 126 ms_handle_reset con 0x5566129a5400 session 0x556612988780
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:14.206435+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x55660fd70800
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 74473472 unmapped: 17752064 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:15.206636+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _renew_subs
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 126 handle_osd_map epochs [127,127], i have 126, src has [1,127]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.967695236s of 10.158306122s, submitted: 44
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 127 ms_handle_reset con 0x55660fd70800 session 0x5566129c94a0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x55660fd71400
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 74514432 unmapped: 17711104 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x5566129a5800
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _renew_subs
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 127 handle_osd_map epochs [128,128], i have 127, src has [1,128]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 128 heartbeat osd_stat(store_statfs(0x4fc290000/0x0/0x4ffc00000, data 0x8c3b28/0x98c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 128 ms_handle_reset con 0x55660fd71400 session 0x5566129c9a40
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:16.206777+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976629 data_alloc: 218103808 data_used: 159744
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 128 ms_handle_reset con 0x5566129a5800 session 0x556612988f00
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x5566129a5c00
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 17661952 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x5566129a5000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _renew_subs
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 128 handle_osd_map epochs [129,129], i have 128, src has [1,129]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 128 handle_osd_map epochs [129,129], i have 129, src has [1,129]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x5566129a4c00
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 129 ms_handle_reset con 0x5566129a4c00 session 0x55660f8e6d20
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:17.207020+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x55661220f400
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 129 ms_handle_reset con 0x55661220f400 session 0x55660fec3a40
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 129 ms_handle_reset con 0x5566129a5c00 session 0x556612988780
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 91586560 unmapped: 9035776 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x55660fd70800
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 129 handle_osd_map epochs [129,130], i have 129, src has [1,130]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:18.207205+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 130 ms_handle_reset con 0x55660fd70800 session 0x55660fec3680
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 24666112 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x55660fd71400
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 130 handle_osd_map epochs [130,131], i have 130, src has [1,131]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:19.207423+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 131 ms_handle_reset con 0x5566129a5000 session 0x5566129894a0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 131 ms_handle_reset con 0x55660fd71400 session 0x556612822b40
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 24649728 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:20.207797+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x5566129a4c00
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _renew_subs
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 131 handle_osd_map epochs [132,132], i have 131, src has [1,132]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x5566129a5800
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 24649728 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 132 handle_osd_map epochs [132,133], i have 132, src has [1,133]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:21.207946+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1264389 data_alloc: 218103808 data_used: 172032
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 133 heartbeat osd_stat(store_statfs(0x4f9a82000/0x0/0x4ffc00000, data 0x30cbf8d/0x319b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 133 ms_handle_reset con 0x5566129a5800 session 0x556612989a40
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 133 ms_handle_reset con 0x5566129a4c00 session 0x5566128230e0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 77004800 unmapped: 23617536 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x55660fd70800
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x55660fd71400
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _renew_subs
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 133 handle_osd_map epochs [134,134], i have 133, src has [1,134]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:22.208113+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 134 ms_handle_reset con 0x55660fd70800 session 0x5566128234a0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 77045760 unmapped: 23576576 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:23.208488+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 134 handle_osd_map epochs [134,135], i have 134, src has [1,135]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 135 ms_handle_reset con 0x55660fd71400 session 0x556611d4cd20
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc27e000/0x0/0x4ffc00000, data 0x8cf326/0x99c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 23486464 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:24.208641+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 23486464 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:25.208906+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc27e000/0x0/0x4ffc00000, data 0x8d0f14/0x99e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 135 handle_osd_map epochs [136,136], i have 135, src has [1,136]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.198155403s of 10.192216873s, submitted: 203
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 77152256 unmapped: 23470080 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:26.209066+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x5566129a5000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001658 data_alloc: 218103808 data_used: 180224
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 136 handle_osd_map epochs [136,137], i have 136, src has [1,137]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 137 ms_handle_reset con 0x5566129a5000 session 0x5566129c94a0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 77152256 unmapped: 23470080 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:27.209231+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 77152256 unmapped: 23470080 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:28.209443+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 77152256 unmapped: 23470080 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:29.209594+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc278000/0x0/0x4ffc00000, data 0x8d45bc/0x9a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x5566129a5c00
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _renew_subs
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 137 handle_osd_map epochs [138,138], i have 137, src has [1,138]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 22388736 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 138 ms_handle_reset con 0x5566129a5c00 session 0x55661291a960
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 138 handle_osd_map epochs [138,139], i have 138, src has [1,139]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:30.209785+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x5566129a5c00
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 22388736 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:31.209941+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _renew_subs
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1017062 data_alloc: 218103808 data_used: 188416
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 140 ms_handle_reset con 0x5566129a5c00 session 0x55661291b680
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x55660fd70800
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 140 ms_handle_reset con 0x55660fd70800 session 0x556612989a40
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x55660fd71400
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 22323200 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:32.210078+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 140 handle_osd_map epochs [141,141], i have 140, src has [1,141]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78331904 unmapped: 22290432 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 141 handle_osd_map epochs [141,142], i have 141, src has [1,142]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x5566129a4c00
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:33.210211+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 142 ms_handle_reset con 0x5566129a4c00 session 0x556610b88000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 142 ms_handle_reset con 0x55660fd71400 session 0x5566129885a0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x5566129a5000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 142 ms_handle_reset con 0x5566129a5000 session 0x556612989680
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78348288 unmapped: 22274048 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:34.210401+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc267000/0x0/0x4ffc00000, data 0x8dd011/0x9b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78348288 unmapped: 22274048 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:35.210578+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc267000/0x0/0x4ffc00000, data 0x8dd011/0x9b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78348288 unmapped: 22274048 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:36.210761+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1023257 data_alloc: 218103808 data_used: 192512
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78356480 unmapped: 22265856 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:37.210908+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78356480 unmapped: 22265856 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:38.211035+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78356480 unmapped: 22265856 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:39.211197+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc267000/0x0/0x4ffc00000, data 0x8dd011/0x9b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78356480 unmapped: 22265856 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 142 handle_osd_map epochs [142,143], i have 142, src has [1,143]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.400314331s of 14.649963379s, submitted: 66
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:40.212387+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc265000/0x0/0x4ffc00000, data 0x8deaac/0x9b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78356480 unmapped: 22265856 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:41.212516+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1025367 data_alloc: 218103808 data_used: 192512
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 22347776 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x5566129a5000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 143 ms_handle_reset con 0x5566129a5000 session 0x55660fec2f00
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:42.212703+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x55660fd70800
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 143 ms_handle_reset con 0x55660fd70800 session 0x55660fec30e0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78266368 unmapped: 22355968 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:43.212872+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x55660fd71400
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 143 ms_handle_reset con 0x55660fd71400 session 0x556611d4cd20
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x5566129a4c00
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 22331392 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:44.213027+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 143 handle_osd_map epochs [144,144], i have 143, src has [1,144]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 22323200 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _renew_subs
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 144 handle_osd_map epochs [145,145], i have 144, src has [1,145]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:45.213190+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 145 ms_handle_reset con 0x5566129a4c00 session 0x556612822b40
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fc262000/0x0/0x4ffc00000, data 0x8e0629/0x9bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 22298624 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:46.213383+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1035270 data_alloc: 218103808 data_used: 192512
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 22298624 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:47.213548+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 22298624 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:48.213705+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 22298624 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:49.213872+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x5566129a5c00
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 22323200 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fc25d000/0x0/0x4ffc00000, data 0x8e221d/0x9bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _renew_subs
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 145 handle_osd_map epochs [146,146], i have 145, src has [1,146]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.932152748s of 10.016571999s, submitted: 44
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x55661220ec00
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:50.214007+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 146 ms_handle_reset con 0x5566129a5c00 session 0x5566128234a0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 22323200 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fc25d000/0x0/0x4ffc00000, data 0x8e221d/0x9bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _renew_subs
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 146 handle_osd_map epochs [147,147], i have 146, src has [1,147]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 147 ms_handle_reset con 0x55661220ec00 session 0x55660f8e72c0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:51.214143+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1042827 data_alloc: 218103808 data_used: 200704
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 22315008 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x55660fd70800
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:52.214296+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 147 ms_handle_reset con 0x55660fd70800 session 0x5566128a0f00
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 22519808 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x55660fd71400
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 147 ms_handle_reset con 0x55660fd71400 session 0x5566128a14a0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x5566129a4c00
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:53.214437+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78143488 unmapped: 22478848 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fc258000/0x0/0x4ffc00000, data 0x8e593f/0x9c4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _renew_subs
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 147 handle_osd_map epochs [148,148], i have 147, src has [1,148]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 148 ms_handle_reset con 0x5566129a4c00 session 0x5566128a1680
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:54.214550+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 22470656 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:55.214809+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x5566129a5000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 22470656 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _renew_subs
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 148 handle_osd_map epochs [149,149], i have 148, src has [1,149]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:56.214914+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 149 ms_handle_reset con 0x5566129a5000 session 0x5566128a1a40
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046958 data_alloc: 218103808 data_used: 204800
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 22437888 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:57.215075+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 22437888 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 149 handle_osd_map epochs [149,150], i have 149, src has [1,150]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:58.215204+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78086144 unmapped: 22536192 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:59.215428+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc24f000/0x0/0x4ffc00000, data 0x8eacb2/0x9cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 22503424 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:00.215595+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 150 handle_osd_map epochs [151,151], i have 150, src has [1,151]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.250770569s of 10.486794472s, submitted: 80
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 22503424 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:01.215737+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053066 data_alloc: 218103808 data_used: 208896
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 22503424 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fc24d000/0x0/0x4ffc00000, data 0x8ec74d/0x9d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:02.215897+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x55660fd70800
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 151 ms_handle_reset con 0x55660fd70800 session 0x55661251f2c0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x55660fd71400
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 151 ms_handle_reset con 0x55660fd71400 session 0x55661251f860
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x55661220ec00
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 151 ms_handle_reset con 0x55661220ec00 session 0x55661251fe00
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x5566129a4c00
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 151 ms_handle_reset con 0x5566129a4c00 session 0x5566128a0f00
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x5566127b5400
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 151 ms_handle_reset con 0x5566127b5400 session 0x5566128a1a40
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78077952 unmapped: 22544384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:03.216006+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x5566127b5400
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 151 ms_handle_reset con 0x5566127b5400 session 0x5566128234a0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x55660fd70800
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 151 ms_handle_reset con 0x55660fd70800 session 0x556612823c20
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78077952 unmapped: 22544384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x55660fd71400
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 151 ms_handle_reset con 0x55660fd71400 session 0x55660f8e6000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:04.216192+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x5566129a5800
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 151 ms_handle_reset con 0x5566129a5800 session 0x55660f94c780
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x5566129a5000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 22528000 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:05.216424+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 22528000 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:06.216567+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1055802 data_alloc: 218103808 data_used: 212992
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 22528000 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:07.216717+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 22528000 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fc24d000/0x0/0x4ffc00000, data 0x8ec770/0x9d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:08.216886+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x5566129a5c00
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 151 ms_handle_reset con 0x5566129a5c00 session 0x55660ff910e0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x5566129a5c00
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _renew_subs
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 151 handle_osd_map epochs [152,152], i have 151, src has [1,152]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 152 ms_handle_reset con 0x5566129a5c00 session 0x55661251ef00
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x5566129a5800
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x5566129a4c00
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 152 ms_handle_reset con 0x5566129a4c00 session 0x55660fec3e00
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 152 ms_handle_reset con 0x5566129a5800 session 0x556612823c20
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x5566129a4000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 152 ms_handle_reset con 0x5566129a4000 session 0x55661251f860
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x5566127b5400
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78217216 unmapped: 22405120 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:09.217036+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 152 handle_osd_map epochs [152,153], i have 152, src has [1,153]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _renew_subs
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 152 handle_osd_map epochs [153,153], i have 153, src has [1,153]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 153 ms_handle_reset con 0x5566127b5400 session 0x556610a523c0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x5566127b5400
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78225408 unmapped: 22396928 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:10.217156+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _renew_subs
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 153 handle_osd_map epochs [154,154], i have 153, src has [1,154]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 154 ms_handle_reset con 0x5566127b5400 session 0x556610a532c0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 154 heartbeat osd_stat(store_statfs(0x4fc23e000/0x0/0x4ffc00000, data 0x8f1aeb/0x9dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 22372352 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:11.217285+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1072695 data_alloc: 218103808 data_used: 225280
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x5566129a4000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 154 ms_handle_reset con 0x5566129a4000 session 0x556610a53680
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x5566129a4c00
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 154 ms_handle_reset con 0x5566129a4c00 session 0x556610a53c20
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x5566129a5800
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 154 ms_handle_reset con 0x5566129a5800 session 0x556610a53e00
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 22364160 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:12.217455+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x5566129a5c00
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.109927177s of 12.313597679s, submitted: 80
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 154 ms_handle_reset con 0x5566129a5c00 session 0x55661279a1e0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 22364160 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:13.217616+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x5566127b5400
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 154 handle_osd_map epochs [154,155], i have 154, src has [1,155]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 22331392 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 155 ms_handle_reset con 0x5566127b5400 session 0x55661279a5a0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:14.217772+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 22315008 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 155 handle_osd_map epochs [155,156], i have 155, src has [1,156]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:15.217998+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 156 ms_handle_reset con 0x5566129a5000 session 0x556611d4cd20
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x5566129a4000
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78340096 unmapped: 22282240 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _renew_subs
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 156 handle_osd_map epochs [157,157], i have 156, src has [1,157]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:16.218237+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 157 ms_handle_reset con 0x5566129a4000 session 0x556610a554a0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077787 data_alloc: 218103808 data_used: 241664
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 157 heartbeat osd_stat(store_statfs(0x4fc23c000/0x0/0x4ffc00000, data 0x8f50c3/0x9e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78340096 unmapped: 22282240 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:17.218650+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78340096 unmapped: 22282240 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:18.219040+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78340096 unmapped: 22282240 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:19.219300+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78340096 unmapped: 22282240 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:20.219420+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 157 handle_osd_map epochs [158,158], i have 157, src has [1,158]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x5566129a4c00
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 158 ms_handle_reset con 0x5566129a4c00 session 0x55660fec3e00
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78340096 unmapped: 22282240 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:21.219656+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fc238000/0x0/0x4ffc00000, data 0x8f86b1/0x9e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1079041 data_alloc: 218103808 data_used: 237568
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fc238000/0x0/0x4ffc00000, data 0x8f86b1/0x9e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78372864 unmapped: 22249472 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:22.219834+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x5566129a5800
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 158 ms_handle_reset con 0x5566129a5800 session 0x55661251fe00
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x5566129a5800
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78381056 unmapped: 22241280 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:23.220154+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _renew_subs
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 158 handle_osd_map epochs [159,159], i have 158, src has [1,159]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.601550102s of 10.930751801s, submitted: 156
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 159 ms_handle_reset con 0x5566129a5800 session 0x5566128234a0
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78381056 unmapped: 22241280 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:24.220350+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78381056 unmapped: 22241280 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:25.220766+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78381056 unmapped: 22241280 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:26.220919+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1082413 data_alloc: 218103808 data_used: 233472
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc237000/0x0/0x4ffc00000, data 0x8fa240/0x9e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78381056 unmapped: 22241280 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:27.221165+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78381056 unmapped: 22241280 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:28.221345+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78381056 unmapped: 22241280 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:29.221462+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc237000/0x0/0x4ffc00000, data 0x8fa240/0x9e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 159 handle_osd_map epochs [159,160], i have 159, src has [1,160]
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:30.221605+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:31.221746+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085387 data_alloc: 218103808 data_used: 233472
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc234000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:32.222045+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc234000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:33.222412+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:34.222617+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:35.222821+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:36.222955+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085387 data_alloc: 218103808 data_used: 233472
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:37.223223+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:38.223438+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc234000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:39.223648+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:40.223858+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:41.224009+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085387 data_alloc: 218103808 data_used: 233472
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:42.224165+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:43.224319+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:44.224484+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc234000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:45.224654+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc234000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc234000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:46.224791+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085387 data_alloc: 218103808 data_used: 233472
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:47.224966+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:48.225135+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:49.225311+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:50.225478+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:51.225626+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085387 data_alloc: 218103808 data_used: 233472
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc234000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:52.225784+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:53.225907+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:54.226065+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:55.226257+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:56.226408+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc234000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085387 data_alloc: 218103808 data_used: 233472
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:57.226595+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:58.226753+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:59.226971+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc234000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:00.227151+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:01.227254+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085387 data_alloc: 218103808 data_used: 233472
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:02.227416+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:03.227717+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc234000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:04.227928+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:05.228168+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:06.228534+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085387 data_alloc: 218103808 data_used: 233472
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:07.228768+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc234000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:08.228896+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:09.229072+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:10.229235+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:11.229522+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085387 data_alloc: 218103808 data_used: 233472
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:12.229746+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:13.229955+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc234000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:14.230093+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:15.230279+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:16.230476+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085387 data_alloc: 218103808 data_used: 233472
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:17.230623+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc234000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:18.230861+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:19.231065+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc234000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:20.232532+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:21.232697+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085387 data_alloc: 218103808 data_used: 233472
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc234000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:22.233183+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:23.233741+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc234000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:24.234721+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:25.234899+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:26.235036+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085387 data_alloc: 218103808 data_used: 233472
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:27.235179+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:28.256005+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc234000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:29.256307+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:30.257157+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc234000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:31.257291+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085387 data_alloc: 218103808 data_used: 233472
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:32.257396+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:33.257521+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:34.257653+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc234000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:35.257795+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:36.257897+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc234000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085387 data_alloc: 218103808 data_used: 233472
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc234000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:37.258053+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc234000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:38.258172+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:39.258571+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:40.258701+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:41.258878+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:55:16 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:55:16 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085387 data_alloc: 218103808 data_used: 233472
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:42.259696+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79437824 unmapped: 21184512 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc234000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:43.259807+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 21045248 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: do_command 'config diff' '{prefix=config diff}'
Oct 11 04:55:16 compute-0 ceph-osd[87458]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 11 04:55:16 compute-0 ceph-osd[87458]: do_command 'config show' '{prefix=config show}'
Oct 11 04:55:16 compute-0 ceph-osd[87458]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:44.259923+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: do_command 'counter dump' '{prefix=counter dump}'
Oct 11 04:55:16 compute-0 ceph-osd[87458]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 11 04:55:16 compute-0 ceph-osd[87458]: do_command 'counter schema' '{prefix=counter schema}'
Oct 11 04:55:16 compute-0 ceph-osd[87458]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79937536 unmapped: 20684800 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:45.260065+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80109568 unmapped: 20512768 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 04:55:16 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:46.260177+0000)
Oct 11 04:55:16 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80191488 unmapped: 20430848 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 04:55:16 compute-0 ceph-osd[87458]: do_command 'log dump' '{prefix=log dump}'
Oct 11 04:55:17 compute-0 ceph-mon[74243]: from='client.14885 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 04:55:17 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/2223956878' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct 11 04:55:17 compute-0 ceph-mon[74243]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 11 04:55:17 compute-0 ceph-mon[74243]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 11 04:55:17 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/3989414188' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 11 04:55:17 compute-0 ceph-mon[74243]: pgmap v1031: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:55:17 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 11 04:55:17 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.14897 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:55:17 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0) v1
Oct 11 04:55:17 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/665498601' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct 11 04:55:17 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df"} v 0) v1
Oct 11 04:55:17 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1874455923' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct 11 04:55:18 compute-0 ceph-mon[74243]: from='client.14897 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:55:18 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/665498601' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct 11 04:55:18 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/1874455923' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct 11 04:55:18 compute-0 nova_compute[259400]: 2025-10-11 04:55:18.196 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:55:18 compute-0 nova_compute[259400]: 2025-10-11 04:55:18.196 2 DEBUG nova.compute.manager [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 04:55:18 compute-0 nova_compute[259400]: 2025-10-11 04:55:18.197 2 DEBUG nova.compute.manager [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 11 04:55:18 compute-0 nova_compute[259400]: 2025-10-11 04:55:18.217 2 DEBUG nova.compute.manager [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 11 04:55:18 compute-0 nova_compute[259400]: 2025-10-11 04:55:18.218 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:55:18 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs dump"} v 0) v1
Oct 11 04:55:18 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3128384825' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct 11 04:55:18 compute-0 sudo[273114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:55:18 compute-0 sudo[273114]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:55:18 compute-0 sudo[273114]: pam_unix(sudo:session): session closed for user root
Oct 11 04:55:18 compute-0 sudo[273151]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:55:18 compute-0 sudo[273151]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:55:18 compute-0 sudo[273151]: pam_unix(sudo:session): session closed for user root
Oct 11 04:55:18 compute-0 sudo[273200]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:55:18 compute-0 sudo[273200]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:55:18 compute-0 sudo[273200]: pam_unix(sudo:session): session closed for user root
Oct 11 04:55:18 compute-0 sudo[273227]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 04:55:18 compute-0 sudo[273227]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:55:18 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs ls"} v 0) v1
Oct 11 04:55:18 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2149585436' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Oct 11 04:55:18 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1032: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:55:19 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/3128384825' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct 11 04:55:19 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/2149585436' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Oct 11 04:55:19 compute-0 ceph-mon[74243]: pgmap v1032: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:55:19 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.14907 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:55:19 compute-0 sudo[273227]: pam_unix(sudo:session): session closed for user root
Oct 11 04:55:19 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:55:19 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:55:19 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:55:19 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:55:19 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:55:19 compute-0 nova_compute[259400]: 2025-10-11 04:55:19.196 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:55:19 compute-0 nova_compute[259400]: 2025-10-11 04:55:19.196 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:55:19 compute-0 nova_compute[259400]: 2025-10-11 04:55:19.196 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:55:19 compute-0 nova_compute[259400]: 2025-10-11 04:55:19.196 2 DEBUG nova.compute.manager [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 04:55:19 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:55:19 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev fa50a995-eae6-4258-be88-d63e5c067359 does not exist
Oct 11 04:55:19 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:55:19 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev 641388cb-8dbc-4f73-bef5-c0e6be238402 does not exist
Oct 11 04:55:19 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:55:19 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev c36ee1f6-149f-4356-82c3-40f39080b397 does not exist
Oct 11 04:55:19 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:55:19 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:55:19 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:55:19 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:55:19 compute-0 sudo[273323]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:55:19 compute-0 sudo[273323]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:55:19 compute-0 sudo[273323]: pam_unix(sudo:session): session closed for user root
Oct 11 04:55:19 compute-0 sudo[273352]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:55:19 compute-0 sudo[273352]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:55:19 compute-0 sudo[273352]: pam_unix(sudo:session): session closed for user root
Oct 11 04:55:19 compute-0 systemd[1]: Starting Hostname Service...
Oct 11 04:55:19 compute-0 sudo[273406]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:55:19 compute-0 sudo[273406]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:55:19 compute-0 sudo[273406]: pam_unix(sudo:session): session closed for user root
Oct 11 04:55:19 compute-0 sudo[273442]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 04:55:19 compute-0 sudo[273442]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:55:19 compute-0 systemd[1]: Started Hostname Service.
Oct 11 04:55:19 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds stat"} v 0) v1
Oct 11 04:55:19 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/741190760' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Oct 11 04:55:19 compute-0 podman[273576]: 2025-10-11 04:55:19.889024955 +0000 UTC m=+0.069109603 container create e4bc36f082e641df8125f266f57c781c6d3f9aa5dfdfe89694262cd9bfbbf5a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_morse, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:55:19 compute-0 systemd[1]: Started libpod-conmon-e4bc36f082e641df8125f266f57c781c6d3f9aa5dfdfe89694262cd9bfbbf5a5.scope.
Oct 11 04:55:19 compute-0 podman[273576]: 2025-10-11 04:55:19.85372479 +0000 UTC m=+0.033809458 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:55:19 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:55:19 compute-0 podman[273576]: 2025-10-11 04:55:19.969683878 +0000 UTC m=+0.149768546 container init e4bc36f082e641df8125f266f57c781c6d3f9aa5dfdfe89694262cd9bfbbf5a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_morse, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 11 04:55:19 compute-0 podman[273576]: 2025-10-11 04:55:19.976626982 +0000 UTC m=+0.156711640 container start e4bc36f082e641df8125f266f57c781c6d3f9aa5dfdfe89694262cd9bfbbf5a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_morse, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 04:55:19 compute-0 dazzling_morse[273593]: 167 167
Oct 11 04:55:19 compute-0 systemd[1]: libpod-e4bc36f082e641df8125f266f57c781c6d3f9aa5dfdfe89694262cd9bfbbf5a5.scope: Deactivated successfully.
Oct 11 04:55:19 compute-0 podman[273576]: 2025-10-11 04:55:19.982771796 +0000 UTC m=+0.162856474 container attach e4bc36f082e641df8125f266f57c781c6d3f9aa5dfdfe89694262cd9bfbbf5a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_morse, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:55:19 compute-0 podman[273576]: 2025-10-11 04:55:19.983529915 +0000 UTC m=+0.163614563 container died e4bc36f082e641df8125f266f57c781c6d3f9aa5dfdfe89694262cd9bfbbf5a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_morse, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 04:55:20 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump"} v 0) v1
Oct 11 04:55:20 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/603983647' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Oct 11 04:55:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-4dac710c8f5efcb759f6aad51d28bf7c7db2ce5bb2b0bd193cf508448dbab556-merged.mount: Deactivated successfully.
Oct 11 04:55:20 compute-0 podman[273576]: 2025-10-11 04:55:20.045640552 +0000 UTC m=+0.225725220 container remove e4bc36f082e641df8125f266f57c781c6d3f9aa5dfdfe89694262cd9bfbbf5a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_morse, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 11 04:55:20 compute-0 systemd[1]: libpod-conmon-e4bc36f082e641df8125f266f57c781c6d3f9aa5dfdfe89694262cd9bfbbf5a5.scope: Deactivated successfully.
Oct 11 04:55:20 compute-0 ceph-mon[74243]: from='client.14907 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:55:20 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:55:20 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:55:20 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:55:20 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:55:20 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:55:20 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:55:20 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/741190760' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Oct 11 04:55:20 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/603983647' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Oct 11 04:55:20 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:55:20 compute-0 podman[273641]: 2025-10-11 04:55:20.214094416 +0000 UTC m=+0.037715797 container create fd05d7668356f8ce9fb804d0564cead2a9601337e8a30c1d0ba8b82db57bb7b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_poitras, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 11 04:55:20 compute-0 systemd[1]: Started libpod-conmon-fd05d7668356f8ce9fb804d0564cead2a9601337e8a30c1d0ba8b82db57bb7b6.scope.
Oct 11 04:55:20 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:55:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa1c4fc2d76be50bc6b2fb0af5d1b26d34779a2491c3207a20062214d0d8cc92/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:55:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa1c4fc2d76be50bc6b2fb0af5d1b26d34779a2491c3207a20062214d0d8cc92/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:55:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa1c4fc2d76be50bc6b2fb0af5d1b26d34779a2491c3207a20062214d0d8cc92/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:55:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa1c4fc2d76be50bc6b2fb0af5d1b26d34779a2491c3207a20062214d0d8cc92/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:55:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa1c4fc2d76be50bc6b2fb0af5d1b26d34779a2491c3207a20062214d0d8cc92/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:55:20 compute-0 podman[273641]: 2025-10-11 04:55:20.285017344 +0000 UTC m=+0.108638725 container init fd05d7668356f8ce9fb804d0564cead2a9601337e8a30c1d0ba8b82db57bb7b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_poitras, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Oct 11 04:55:20 compute-0 podman[273641]: 2025-10-11 04:55:20.19631897 +0000 UTC m=+0.019940371 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:55:20 compute-0 podman[273641]: 2025-10-11 04:55:20.295349343 +0000 UTC m=+0.118970704 container start fd05d7668356f8ce9fb804d0564cead2a9601337e8a30c1d0ba8b82db57bb7b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_poitras, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:55:20 compute-0 podman[273641]: 2025-10-11 04:55:20.298653736 +0000 UTC m=+0.122275137 container attach fd05d7668356f8ce9fb804d0564cead2a9601337e8a30c1d0ba8b82db57bb7b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_poitras, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 11 04:55:20 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.14913 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:55:20 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd blocklist ls"} v 0) v1
Oct 11 04:55:20 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3115638197' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Oct 11 04:55:20 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1033: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:55:21 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.14917 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:55:21 compute-0 ceph-mon[74243]: from='client.14913 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:55:21 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/3115638197' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Oct 11 04:55:21 compute-0 ceph-mon[74243]: pgmap v1033: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:55:21 compute-0 nova_compute[259400]: 2025-10-11 04:55:21.196 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:55:21 compute-0 nova_compute[259400]: 2025-10-11 04:55:21.197 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:55:21 compute-0 nova_compute[259400]: 2025-10-11 04:55:21.236 2 DEBUG oslo_concurrency.lockutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:55:21 compute-0 nova_compute[259400]: 2025-10-11 04:55:21.236 2 DEBUG oslo_concurrency.lockutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:55:21 compute-0 nova_compute[259400]: 2025-10-11 04:55:21.237 2 DEBUG oslo_concurrency.lockutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:55:21 compute-0 nova_compute[259400]: 2025-10-11 04:55:21.237 2 DEBUG nova.compute.resource_tracker [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 04:55:21 compute-0 nova_compute[259400]: 2025-10-11 04:55:21.238 2 DEBUG oslo_concurrency.processutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:55:21 compute-0 laughing_poitras[273658]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:55:21 compute-0 laughing_poitras[273658]: --> relative data size: 1.0
Oct 11 04:55:21 compute-0 laughing_poitras[273658]: --> All data devices are unavailable
Oct 11 04:55:21 compute-0 systemd[1]: libpod-fd05d7668356f8ce9fb804d0564cead2a9601337e8a30c1d0ba8b82db57bb7b6.scope: Deactivated successfully.
Oct 11 04:55:21 compute-0 podman[273641]: 2025-10-11 04:55:21.355656276 +0000 UTC m=+1.179277657 container died fd05d7668356f8ce9fb804d0564cead2a9601337e8a30c1d0ba8b82db57bb7b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_poitras, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:55:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-aa1c4fc2d76be50bc6b2fb0af5d1b26d34779a2491c3207a20062214d0d8cc92-merged.mount: Deactivated successfully.
Oct 11 04:55:21 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.14919 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:55:21 compute-0 podman[273641]: 2025-10-11 04:55:21.425173159 +0000 UTC m=+1.248794540 container remove fd05d7668356f8ce9fb804d0564cead2a9601337e8a30c1d0ba8b82db57bb7b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_poitras, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 11 04:55:21 compute-0 systemd[1]: libpod-conmon-fd05d7668356f8ce9fb804d0564cead2a9601337e8a30c1d0ba8b82db57bb7b6.scope: Deactivated successfully.
Oct 11 04:55:21 compute-0 sudo[273442]: pam_unix(sudo:session): session closed for user root
Oct 11 04:55:21 compute-0 sudo[273809]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:55:21 compute-0 sudo[273809]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:55:21 compute-0 sudo[273809]: pam_unix(sudo:session): session closed for user root
Oct 11 04:55:21 compute-0 sudo[273836]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:55:21 compute-0 sudo[273836]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:55:21 compute-0 sudo[273836]: pam_unix(sudo:session): session closed for user root
Oct 11 04:55:21 compute-0 sudo[273884]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:55:21 compute-0 sudo[273884]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:55:21 compute-0 sudo[273884]: pam_unix(sudo:session): session closed for user root
Oct 11 04:55:21 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:55:21 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1614876205' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:55:21 compute-0 nova_compute[259400]: 2025-10-11 04:55:21.679 2 DEBUG oslo_concurrency.processutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:55:21 compute-0 sudo[273914]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -- lvm list --format json
Oct 11 04:55:21 compute-0 sudo[273914]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:55:21 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd dump"} v 0) v1
Oct 11 04:55:21 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1791150294' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Oct 11 04:55:21 compute-0 nova_compute[259400]: 2025-10-11 04:55:21.820 2 WARNING nova.virt.libvirt.driver [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 04:55:21 compute-0 nova_compute[259400]: 2025-10-11 04:55:21.821 2 DEBUG nova.compute.resource_tracker [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4896MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 04:55:21 compute-0 nova_compute[259400]: 2025-10-11 04:55:21.821 2 DEBUG oslo_concurrency.lockutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:55:21 compute-0 nova_compute[259400]: 2025-10-11 04:55:21.821 2 DEBUG oslo_concurrency.lockutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:55:21 compute-0 nova_compute[259400]: 2025-10-11 04:55:21.908 2 DEBUG nova.compute.resource_tracker [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 04:55:21 compute-0 nova_compute[259400]: 2025-10-11 04:55:21.908 2 DEBUG nova.compute.resource_tracker [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 04:55:21 compute-0 nova_compute[259400]: 2025-10-11 04:55:21.928 2 DEBUG oslo_concurrency.processutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:55:22 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd numa-status"} v 0) v1
Oct 11 04:55:22 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1755294924' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Oct 11 04:55:22 compute-0 podman[274026]: 2025-10-11 04:55:22.158569896 +0000 UTC m=+0.112376318 container create 8d4b180bae4f3354d872c3cce6225e70e98977d8cadb1ea1e863c36b91d4a22d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_cartwright, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 11 04:55:22 compute-0 ceph-mon[74243]: from='client.14917 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:55:22 compute-0 ceph-mon[74243]: from='client.14919 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:55:22 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/1614876205' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:55:22 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/1791150294' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Oct 11 04:55:22 compute-0 podman[274026]: 2025-10-11 04:55:22.071128394 +0000 UTC m=+0.024934806 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:55:22 compute-0 systemd[1]: Started libpod-conmon-8d4b180bae4f3354d872c3cce6225e70e98977d8cadb1ea1e863c36b91d4a22d.scope.
Oct 11 04:55:22 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:55:22 compute-0 podman[274026]: 2025-10-11 04:55:22.261382903 +0000 UTC m=+0.215189325 container init 8d4b180bae4f3354d872c3cce6225e70e98977d8cadb1ea1e863c36b91d4a22d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_cartwright, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True)
Oct 11 04:55:22 compute-0 podman[274026]: 2025-10-11 04:55:22.271104427 +0000 UTC m=+0.224910829 container start 8d4b180bae4f3354d872c3cce6225e70e98977d8cadb1ea1e863c36b91d4a22d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_cartwright, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:55:22 compute-0 podman[274026]: 2025-10-11 04:55:22.274383129 +0000 UTC m=+0.228189541 container attach 8d4b180bae4f3354d872c3cce6225e70e98977d8cadb1ea1e863c36b91d4a22d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_cartwright, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Oct 11 04:55:22 compute-0 peaceful_cartwright[274073]: 167 167
Oct 11 04:55:22 compute-0 systemd[1]: libpod-8d4b180bae4f3354d872c3cce6225e70e98977d8cadb1ea1e863c36b91d4a22d.scope: Deactivated successfully.
Oct 11 04:55:22 compute-0 podman[274026]: 2025-10-11 04:55:22.277417085 +0000 UTC m=+0.231223487 container died 8d4b180bae4f3354d872c3cce6225e70e98977d8cadb1ea1e863c36b91d4a22d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 11 04:55:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-8dd3f298154179bd5d628a37eaedd3acae17b12db88e5a671f15381e05267a85-merged.mount: Deactivated successfully.
Oct 11 04:55:22 compute-0 podman[274026]: 2025-10-11 04:55:22.315067719 +0000 UTC m=+0.268874121 container remove 8d4b180bae4f3354d872c3cce6225e70e98977d8cadb1ea1e863c36b91d4a22d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_cartwright, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:55:22 compute-0 systemd[1]: libpod-conmon-8d4b180bae4f3354d872c3cce6225e70e98977d8cadb1ea1e863c36b91d4a22d.scope: Deactivated successfully.
Oct 11 04:55:22 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:55:22 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/124629524' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:55:22 compute-0 nova_compute[259400]: 2025-10-11 04:55:22.369 2 DEBUG oslo_concurrency.processutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:55:22 compute-0 nova_compute[259400]: 2025-10-11 04:55:22.375 2 DEBUG nova.compute.provider_tree [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Inventory has not changed in ProviderTree for provider: 1f05a244-23b6-4149-9b5a-a525e5860d18 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:55:22 compute-0 nova_compute[259400]: 2025-10-11 04:55:22.402 2 DEBUG nova.scheduler.client.report [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Inventory has not changed for provider 1f05a244-23b6-4149-9b5a-a525e5860d18 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:55:22 compute-0 nova_compute[259400]: 2025-10-11 04:55:22.403 2 DEBUG nova.compute.resource_tracker [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 04:55:22 compute-0 nova_compute[259400]: 2025-10-11 04:55:22.403 2 DEBUG oslo_concurrency.lockutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.582s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:55:22 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.14929 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:55:22 compute-0 podman[274126]: 2025-10-11 04:55:22.511200656 +0000 UTC m=+0.077391751 container create 1aa2a11d88f5d9536af3874519cf777ec034f914076b3c2726a7c594c0ba51df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_edison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:55:22 compute-0 podman[274126]: 2025-10-11 04:55:22.468371092 +0000 UTC m=+0.034562187 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:55:22 compute-0 systemd[1]: Started libpod-conmon-1aa2a11d88f5d9536af3874519cf777ec034f914076b3c2726a7c594c0ba51df.scope.
Oct 11 04:55:22 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:55:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32050c1be8eda515d4e132cfafc1a4e7d8231dd32086b266ca6f3864ff49d5df/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:55:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32050c1be8eda515d4e132cfafc1a4e7d8231dd32086b266ca6f3864ff49d5df/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:55:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32050c1be8eda515d4e132cfafc1a4e7d8231dd32086b266ca6f3864ff49d5df/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:55:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32050c1be8eda515d4e132cfafc1a4e7d8231dd32086b266ca6f3864ff49d5df/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:55:22 compute-0 podman[274126]: 2025-10-11 04:55:22.637216355 +0000 UTC m=+0.203407490 container init 1aa2a11d88f5d9536af3874519cf777ec034f914076b3c2726a7c594c0ba51df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_edison, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 11 04:55:22 compute-0 podman[274126]: 2025-10-11 04:55:22.646701373 +0000 UTC m=+0.212892468 container start 1aa2a11d88f5d9536af3874519cf777ec034f914076b3c2726a7c594c0ba51df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_edison, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True)
Oct 11 04:55:22 compute-0 podman[274126]: 2025-10-11 04:55:22.652639802 +0000 UTC m=+0.218830927 container attach 1aa2a11d88f5d9536af3874519cf777ec034f914076b3c2726a7c594c0ba51df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_edison, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 11 04:55:22 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1034: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:55:22 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.14931 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:55:22 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:55:22 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:55:22 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:55:22 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 11 04:55:22 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:55:22 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 11 04:55:22 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:55:22 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:55:22 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:55:22 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 11 04:55:22 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:55:22 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 11 04:55:22 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:55:22 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:55:22 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:55:22 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:55:22 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:55:22 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:55:22 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:55:22 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:55:22 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:55:22 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 04:55:23 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/1755294924' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Oct 11 04:55:23 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/124629524' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:55:23 compute-0 ceph-mon[74243]: from='client.14929 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:55:23 compute-0 ceph-mon[74243]: pgmap v1034: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:55:23 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0) v1
Oct 11 04:55:23 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2300993597' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Oct 11 04:55:23 compute-0 elastic_edison[274151]: {
Oct 11 04:55:23 compute-0 elastic_edison[274151]:     "0": [
Oct 11 04:55:23 compute-0 elastic_edison[274151]:         {
Oct 11 04:55:23 compute-0 elastic_edison[274151]:             "devices": [
Oct 11 04:55:23 compute-0 elastic_edison[274151]:                 "/dev/loop3"
Oct 11 04:55:23 compute-0 elastic_edison[274151]:             ],
Oct 11 04:55:23 compute-0 elastic_edison[274151]:             "lv_name": "ceph_lv0",
Oct 11 04:55:23 compute-0 elastic_edison[274151]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:55:23 compute-0 elastic_edison[274151]:             "lv_size": "21470642176",
Oct 11 04:55:23 compute-0 elastic_edison[274151]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29ed28f5-c2da-4c6f-bb64-dc7391248f4a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:55:23 compute-0 elastic_edison[274151]:             "lv_uuid": "SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf",
Oct 11 04:55:23 compute-0 elastic_edison[274151]:             "name": "ceph_lv0",
Oct 11 04:55:23 compute-0 elastic_edison[274151]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:55:23 compute-0 elastic_edison[274151]:             "tags": {
Oct 11 04:55:23 compute-0 elastic_edison[274151]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:55:23 compute-0 elastic_edison[274151]:                 "ceph.block_uuid": "SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf",
Oct 11 04:55:23 compute-0 elastic_edison[274151]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:55:23 compute-0 elastic_edison[274151]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:55:23 compute-0 elastic_edison[274151]:                 "ceph.cluster_name": "ceph",
Oct 11 04:55:23 compute-0 elastic_edison[274151]:                 "ceph.crush_device_class": "",
Oct 11 04:55:23 compute-0 elastic_edison[274151]:                 "ceph.encrypted": "0",
Oct 11 04:55:23 compute-0 elastic_edison[274151]:                 "ceph.osd_fsid": "29ed28f5-c2da-4c6f-bb64-dc7391248f4a",
Oct 11 04:55:23 compute-0 elastic_edison[274151]:                 "ceph.osd_id": "0",
Oct 11 04:55:23 compute-0 elastic_edison[274151]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:55:23 compute-0 elastic_edison[274151]:                 "ceph.type": "block",
Oct 11 04:55:23 compute-0 elastic_edison[274151]:                 "ceph.vdo": "0"
Oct 11 04:55:23 compute-0 elastic_edison[274151]:             },
Oct 11 04:55:23 compute-0 elastic_edison[274151]:             "type": "block",
Oct 11 04:55:23 compute-0 elastic_edison[274151]:             "vg_name": "ceph_vg0"
Oct 11 04:55:23 compute-0 elastic_edison[274151]:         }
Oct 11 04:55:23 compute-0 elastic_edison[274151]:     ],
Oct 11 04:55:23 compute-0 elastic_edison[274151]:     "1": [
Oct 11 04:55:23 compute-0 elastic_edison[274151]:         {
Oct 11 04:55:23 compute-0 elastic_edison[274151]:             "devices": [
Oct 11 04:55:23 compute-0 elastic_edison[274151]:                 "/dev/loop4"
Oct 11 04:55:23 compute-0 elastic_edison[274151]:             ],
Oct 11 04:55:23 compute-0 elastic_edison[274151]:             "lv_name": "ceph_lv1",
Oct 11 04:55:23 compute-0 elastic_edison[274151]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:55:23 compute-0 elastic_edison[274151]:             "lv_size": "21470642176",
Oct 11 04:55:23 compute-0 elastic_edison[274151]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=235849fc-4683-43e5-9b6a-a0d6f8d1cee8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:55:23 compute-0 elastic_edison[274151]:             "lv_uuid": "N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol",
Oct 11 04:55:23 compute-0 elastic_edison[274151]:             "name": "ceph_lv1",
Oct 11 04:55:23 compute-0 elastic_edison[274151]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:55:23 compute-0 elastic_edison[274151]:             "tags": {
Oct 11 04:55:23 compute-0 elastic_edison[274151]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:55:23 compute-0 elastic_edison[274151]:                 "ceph.block_uuid": "N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol",
Oct 11 04:55:23 compute-0 elastic_edison[274151]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:55:23 compute-0 elastic_edison[274151]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:55:23 compute-0 elastic_edison[274151]:                 "ceph.cluster_name": "ceph",
Oct 11 04:55:23 compute-0 elastic_edison[274151]:                 "ceph.crush_device_class": "",
Oct 11 04:55:23 compute-0 elastic_edison[274151]:                 "ceph.encrypted": "0",
Oct 11 04:55:23 compute-0 elastic_edison[274151]:                 "ceph.osd_fsid": "235849fc-4683-43e5-9b6a-a0d6f8d1cee8",
Oct 11 04:55:23 compute-0 elastic_edison[274151]:                 "ceph.osd_id": "1",
Oct 11 04:55:23 compute-0 elastic_edison[274151]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:55:23 compute-0 elastic_edison[274151]:                 "ceph.type": "block",
Oct 11 04:55:23 compute-0 elastic_edison[274151]:                 "ceph.vdo": "0"
Oct 11 04:55:23 compute-0 elastic_edison[274151]:             },
Oct 11 04:55:23 compute-0 elastic_edison[274151]:             "type": "block",
Oct 11 04:55:23 compute-0 elastic_edison[274151]:             "vg_name": "ceph_vg1"
Oct 11 04:55:23 compute-0 elastic_edison[274151]:         }
Oct 11 04:55:23 compute-0 elastic_edison[274151]:     ],
Oct 11 04:55:23 compute-0 elastic_edison[274151]:     "2": [
Oct 11 04:55:23 compute-0 elastic_edison[274151]:         {
Oct 11 04:55:23 compute-0 elastic_edison[274151]:             "devices": [
Oct 11 04:55:23 compute-0 elastic_edison[274151]:                 "/dev/loop5"
Oct 11 04:55:23 compute-0 elastic_edison[274151]:             ],
Oct 11 04:55:23 compute-0 elastic_edison[274151]:             "lv_name": "ceph_lv2",
Oct 11 04:55:23 compute-0 elastic_edison[274151]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:55:23 compute-0 elastic_edison[274151]:             "lv_size": "21470642176",
Oct 11 04:55:23 compute-0 elastic_edison[274151]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=30486846-2cc7-4787-8e36-0fb42ef328c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:55:23 compute-0 elastic_edison[274151]:             "lv_uuid": "HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a",
Oct 11 04:55:23 compute-0 elastic_edison[274151]:             "name": "ceph_lv2",
Oct 11 04:55:23 compute-0 elastic_edison[274151]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:55:23 compute-0 elastic_edison[274151]:             "tags": {
Oct 11 04:55:23 compute-0 elastic_edison[274151]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:55:23 compute-0 elastic_edison[274151]:                 "ceph.block_uuid": "HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a",
Oct 11 04:55:23 compute-0 elastic_edison[274151]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:55:23 compute-0 elastic_edison[274151]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:55:23 compute-0 elastic_edison[274151]:                 "ceph.cluster_name": "ceph",
Oct 11 04:55:23 compute-0 elastic_edison[274151]:                 "ceph.crush_device_class": "",
Oct 11 04:55:23 compute-0 elastic_edison[274151]:                 "ceph.encrypted": "0",
Oct 11 04:55:23 compute-0 elastic_edison[274151]:                 "ceph.osd_fsid": "30486846-2cc7-4787-8e36-0fb42ef328c5",
Oct 11 04:55:23 compute-0 elastic_edison[274151]:                 "ceph.osd_id": "2",
Oct 11 04:55:23 compute-0 elastic_edison[274151]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:55:23 compute-0 elastic_edison[274151]:                 "ceph.type": "block",
Oct 11 04:55:23 compute-0 elastic_edison[274151]:                 "ceph.vdo": "0"
Oct 11 04:55:23 compute-0 elastic_edison[274151]:             },
Oct 11 04:55:23 compute-0 elastic_edison[274151]:             "type": "block",
Oct 11 04:55:23 compute-0 elastic_edison[274151]:             "vg_name": "ceph_vg2"
Oct 11 04:55:23 compute-0 elastic_edison[274151]:         }
Oct 11 04:55:23 compute-0 elastic_edison[274151]:     ]
Oct 11 04:55:23 compute-0 elastic_edison[274151]: }
Oct 11 04:55:23 compute-0 systemd[1]: libpod-1aa2a11d88f5d9536af3874519cf777ec034f914076b3c2726a7c594c0ba51df.scope: Deactivated successfully.
Oct 11 04:55:23 compute-0 podman[274126]: 2025-10-11 04:55:23.362905319 +0000 UTC m=+0.929096374 container died 1aa2a11d88f5d9536af3874519cf777ec034f914076b3c2726a7c594c0ba51df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_edison, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:55:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-32050c1be8eda515d4e132cfafc1a4e7d8231dd32086b266ca6f3864ff49d5df-merged.mount: Deactivated successfully.
Oct 11 04:55:23 compute-0 nova_compute[259400]: 2025-10-11 04:55:23.399 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:55:23 compute-0 nova_compute[259400]: 2025-10-11 04:55:23.400 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:55:23 compute-0 podman[274126]: 2025-10-11 04:55:23.413874407 +0000 UTC m=+0.980065462 container remove 1aa2a11d88f5d9536af3874519cf777ec034f914076b3c2726a7c594c0ba51df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_edison, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 11 04:55:23 compute-0 systemd[1]: libpod-conmon-1aa2a11d88f5d9536af3874519cf777ec034f914076b3c2726a7c594c0ba51df.scope: Deactivated successfully.
Oct 11 04:55:23 compute-0 sudo[273914]: pam_unix(sudo:session): session closed for user root
Oct 11 04:55:23 compute-0 sudo[274355]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:55:23 compute-0 sudo[274355]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:55:23 compute-0 sudo[274355]: pam_unix(sudo:session): session closed for user root
Oct 11 04:55:23 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd stat"} v 0) v1
Oct 11 04:55:23 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3387984450' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Oct 11 04:55:23 compute-0 sudo[274400]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:55:23 compute-0 sudo[274400]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:55:23 compute-0 sudo[274400]: pam_unix(sudo:session): session closed for user root
Oct 11 04:55:23 compute-0 sudo[274436]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:55:23 compute-0 sudo[274436]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:55:23 compute-0 sudo[274436]: pam_unix(sudo:session): session closed for user root
Oct 11 04:55:23 compute-0 sudo[274474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -- raw list --format json
Oct 11 04:55:23 compute-0 sudo[274474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:55:23 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.14937 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:55:24 compute-0 podman[274601]: 2025-10-11 04:55:24.105779634 +0000 UTC m=+0.042494196 container create 91d3ce45d6f2d5f61f6c475aa82936920ec764630987607c384514ad6930b406 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_chaum, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:55:24 compute-0 systemd[1]: Started libpod-conmon-91d3ce45d6f2d5f61f6c475aa82936920ec764630987607c384514ad6930b406.scope.
Oct 11 04:55:24 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:55:24 compute-0 podman[274601]: 2025-10-11 04:55:24.184275722 +0000 UTC m=+0.120990304 container init 91d3ce45d6f2d5f61f6c475aa82936920ec764630987607c384514ad6930b406 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_chaum, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 11 04:55:24 compute-0 podman[274601]: 2025-10-11 04:55:24.09007084 +0000 UTC m=+0.026785422 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:55:24 compute-0 podman[274601]: 2025-10-11 04:55:24.190346404 +0000 UTC m=+0.127060966 container start 91d3ce45d6f2d5f61f6c475aa82936920ec764630987607c384514ad6930b406 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_chaum, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:55:24 compute-0 ceph-mon[74243]: from='client.14931 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:55:24 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/2300993597' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Oct 11 04:55:24 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/3387984450' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Oct 11 04:55:24 compute-0 busy_chaum[274658]: 167 167
Oct 11 04:55:24 compute-0 systemd[1]: libpod-91d3ce45d6f2d5f61f6c475aa82936920ec764630987607c384514ad6930b406.scope: Deactivated successfully.
Oct 11 04:55:24 compute-0 podman[274601]: 2025-10-11 04:55:24.196995161 +0000 UTC m=+0.133709743 container attach 91d3ce45d6f2d5f61f6c475aa82936920ec764630987607c384514ad6930b406 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:55:24 compute-0 podman[274601]: 2025-10-11 04:55:24.197426302 +0000 UTC m=+0.134140874 container died 91d3ce45d6f2d5f61f6c475aa82936920ec764630987607c384514ad6930b406 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_chaum, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 11 04:55:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-0ce9ec40607f7beffa1206fb19b1e9f8ce68c8f3fbbcb37983cebfde8527ae35-merged.mount: Deactivated successfully.
Oct 11 04:55:24 compute-0 podman[274601]: 2025-10-11 04:55:24.233688781 +0000 UTC m=+0.170403343 container remove 91d3ce45d6f2d5f61f6c475aa82936920ec764630987607c384514ad6930b406 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_chaum, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 11 04:55:24 compute-0 systemd[1]: libpod-conmon-91d3ce45d6f2d5f61f6c475aa82936920ec764630987607c384514ad6930b406.scope: Deactivated successfully.
Oct 11 04:55:24 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.14939 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:55:24 compute-0 podman[274725]: 2025-10-11 04:55:24.400504623 +0000 UTC m=+0.038303121 container create a32af45672e74b1e8d5faea276af37ab625ecac3866ace320524bfe078526dc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kepler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 11 04:55:24 compute-0 systemd[1]: Started libpod-conmon-a32af45672e74b1e8d5faea276af37ab625ecac3866ace320524bfe078526dc9.scope.
Oct 11 04:55:24 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:55:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d27e5a5acf187802ecfadb71a7093ad4f7131eefd1132e5595efe31711be72e9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:55:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d27e5a5acf187802ecfadb71a7093ad4f7131eefd1132e5595efe31711be72e9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:55:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d27e5a5acf187802ecfadb71a7093ad4f7131eefd1132e5595efe31711be72e9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:55:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d27e5a5acf187802ecfadb71a7093ad4f7131eefd1132e5595efe31711be72e9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:55:24 compute-0 podman[274725]: 2025-10-11 04:55:24.473995836 +0000 UTC m=+0.111794384 container init a32af45672e74b1e8d5faea276af37ab625ecac3866ace320524bfe078526dc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:55:24 compute-0 podman[274725]: 2025-10-11 04:55:24.382143293 +0000 UTC m=+0.019941801 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:55:24 compute-0 podman[274725]: 2025-10-11 04:55:24.48092463 +0000 UTC m=+0.118723128 container start a32af45672e74b1e8d5faea276af37ab625ecac3866ace320524bfe078526dc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 11 04:55:24 compute-0 podman[274725]: 2025-10-11 04:55:24.484354526 +0000 UTC m=+0.122153024 container attach a32af45672e74b1e8d5faea276af37ab625ecac3866ace320524bfe078526dc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kepler, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 04:55:24 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Oct 11 04:55:24 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3336452920' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 11 04:55:24 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1035: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:55:25 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:55:25 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "time-sync-status"} v 0) v1
Oct 11 04:55:25 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2563917500' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Oct 11 04:55:25 compute-0 ceph-mon[74243]: from='client.14937 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:55:25 compute-0 ceph-mon[74243]: from='client.14939 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:55:25 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/3336452920' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 11 04:55:25 compute-0 ceph-mon[74243]: pgmap v1035: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:55:25 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/2563917500' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Oct 11 04:55:25 compute-0 gallant_kepler[274760]: {
Oct 11 04:55:25 compute-0 gallant_kepler[274760]:     "235849fc-4683-43e5-9b6a-a0d6f8d1cee8": {
Oct 11 04:55:25 compute-0 gallant_kepler[274760]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:55:25 compute-0 gallant_kepler[274760]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:55:25 compute-0 gallant_kepler[274760]:         "osd_id": 1,
Oct 11 04:55:25 compute-0 gallant_kepler[274760]:         "osd_uuid": "235849fc-4683-43e5-9b6a-a0d6f8d1cee8",
Oct 11 04:55:25 compute-0 gallant_kepler[274760]:         "type": "bluestore"
Oct 11 04:55:25 compute-0 gallant_kepler[274760]:     },
Oct 11 04:55:25 compute-0 gallant_kepler[274760]:     "29ed28f5-c2da-4c6f-bb64-dc7391248f4a": {
Oct 11 04:55:25 compute-0 gallant_kepler[274760]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:55:25 compute-0 gallant_kepler[274760]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:55:25 compute-0 gallant_kepler[274760]:         "osd_id": 0,
Oct 11 04:55:25 compute-0 gallant_kepler[274760]:         "osd_uuid": "29ed28f5-c2da-4c6f-bb64-dc7391248f4a",
Oct 11 04:55:25 compute-0 gallant_kepler[274760]:         "type": "bluestore"
Oct 11 04:55:25 compute-0 gallant_kepler[274760]:     },
Oct 11 04:55:25 compute-0 gallant_kepler[274760]:     "30486846-2cc7-4787-8e36-0fb42ef328c5": {
Oct 11 04:55:25 compute-0 gallant_kepler[274760]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:55:25 compute-0 gallant_kepler[274760]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:55:25 compute-0 gallant_kepler[274760]:         "osd_id": 2,
Oct 11 04:55:25 compute-0 gallant_kepler[274760]:         "osd_uuid": "30486846-2cc7-4787-8e36-0fb42ef328c5",
Oct 11 04:55:25 compute-0 gallant_kepler[274760]:         "type": "bluestore"
Oct 11 04:55:25 compute-0 gallant_kepler[274760]:     }
Oct 11 04:55:25 compute-0 gallant_kepler[274760]: }
Oct 11 04:55:25 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json-pretty"} v 0) v1
Oct 11 04:55:25 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1535938470' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Oct 11 04:55:25 compute-0 systemd[1]: libpod-a32af45672e74b1e8d5faea276af37ab625ecac3866ace320524bfe078526dc9.scope: Deactivated successfully.
Oct 11 04:55:25 compute-0 systemd[1]: libpod-a32af45672e74b1e8d5faea276af37ab625ecac3866ace320524bfe078526dc9.scope: Consumed 1.005s CPU time.
Oct 11 04:55:25 compute-0 podman[275154]: 2025-10-11 04:55:25.524702488 +0000 UTC m=+0.168539636 container died a32af45672e74b1e8d5faea276af37ab625ecac3866ace320524bfe078526dc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kepler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 11 04:55:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-d27e5a5acf187802ecfadb71a7093ad4f7131eefd1132e5595efe31711be72e9-merged.mount: Deactivated successfully.
Oct 11 04:55:25 compute-0 podman[275239]: 2025-10-11 04:55:25.605131515 +0000 UTC m=+0.082823918 container remove a32af45672e74b1e8d5faea276af37ab625ecac3866ace320524bfe078526dc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:55:25 compute-0 systemd[1]: libpod-conmon-a32af45672e74b1e8d5faea276af37ab625ecac3866ace320524bfe078526dc9.scope: Deactivated successfully.
Oct 11 04:55:25 compute-0 sudo[274474]: pam_unix(sudo:session): session closed for user root
Oct 11 04:55:25 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:55:25 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:55:25 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:55:25 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:55:25 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev aee5c96d-0677-45ec-8fed-81f664a5ec50 does not exist
Oct 11 04:55:25 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev f99ac49e-90fa-4756-b237-adaa9b2096e2 does not exist
Oct 11 04:55:25 compute-0 sudo[275328]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:55:25 compute-0 sudo[275328]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:55:25 compute-0 sudo[275328]: pam_unix(sudo:session): session closed for user root
Oct 11 04:55:25 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.14947 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 04:55:25 compute-0 sudo[275385]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 04:55:25 compute-0 sudo[275385]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:55:25 compute-0 sudo[275385]: pam_unix(sudo:session): session closed for user root
Oct 11 04:55:26 compute-0 ovs-appctl[275508]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Oct 11 04:55:26 compute-0 ovs-appctl[275512]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Oct 11 04:55:26 compute-0 ovs-appctl[275518]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Oct 11 04:55:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:55:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:55:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:55:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:55:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:55:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:55:26 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/1535938470' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Oct 11 04:55:26 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:55:26 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:55:26 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail", "format": "json-pretty"} v 0) v1
Oct 11 04:55:26 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3721021808' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 11 04:55:26 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json-pretty"} v 0) v1
Oct 11 04:55:26 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1592770630' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Oct 11 04:55:26 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1036: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:55:27 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs dump", "format": "json-pretty"} v 0) v1
Oct 11 04:55:27 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2961326037' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Oct 11 04:55:27 compute-0 ceph-mon[74243]: from='client.14947 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 04:55:27 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/3721021808' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 11 04:55:27 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/1592770630' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Oct 11 04:55:27 compute-0 ceph-mon[74243]: pgmap v1036: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:55:27 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/2961326037' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Oct 11 04:55:27 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs ls", "format": "json-pretty"} v 0) v1
Oct 11 04:55:27 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1057068709' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Oct 11 04:55:27 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.14957 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 04:55:28 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds stat", "format": "json-pretty"} v 0) v1
Oct 11 04:55:28 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3440181199' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Oct 11 04:55:28 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/1057068709' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Oct 11 04:55:28 compute-0 podman[276408]: 2025-10-11 04:55:28.429554846 +0000 UTC m=+0.063719689 container health_status 981a12d55b51d26e636fa4bc8b08d383168dc6afe664f12473554b2b0c589967 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 11 04:55:28 compute-0 podman[276406]: 2025-10-11 04:55:28.455434385 +0000 UTC m=+0.094691475 container health_status 4f5a84ad32cb899a70b226fbd9c8f419e9440a5ac99b188805a8bf8e9253a2d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, container_name=iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 11 04:55:28 compute-0 podman[276402]: 2025-10-11 04:55:28.47919249 +0000 UTC m=+0.118464521 container health_status 086abfbe8dbe9e4742c5d0e8540a7653c1c0a5c00ecda3b6e948040a718938cb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3)
Oct 11 04:55:28 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json-pretty"} v 0) v1
Oct 11 04:55:28 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2108828367' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Oct 11 04:55:28 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1037: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:55:29 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.14963 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 04:55:29 compute-0 ceph-mon[74243]: from='client.14957 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 04:55:29 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/3440181199' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Oct 11 04:55:29 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/2108828367' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Oct 11 04:55:29 compute-0 ceph-mon[74243]: pgmap v1037: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:55:29 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json-pretty"} v 0) v1
Oct 11 04:55:29 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/290881912' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Oct 11 04:55:29 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.14967 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 04:55:30 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.14969 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 04:55:30 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:55:30 compute-0 ceph-mon[74243]: from='client.14963 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 04:55:30 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/290881912' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Oct 11 04:55:30 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd dump", "format": "json-pretty"} v 0) v1
Oct 11 04:55:30 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3950989993' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Oct 11 04:55:30 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1038: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:55:30 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd numa-status", "format": "json-pretty"} v 0) v1
Oct 11 04:55:30 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2017083814' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Oct 11 04:55:31 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.14975 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 04:55:31 compute-0 ceph-mon[74243]: from='client.14967 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 04:55:31 compute-0 ceph-mon[74243]: from='client.14969 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 04:55:31 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/3950989993' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Oct 11 04:55:31 compute-0 ceph-mon[74243]: pgmap v1038: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:55:31 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/2017083814' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Oct 11 04:55:31 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.14977 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 04:55:31 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:55:31 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:55:31 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:55:31 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 11 04:55:31 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:55:31 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 11 04:55:31 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:55:31 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:55:31 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:55:31 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 11 04:55:31 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:55:31 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 11 04:55:31 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:55:31 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:55:31 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:55:31 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:55:31 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:55:31 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:55:31 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:55:31 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:55:31 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:55:31 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 04:55:32 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"} v 0) v1
Oct 11 04:55:32 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/781712618' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 11 04:55:32 compute-0 ceph-mon[74243]: from='client.14975 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 04:55:32 compute-0 ceph-mon[74243]: from='client.14977 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 04:55:32 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/781712618' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 11 04:55:32 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd stat", "format": "json-pretty"} v 0) v1
Oct 11 04:55:32 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/923974807' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Oct 11 04:55:32 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1039: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:55:32 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.14983 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 04:55:33 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.14985 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 04:55:33 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/923974807' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Oct 11 04:55:33 compute-0 ceph-mon[74243]: pgmap v1039: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:55:33 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct 11 04:55:33 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/815127458' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 11 04:55:34 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "time-sync-status", "format": "json-pretty"} v 0) v1
Oct 11 04:55:34 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4049928388' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Oct 11 04:55:34 compute-0 ceph-mon[74243]: from='client.14983 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 04:55:34 compute-0 ceph-mon[74243]: from='client.14985 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 04:55:34 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/815127458' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 11 04:55:34 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/4049928388' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Oct 11 04:55:34 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1040: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:55:35 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:55:35 compute-0 ceph-mon[74243]: pgmap v1040: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:55:35 compute-0 virtqemud[259122]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Oct 11 04:55:35 compute-0 podman[277258]: 2025-10-11 04:55:35.705238735 +0000 UTC m=+0.105395143 container health_status 7d738271425699243ade5f883e5c9759d51b96fd0ce562173990a66d2fbaffdf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 11 04:55:36 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1041: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:55:36 compute-0 ceph-mon[74243]: pgmap v1041: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:55:38 compute-0 systemd[1]: Starting Time & Date Service...
Oct 11 04:55:38 compute-0 systemd[1]: Started Time & Date Service.
Oct 11 04:55:38 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1042: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:55:38 compute-0 ceph-mon[74243]: pgmap v1042: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:55:40 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:55:40 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1043: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:55:40 compute-0 ceph-mon[74243]: pgmap v1043: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:55:42 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1044: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:55:42 compute-0 ceph-mon[74243]: pgmap v1044: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:55:44 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1045: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:55:44 compute-0 ceph-mon[74243]: pgmap v1045: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:55:45 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:55:46 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1046: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:55:46 compute-0 ceph-mon[74243]: pgmap v1046: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:55:48 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1047: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:55:48 compute-0 ceph-mon[74243]: pgmap v1047: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:55:50 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:55:50 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1048: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:55:50 compute-0 ceph-mon[74243]: pgmap v1048: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:55:52 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1049: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:55:52 compute-0 ceph-mon[74243]: pgmap v1049: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:55:54 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1050: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:55:54 compute-0 ceph-mon[74243]: pgmap v1050: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:55:55 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:55:56 compute-0 ceph-mgr[74542]: [balancer INFO root] Optimize plan auto_2025-10-11_04:55:56
Oct 11 04:55:56 compute-0 ceph-mgr[74542]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:55:56 compute-0 ceph-mgr[74542]: [balancer INFO root] do_upmap
Oct 11 04:55:56 compute-0 ceph-mgr[74542]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.meta', 'cephfs.cephfs.data', 'backups', 'default.rgw.log', 'volumes', 'vms', '.rgw.root', '.mgr', 'images', 'default.rgw.control']
Oct 11 04:55:56 compute-0 ceph-mgr[74542]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:55:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:55:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:55:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:55:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:55:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:55:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:55:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:55:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:55:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:55:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:55:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:55:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:55:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:55:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:55:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:55:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:55:56 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1051: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:55:56 compute-0 ceph-mon[74243]: pgmap v1051: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:55:57 compute-0 ceph-mgr[74542]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1439243141
Oct 11 04:55:57 compute-0 rsyslogd[1004]: imjournal: 29545 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Oct 11 04:55:58 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1052: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:55:58 compute-0 ceph-mon[74243]: pgmap v1052: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:55:59 compute-0 podman[277463]: 2025-10-11 04:55:59.440169045 +0000 UTC m=+0.096507231 container health_status 4f5a84ad32cb899a70b226fbd9c8f419e9440a5ac99b188805a8bf8e9253a2d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=iscsid)
Oct 11 04:55:59 compute-0 podman[277464]: 2025-10-11 04:55:59.455560201 +0000 UTC m=+0.094944512 container health_status 981a12d55b51d26e636fa4bc8b08d383168dc6afe664f12473554b2b0c589967 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:55:59 compute-0 podman[277462]: 2025-10-11 04:55:59.46472791 +0000 UTC m=+0.120306097 container health_status 086abfbe8dbe9e4742c5d0e8540a7653c1c0a5c00ecda3b6e948040a718938cb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Oct 11 04:56:00 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:56:00 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1053: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:56:00 compute-0 ceph-mon[74243]: pgmap v1053: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:56:02 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1054: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:56:03 compute-0 ceph-mon[74243]: pgmap v1054: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:56:03 compute-0 sudo[269897]: pam_unix(sudo:session): session closed for user root
Oct 11 04:56:03 compute-0 sshd-session[269896]: Received disconnect from 192.168.122.10 port 47194:11: disconnected by user
Oct 11 04:56:03 compute-0 sshd-session[269896]: Disconnected from user zuul 192.168.122.10 port 47194
Oct 11 04:56:03 compute-0 sshd-session[269893]: pam_unix(sshd:session): session closed for user zuul
Oct 11 04:56:03 compute-0 systemd[1]: session-54.scope: Deactivated successfully.
Oct 11 04:56:03 compute-0 systemd[1]: session-54.scope: Consumed 2min 28.440s CPU time, 777.4M memory peak, read 318.3M from disk, written 171.4M to disk.
Oct 11 04:56:03 compute-0 systemd-logind[801]: Session 54 logged out. Waiting for processes to exit.
Oct 11 04:56:03 compute-0 systemd-logind[801]: Removed session 54.
Oct 11 04:56:03 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:56:03 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/724246231' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:56:03 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:56:03 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/724246231' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:56:03 compute-0 sshd-session[277525]: Accepted publickey for zuul from 192.168.122.10 port 38992 ssh2: ECDSA SHA256:fsXfQ2jtOYwhi5zevC2AykH2PVPp1BPcbFOVNPoZIaQ
Oct 11 04:56:03 compute-0 systemd-logind[801]: New session 55 of user zuul.
Oct 11 04:56:03 compute-0 systemd[1]: Started Session 55 of User zuul.
Oct 11 04:56:03 compute-0 sshd-session[277525]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 11 04:56:03 compute-0 sudo[277529]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/cat /var/tmp/sos-osp/sosreport-compute-0-2025-10-11-gofkwui.tar.xz
Oct 11 04:56:03 compute-0 sudo[277529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:56:03 compute-0 sudo[277529]: pam_unix(sudo:session): session closed for user root
Oct 11 04:56:03 compute-0 sshd-session[277528]: Received disconnect from 192.168.122.10 port 38992:11: disconnected by user
Oct 11 04:56:03 compute-0 sshd-session[277528]: Disconnected from user zuul 192.168.122.10 port 38992
Oct 11 04:56:03 compute-0 sshd-session[277525]: pam_unix(sshd:session): session closed for user zuul
Oct 11 04:56:03 compute-0 systemd[1]: session-55.scope: Deactivated successfully.
Oct 11 04:56:03 compute-0 systemd-logind[801]: Session 55 logged out. Waiting for processes to exit.
Oct 11 04:56:03 compute-0 systemd-logind[801]: Removed session 55.
Oct 11 04:56:03 compute-0 sshd-session[277554]: Accepted publickey for zuul from 192.168.122.10 port 38996 ssh2: ECDSA SHA256:fsXfQ2jtOYwhi5zevC2AykH2PVPp1BPcbFOVNPoZIaQ
Oct 11 04:56:03 compute-0 systemd-logind[801]: New session 56 of user zuul.
Oct 11 04:56:03 compute-0 systemd[1]: Started Session 56 of User zuul.
Oct 11 04:56:03 compute-0 sshd-session[277554]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 11 04:56:03 compute-0 sudo[277558]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/rm -rf /var/tmp/sos-osp
Oct 11 04:56:03 compute-0 sudo[277558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:56:03 compute-0 sudo[277558]: pam_unix(sudo:session): session closed for user root
Oct 11 04:56:03 compute-0 sshd-session[277557]: Received disconnect from 192.168.122.10 port 38996:11: disconnected by user
Oct 11 04:56:03 compute-0 sshd-session[277557]: Disconnected from user zuul 192.168.122.10 port 38996
Oct 11 04:56:03 compute-0 sshd-session[277554]: pam_unix(sshd:session): session closed for user zuul
Oct 11 04:56:03 compute-0 systemd[1]: session-56.scope: Deactivated successfully.
Oct 11 04:56:03 compute-0 systemd-logind[801]: Session 56 logged out. Waiting for processes to exit.
Oct 11 04:56:03 compute-0 systemd-logind[801]: Removed session 56.
Oct 11 04:56:04 compute-0 ceph-mon[74243]: from='client.? 192.168.122.10:0/724246231' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:56:04 compute-0 ceph-mon[74243]: from='client.? 192.168.122.10:0/724246231' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:56:04 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1055: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:56:05 compute-0 ceph-mon[74243]: pgmap v1055: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:56:05 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:56:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:56:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:56:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:56:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:56:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 11 04:56:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:56:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 11 04:56:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:56:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:56:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:56:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 11 04:56:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:56:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 11 04:56:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:56:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:56:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:56:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:56:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:56:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:56:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:56:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:56:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:56:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 04:56:06 compute-0 podman[277583]: 2025-10-11 04:56:06.458513733 +0000 UTC m=+0.108675386 container health_status 7d738271425699243ade5f883e5c9759d51b96fd0ce562173990a66d2fbaffdf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Oct 11 04:56:06 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1056: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:56:06 compute-0 ceph-mon[74243]: pgmap v1056: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:56:08 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct 11 04:56:08 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 11 04:56:08 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1057: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:56:08 compute-0 ceph-mon[74243]: pgmap v1057: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:56:10 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:56:10 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1058: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:56:10 compute-0 ceph-mon[74243]: pgmap v1058: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:56:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:56:11.022 161813 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:56:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:56:11.023 161813 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:56:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:56:11.023 161813 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:56:12 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1059: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:56:12 compute-0 ceph-mon[74243]: pgmap v1059: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:56:14 compute-0 nova_compute[259400]: 2025-10-11 04:56:14.197 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:56:14 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1060: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:56:14 compute-0 ceph-mon[74243]: pgmap v1060: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:56:15 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:56:16 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1061: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:56:16 compute-0 ceph-mon[74243]: pgmap v1061: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:56:17 compute-0 nova_compute[259400]: 2025-10-11 04:56:17.218 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:56:17 compute-0 nova_compute[259400]: 2025-10-11 04:56:17.218 2 DEBUG nova.compute.manager [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 11 04:56:17 compute-0 nova_compute[259400]: 2025-10-11 04:56:17.246 2 DEBUG nova.compute.manager [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 11 04:56:18 compute-0 nova_compute[259400]: 2025-10-11 04:56:18.225 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:56:18 compute-0 nova_compute[259400]: 2025-10-11 04:56:18.225 2 DEBUG nova.compute.manager [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 04:56:18 compute-0 nova_compute[259400]: 2025-10-11 04:56:18.225 2 DEBUG nova.compute.manager [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 11 04:56:18 compute-0 nova_compute[259400]: 2025-10-11 04:56:18.242 2 DEBUG nova.compute.manager [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 11 04:56:18 compute-0 nova_compute[259400]: 2025-10-11 04:56:18.242 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:56:18 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1062: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:56:18 compute-0 ceph-mon[74243]: pgmap v1062: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:56:19 compute-0 nova_compute[259400]: 2025-10-11 04:56:19.196 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:56:19 compute-0 nova_compute[259400]: 2025-10-11 04:56:19.221 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:56:19 compute-0 nova_compute[259400]: 2025-10-11 04:56:19.222 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:56:19 compute-0 nova_compute[259400]: 2025-10-11 04:56:19.222 2 DEBUG nova.compute.manager [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 04:56:20 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:56:20 compute-0 nova_compute[259400]: 2025-10-11 04:56:20.197 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:56:20 compute-0 nova_compute[259400]: 2025-10-11 04:56:20.197 2 DEBUG nova.compute.manager [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 11 04:56:20 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1063: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:56:20 compute-0 ceph-mon[74243]: pgmap v1063: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:56:21 compute-0 nova_compute[259400]: 2025-10-11 04:56:21.222 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:56:22 compute-0 nova_compute[259400]: 2025-10-11 04:56:22.191 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:56:22 compute-0 nova_compute[259400]: 2025-10-11 04:56:22.195 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:56:22 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1064: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:56:22 compute-0 ceph-mon[74243]: pgmap v1064: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:56:23 compute-0 nova_compute[259400]: 2025-10-11 04:56:23.196 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:56:23 compute-0 nova_compute[259400]: 2025-10-11 04:56:23.196 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:56:23 compute-0 nova_compute[259400]: 2025-10-11 04:56:23.231 2 DEBUG oslo_concurrency.lockutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:56:23 compute-0 nova_compute[259400]: 2025-10-11 04:56:23.232 2 DEBUG oslo_concurrency.lockutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:56:23 compute-0 nova_compute[259400]: 2025-10-11 04:56:23.232 2 DEBUG oslo_concurrency.lockutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:56:23 compute-0 nova_compute[259400]: 2025-10-11 04:56:23.232 2 DEBUG nova.compute.resource_tracker [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 04:56:23 compute-0 nova_compute[259400]: 2025-10-11 04:56:23.233 2 DEBUG oslo_concurrency.processutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:56:23 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:56:23 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4196805083' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:56:23 compute-0 nova_compute[259400]: 2025-10-11 04:56:23.668 2 DEBUG oslo_concurrency.processutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
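The two processutils lines above show the resource tracker collecting Ceph capacity by shelling out to `ceph df --format=json`. A hedged sketch of the same call using only the standard library: the client id and conf path are taken from the log line, while the JSON keys read below are an assumption about the Ceph release in use.

    import json
    import subprocess

    cmd = ["ceph", "df", "--format=json",
           "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
    out = subprocess.run(cmd, check=True, capture_output=True, text=True).stdout
    report = json.loads(out)

    # Cluster-wide totals; key names can vary between Ceph releases,
    # so treat them as assumptions rather than a stable contract.
    totals = report.get("stats", {})
    print(totals.get("total_bytes"), totals.get("total_avail_bytes"))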
Oct 11 04:56:23 compute-0 nova_compute[259400]: 2025-10-11 04:56:23.844 2 WARNING nova.virt.libvirt.driver [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 04:56:23 compute-0 nova_compute[259400]: 2025-10-11 04:56:23.845 2 DEBUG nova.compute.resource_tracker [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5024MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 04:56:23 compute-0 nova_compute[259400]: 2025-10-11 04:56:23.846 2 DEBUG oslo_concurrency.lockutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:56:23 compute-0 nova_compute[259400]: 2025-10-11 04:56:23.846 2 DEBUG oslo_concurrency.lockutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:56:23 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/4196805083' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:56:24 compute-0 nova_compute[259400]: 2025-10-11 04:56:24.176 2 DEBUG nova.compute.resource_tracker [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 04:56:24 compute-0 nova_compute[259400]: 2025-10-11 04:56:24.177 2 DEBUG nova.compute.resource_tracker [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 04:56:24 compute-0 nova_compute[259400]: 2025-10-11 04:56:24.296 2 DEBUG oslo_concurrency.processutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:56:24 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:56:24 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3861443782' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:56:24 compute-0 nova_compute[259400]: 2025-10-11 04:56:24.793 2 DEBUG oslo_concurrency.processutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:56:24 compute-0 nova_compute[259400]: 2025-10-11 04:56:24.801 2 DEBUG nova.compute.provider_tree [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Inventory has not changed in ProviderTree for provider: 1f05a244-23b6-4149-9b5a-a525e5860d18 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:56:24 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1065: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:56:24 compute-0 nova_compute[259400]: 2025-10-11 04:56:24.824 2 DEBUG nova.scheduler.client.report [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Inventory has not changed for provider 1f05a244-23b6-4149-9b5a-a525e5860d18 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
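A short worked example of what that inventory means for scheduling, assuming the usual placement capacity formula capacity = (total - reserved) * allocation_ratio; the numbers below are copied from the line above.

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 0,   "allocation_ratio": 0.9},
    }

    for name, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(name, capacity)
    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB ~53.1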
Oct 11 04:56:24 compute-0 nova_compute[259400]: 2025-10-11 04:56:24.827 2 DEBUG nova.compute.resource_tracker [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 04:56:24 compute-0 nova_compute[259400]: 2025-10-11 04:56:24.827 2 DEBUG oslo_concurrency.lockutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.981s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
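The acquire/release pairs around "compute_resources" come from oslo.concurrency. A minimal sketch of the same pattern with lockutils.synchronized — the function name and body are illustrative; the "waited"/"held" timings are what the library logs at DEBUG:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("compute_resources")
    def update_available_resource():
        # Runs with the named lock held; the DEBUG lines record how long the
        # caller waited for the lock and how long the lock was held.
        pass

    update_available_resource()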
Oct 11 04:56:24 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/3861443782' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:56:24 compute-0 ceph-mon[74243]: pgmap v1065: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:56:25 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:56:25 compute-0 sudo[277650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:56:25 compute-0 sudo[277650]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:56:25 compute-0 sudo[277650]: pam_unix(sudo:session): session closed for user root
Oct 11 04:56:26 compute-0 sudo[277675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:56:26 compute-0 sudo[277675]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:56:26 compute-0 sudo[277675]: pam_unix(sudo:session): session closed for user root
Oct 11 04:56:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:56:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:56:26 compute-0 sudo[277700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:56:26 compute-0 sudo[277700]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:56:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:56:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:56:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:56:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:56:26 compute-0 sudo[277700]: pam_unix(sudo:session): session closed for user root
Oct 11 04:56:26 compute-0 sudo[277725]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 04:56:26 compute-0 sudo[277725]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:56:26 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1066: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:56:26 compute-0 sudo[277725]: pam_unix(sudo:session): session closed for user root
Oct 11 04:56:26 compute-0 ceph-mon[74243]: pgmap v1066: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:56:26 compute-0 sudo[277780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:56:26 compute-0 sudo[277780]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:56:26 compute-0 sudo[277780]: pam_unix(sudo:session): session closed for user root
Oct 11 04:56:27 compute-0 sudo[277805]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:56:27 compute-0 sudo[277805]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:56:27 compute-0 sudo[277805]: pam_unix(sudo:session): session closed for user root
Oct 11 04:56:27 compute-0 sudo[277830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:56:27 compute-0 sudo[277830]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:56:27 compute-0 sudo[277830]: pam_unix(sudo:session): session closed for user root
Oct 11 04:56:27 compute-0 sudo[277855]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Oct 11 04:56:27 compute-0 sudo[277855]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:56:27 compute-0 sudo[277855]: pam_unix(sudo:session): session closed for user root
Oct 11 04:56:27 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:56:27 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:56:27 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:56:27 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:56:27 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:56:27 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:56:27 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:56:27 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:56:27 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:56:27 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:56:27 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev 86391386-73cf-4df6-8c6e-1ef7ce2c2c36 does not exist
Oct 11 04:56:27 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev e665feeb-ea99-4511-9c5a-98aa62105efb does not exist
Oct 11 04:56:27 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev 078e8874-df9a-4899-8110-b2dd6c6a0161 does not exist
Oct 11 04:56:27 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:56:27 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:56:27 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:56:27 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:56:27 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:56:27 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:56:27 compute-0 sudo[277899]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:56:27 compute-0 sudo[277899]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:56:27 compute-0 sudo[277899]: pam_unix(sudo:session): session closed for user root
Oct 11 04:56:27 compute-0 sudo[277924]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:56:27 compute-0 sudo[277924]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:56:27 compute-0 sudo[277924]: pam_unix(sudo:session): session closed for user root
Oct 11 04:56:27 compute-0 sudo[277949]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:56:27 compute-0 sudo[277949]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:56:27 compute-0 sudo[277949]: pam_unix(sudo:session): session closed for user root
Oct 11 04:56:27 compute-0 sudo[277974]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 04:56:27 compute-0 sudo[277974]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:56:28 compute-0 podman[278040]: 2025-10-11 04:56:28.182854258 +0000 UTC m=+0.043506582 container create daf1a73d9384368738fd523ebad41d77176c7c6e15931dd79ca12bcad0112069 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_turing, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 11 04:56:28 compute-0 systemd[1]: Started libpod-conmon-daf1a73d9384368738fd523ebad41d77176c7c6e15931dd79ca12bcad0112069.scope.
Oct 11 04:56:28 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:56:28 compute-0 podman[278040]: 2025-10-11 04:56:28.163626236 +0000 UTC m=+0.024278570 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:56:28 compute-0 podman[278040]: 2025-10-11 04:56:28.269872849 +0000 UTC m=+0.130525263 container init daf1a73d9384368738fd523ebad41d77176c7c6e15931dd79ca12bcad0112069 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 11 04:56:28 compute-0 podman[278040]: 2025-10-11 04:56:28.28185756 +0000 UTC m=+0.142509884 container start daf1a73d9384368738fd523ebad41d77176c7c6e15931dd79ca12bcad0112069 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_turing, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:56:28 compute-0 podman[278040]: 2025-10-11 04:56:28.286719572 +0000 UTC m=+0.147371936 container attach daf1a73d9384368738fd523ebad41d77176c7c6e15931dd79ca12bcad0112069 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:56:28 compute-0 brave_turing[278056]: 167 167
Oct 11 04:56:28 compute-0 systemd[1]: libpod-daf1a73d9384368738fd523ebad41d77176c7c6e15931dd79ca12bcad0112069.scope: Deactivated successfully.
Oct 11 04:56:28 compute-0 podman[278040]: 2025-10-11 04:56:28.291690806 +0000 UTC m=+0.152343130 container died daf1a73d9384368738fd523ebad41d77176c7c6e15931dd79ca12bcad0112069 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_turing, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:56:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-58fccf26f256e6264f2351cda934467a33b3800fa6bd31f5df39de4f5a9d91b4-merged.mount: Deactivated successfully.
Oct 11 04:56:28 compute-0 podman[278040]: 2025-10-11 04:56:28.329799922 +0000 UTC m=+0.190452246 container remove daf1a73d9384368738fd523ebad41d77176c7c6e15931dd79ca12bcad0112069 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_turing, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:56:28 compute-0 systemd[1]: libpod-conmon-daf1a73d9384368738fd523ebad41d77176c7c6e15931dd79ca12bcad0112069.scope: Deactivated successfully.
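The create/init/start/attach/died/remove events for brave_turing above are the lifecycle of a single short-lived, auto-removed podman run of the Ceph image. A hedged sketch of such a one-shot run; the echo command is purely illustrative, not what cephadm actually executed inside the container:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # One-shot container: podman emits create/init/start/attach events, the
    # command runs, then the container dies and is removed because of --rm.
    subprocess.run(["podman", "run", "--rm", IMAGE, "echo", "hello"], check=True)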
Oct 11 04:56:28 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:56:28 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:56:28 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:56:28 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:56:28 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:56:28 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:56:28 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:56:28 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:56:28 compute-0 podman[278080]: 2025-10-11 04:56:28.567368618 +0000 UTC m=+0.064027376 container create 8baeb3af9f309dcae697eb5971b4b5733fbfcdea02e26f4c9d37e892f2606f1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_franklin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:56:28 compute-0 podman[278080]: 2025-10-11 04:56:28.540212877 +0000 UTC m=+0.036871725 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:56:28 compute-0 systemd[1]: Started libpod-conmon-8baeb3af9f309dcae697eb5971b4b5733fbfcdea02e26f4c9d37e892f2606f1a.scope.
Oct 11 04:56:28 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:56:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b7b529be90779a5f190222368a7ca402c27b36af7229b1ff5b720de0a24a108/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:56:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b7b529be90779a5f190222368a7ca402c27b36af7229b1ff5b720de0a24a108/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:56:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b7b529be90779a5f190222368a7ca402c27b36af7229b1ff5b720de0a24a108/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:56:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b7b529be90779a5f190222368a7ca402c27b36af7229b1ff5b720de0a24a108/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:56:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b7b529be90779a5f190222368a7ca402c27b36af7229b1ff5b720de0a24a108/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:56:28 compute-0 podman[278080]: 2025-10-11 04:56:28.689007818 +0000 UTC m=+0.185666586 container init 8baeb3af9f309dcae697eb5971b4b5733fbfcdea02e26f4c9d37e892f2606f1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_franklin, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 04:56:28 compute-0 podman[278080]: 2025-10-11 04:56:28.699220654 +0000 UTC m=+0.195879422 container start 8baeb3af9f309dcae697eb5971b4b5733fbfcdea02e26f4c9d37e892f2606f1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_franklin, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:56:28 compute-0 podman[278080]: 2025-10-11 04:56:28.703896121 +0000 UTC m=+0.200554899 container attach 8baeb3af9f309dcae697eb5971b4b5733fbfcdea02e26f4c9d37e892f2606f1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_franklin, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True)
Oct 11 04:56:28 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1067: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:56:29 compute-0 mystifying_franklin[278096]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:56:29 compute-0 mystifying_franklin[278096]: --> relative data size: 1.0
Oct 11 04:56:29 compute-0 mystifying_franklin[278096]: --> All data devices are unavailable
Oct 11 04:56:29 compute-0 systemd[1]: libpod-8baeb3af9f309dcae697eb5971b4b5733fbfcdea02e26f4c9d37e892f2606f1a.scope: Deactivated successfully.
Oct 11 04:56:29 compute-0 systemd[1]: libpod-8baeb3af9f309dcae697eb5971b4b5733fbfcdea02e26f4c9d37e892f2606f1a.scope: Consumed 1.107s CPU time.
Oct 11 04:56:29 compute-0 podman[278080]: 2025-10-11 04:56:29.855607386 +0000 UTC m=+1.352266174 container died 8baeb3af9f309dcae697eb5971b4b5733fbfcdea02e26f4c9d37e892f2606f1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_franklin, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:56:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-9b7b529be90779a5f190222368a7ca402c27b36af7229b1ff5b720de0a24a108-merged.mount: Deactivated successfully.
Oct 11 04:56:29 compute-0 podman[278080]: 2025-10-11 04:56:29.935863178 +0000 UTC m=+1.432521936 container remove 8baeb3af9f309dcae697eb5971b4b5733fbfcdea02e26f4c9d37e892f2606f1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_franklin, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:56:29 compute-0 systemd[1]: libpod-conmon-8baeb3af9f309dcae697eb5971b4b5733fbfcdea02e26f4c9d37e892f2606f1a.scope: Deactivated successfully.
Oct 11 04:56:29 compute-0 sudo[277974]: pam_unix(sudo:session): session closed for user root
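The `lvm batch` run just above reported all data devices as unavailable, and the `ceph-volume lvm list --format json` output printed further down (the elated_blackburn container) suggests why: each LV already carries OSD tags. A hedged sketch of reading that JSON, with key names taken from the logged output:

    import json

    def osds_by_lv(lvm_list_json: str) -> dict:
        """Map LV path -> OSD id from 'ceph-volume lvm list --format json'."""
        data = json.loads(lvm_list_json)
        mapping = {}
        for osd_id, entries in data.items():
            for entry in entries:
                mapping[entry["lv_path"]] = osd_id
        return mapping

    # Expected shape, per the log: {"/dev/ceph_vg0/ceph_lv0": "0", ...}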
Oct 11 04:56:30 compute-0 podman[278127]: 2025-10-11 04:56:30.003077243 +0000 UTC m=+0.107647070 container health_status 4f5a84ad32cb899a70b226fbd9c8f419e9440a5ac99b188805a8bf8e9253a2d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 11 04:56:30 compute-0 podman[278126]: 2025-10-11 04:56:30.029975528 +0000 UTC m=+0.143968291 container health_status 086abfbe8dbe9e4742c5d0e8540a7653c1c0a5c00ecda3b6e948040a718938cb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 11 04:56:30 compute-0 podman[278134]: 2025-10-11 04:56:30.032113381 +0000 UTC m=+0.125999840 container health_status 981a12d55b51d26e636fa4bc8b08d383168dc6afe664f12473554b2b0c589967 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.build-date=20251009, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 11 04:56:30 compute-0 sudo[278188]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:56:30 compute-0 sudo[278188]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:56:30 compute-0 sudo[278188]: pam_unix(sudo:session): session closed for user root
Oct 11 04:56:30 compute-0 sudo[278219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:56:30 compute-0 sudo[278219]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:56:30 compute-0 sudo[278219]: pam_unix(sudo:session): session closed for user root
Oct 11 04:56:30 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:56:30 compute-0 sudo[278244]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:56:30 compute-0 sudo[278244]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:56:30 compute-0 sudo[278244]: pam_unix(sudo:session): session closed for user root
Oct 11 04:56:30 compute-0 sudo[278269]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -- lvm list --format json
Oct 11 04:56:30 compute-0 sudo[278269]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:56:30 compute-0 ceph-mon[74243]: pgmap v1067: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:56:30 compute-0 podman[278335]: 2025-10-11 04:56:30.712458637 +0000 UTC m=+0.058441406 container create 75e3bcb5c96df8fb305e6f615a0ac781503f7c6353bb51c3e13dfaf928c1bc1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_cerf, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:56:30 compute-0 systemd[1]: Started libpod-conmon-75e3bcb5c96df8fb305e6f615a0ac781503f7c6353bb51c3e13dfaf928c1bc1b.scope.
Oct 11 04:56:30 compute-0 podman[278335]: 2025-10-11 04:56:30.693516482 +0000 UTC m=+0.039499241 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:56:30 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:56:30 compute-0 podman[278335]: 2025-10-11 04:56:30.809628714 +0000 UTC m=+0.155611543 container init 75e3bcb5c96df8fb305e6f615a0ac781503f7c6353bb51c3e13dfaf928c1bc1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:56:30 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1068: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:56:30 compute-0 podman[278335]: 2025-10-11 04:56:30.821369638 +0000 UTC m=+0.167352407 container start 75e3bcb5c96df8fb305e6f615a0ac781503f7c6353bb51c3e13dfaf928c1bc1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_cerf, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 11 04:56:30 compute-0 kind_cerf[278351]: 167 167
Oct 11 04:56:30 compute-0 podman[278335]: 2025-10-11 04:56:30.826547788 +0000 UTC m=+0.172530557 container attach 75e3bcb5c96df8fb305e6f615a0ac781503f7c6353bb51c3e13dfaf928c1bc1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_cerf, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:56:30 compute-0 systemd[1]: libpod-75e3bcb5c96df8fb305e6f615a0ac781503f7c6353bb51c3e13dfaf928c1bc1b.scope: Deactivated successfully.
Oct 11 04:56:30 compute-0 podman[278335]: 2025-10-11 04:56:30.827657186 +0000 UTC m=+0.173639925 container died 75e3bcb5c96df8fb305e6f615a0ac781503f7c6353bb51c3e13dfaf928c1bc1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_cerf, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 11 04:56:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-2cc607872a4a760f40264111a97840ca10ae6516467d6c4f3f0c802333b5deb1-merged.mount: Deactivated successfully.
Oct 11 04:56:30 compute-0 podman[278335]: 2025-10-11 04:56:30.869649208 +0000 UTC m=+0.215631947 container remove 75e3bcb5c96df8fb305e6f615a0ac781503f7c6353bb51c3e13dfaf928c1bc1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_cerf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:56:30 compute-0 systemd[1]: libpod-conmon-75e3bcb5c96df8fb305e6f615a0ac781503f7c6353bb51c3e13dfaf928c1bc1b.scope: Deactivated successfully.
Oct 11 04:56:31 compute-0 podman[278373]: 2025-10-11 04:56:31.134497148 +0000 UTC m=+0.078392676 container create 2f91966c113760e48477eb26f3ad82c8f614803119d6f19807065664fc1359cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_blackburn, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 11 04:56:31 compute-0 podman[278373]: 2025-10-11 04:56:31.102132527 +0000 UTC m=+0.046028095 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:56:31 compute-0 systemd[1]: Started libpod-conmon-2f91966c113760e48477eb26f3ad82c8f614803119d6f19807065664fc1359cb.scope.
Oct 11 04:56:31 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:56:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd182325e5d637d383758a6ab8ea1e3b7ca2a8bae579eefed0f6f289633e0261/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:56:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd182325e5d637d383758a6ab8ea1e3b7ca2a8bae579eefed0f6f289633e0261/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:56:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd182325e5d637d383758a6ab8ea1e3b7ca2a8bae579eefed0f6f289633e0261/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:56:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd182325e5d637d383758a6ab8ea1e3b7ca2a8bae579eefed0f6f289633e0261/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:56:31 compute-0 podman[278373]: 2025-10-11 04:56:31.260835466 +0000 UTC m=+0.204730994 container init 2f91966c113760e48477eb26f3ad82c8f614803119d6f19807065664fc1359cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_blackburn, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:56:31 compute-0 podman[278373]: 2025-10-11 04:56:31.283676929 +0000 UTC m=+0.227572457 container start 2f91966c113760e48477eb26f3ad82c8f614803119d6f19807065664fc1359cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_blackburn, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:56:31 compute-0 podman[278373]: 2025-10-11 04:56:31.288964321 +0000 UTC m=+0.232859899 container attach 2f91966c113760e48477eb26f3ad82c8f614803119d6f19807065664fc1359cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_blackburn, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:56:32 compute-0 elated_blackburn[278389]: {
Oct 11 04:56:32 compute-0 elated_blackburn[278389]:     "0": [
Oct 11 04:56:32 compute-0 elated_blackburn[278389]:         {
Oct 11 04:56:32 compute-0 elated_blackburn[278389]:             "devices": [
Oct 11 04:56:32 compute-0 elated_blackburn[278389]:                 "/dev/loop3"
Oct 11 04:56:32 compute-0 elated_blackburn[278389]:             ],
Oct 11 04:56:32 compute-0 elated_blackburn[278389]:             "lv_name": "ceph_lv0",
Oct 11 04:56:32 compute-0 elated_blackburn[278389]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:56:32 compute-0 elated_blackburn[278389]:             "lv_size": "21470642176",
Oct 11 04:56:32 compute-0 elated_blackburn[278389]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29ed28f5-c2da-4c6f-bb64-dc7391248f4a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:56:32 compute-0 elated_blackburn[278389]:             "lv_uuid": "SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf",
Oct 11 04:56:32 compute-0 elated_blackburn[278389]:             "name": "ceph_lv0",
Oct 11 04:56:32 compute-0 elated_blackburn[278389]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:56:32 compute-0 elated_blackburn[278389]:             "tags": {
Oct 11 04:56:32 compute-0 elated_blackburn[278389]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:56:32 compute-0 elated_blackburn[278389]:                 "ceph.block_uuid": "SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf",
Oct 11 04:56:32 compute-0 elated_blackburn[278389]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:56:32 compute-0 elated_blackburn[278389]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:56:32 compute-0 elated_blackburn[278389]:                 "ceph.cluster_name": "ceph",
Oct 11 04:56:32 compute-0 elated_blackburn[278389]:                 "ceph.crush_device_class": "",
Oct 11 04:56:32 compute-0 elated_blackburn[278389]:                 "ceph.encrypted": "0",
Oct 11 04:56:32 compute-0 elated_blackburn[278389]:                 "ceph.osd_fsid": "29ed28f5-c2da-4c6f-bb64-dc7391248f4a",
Oct 11 04:56:32 compute-0 elated_blackburn[278389]:                 "ceph.osd_id": "0",
Oct 11 04:56:32 compute-0 elated_blackburn[278389]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:56:32 compute-0 elated_blackburn[278389]:                 "ceph.type": "block",
Oct 11 04:56:32 compute-0 elated_blackburn[278389]:                 "ceph.vdo": "0"
Oct 11 04:56:32 compute-0 elated_blackburn[278389]:             },
Oct 11 04:56:32 compute-0 elated_blackburn[278389]:             "type": "block",
Oct 11 04:56:32 compute-0 elated_blackburn[278389]:             "vg_name": "ceph_vg0"
Oct 11 04:56:32 compute-0 elated_blackburn[278389]:         }
Oct 11 04:56:32 compute-0 elated_blackburn[278389]:     ],
Oct 11 04:56:32 compute-0 elated_blackburn[278389]:     "1": [
Oct 11 04:56:32 compute-0 elated_blackburn[278389]:         {
Oct 11 04:56:32 compute-0 elated_blackburn[278389]:             "devices": [
Oct 11 04:56:32 compute-0 elated_blackburn[278389]:                 "/dev/loop4"
Oct 11 04:56:32 compute-0 elated_blackburn[278389]:             ],
Oct 11 04:56:32 compute-0 elated_blackburn[278389]:             "lv_name": "ceph_lv1",
Oct 11 04:56:32 compute-0 elated_blackburn[278389]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:56:32 compute-0 elated_blackburn[278389]:             "lv_size": "21470642176",
Oct 11 04:56:32 compute-0 elated_blackburn[278389]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=235849fc-4683-43e5-9b6a-a0d6f8d1cee8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:56:32 compute-0 elated_blackburn[278389]:             "lv_uuid": "N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol",
Oct 11 04:56:32 compute-0 elated_blackburn[278389]:             "name": "ceph_lv1",
Oct 11 04:56:32 compute-0 elated_blackburn[278389]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:56:32 compute-0 elated_blackburn[278389]:             "tags": {
Oct 11 04:56:32 compute-0 elated_blackburn[278389]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:56:32 compute-0 elated_blackburn[278389]:                 "ceph.block_uuid": "N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol",
Oct 11 04:56:32 compute-0 elated_blackburn[278389]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:56:32 compute-0 elated_blackburn[278389]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:56:32 compute-0 elated_blackburn[278389]:                 "ceph.cluster_name": "ceph",
Oct 11 04:56:32 compute-0 elated_blackburn[278389]:                 "ceph.crush_device_class": "",
Oct 11 04:56:32 compute-0 elated_blackburn[278389]:                 "ceph.encrypted": "0",
Oct 11 04:56:32 compute-0 elated_blackburn[278389]:                 "ceph.osd_fsid": "235849fc-4683-43e5-9b6a-a0d6f8d1cee8",
Oct 11 04:56:32 compute-0 elated_blackburn[278389]:                 "ceph.osd_id": "1",
Oct 11 04:56:32 compute-0 elated_blackburn[278389]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:56:32 compute-0 elated_blackburn[278389]:                 "ceph.type": "block",
Oct 11 04:56:32 compute-0 elated_blackburn[278389]:                 "ceph.vdo": "0"
Oct 11 04:56:32 compute-0 elated_blackburn[278389]:             },
Oct 11 04:56:32 compute-0 elated_blackburn[278389]:             "type": "block",
Oct 11 04:56:32 compute-0 elated_blackburn[278389]:             "vg_name": "ceph_vg1"
Oct 11 04:56:32 compute-0 elated_blackburn[278389]:         }
Oct 11 04:56:32 compute-0 elated_blackburn[278389]:     ],
Oct 11 04:56:32 compute-0 elated_blackburn[278389]:     "2": [
Oct 11 04:56:32 compute-0 elated_blackburn[278389]:         {
Oct 11 04:56:32 compute-0 elated_blackburn[278389]:             "devices": [
Oct 11 04:56:32 compute-0 elated_blackburn[278389]:                 "/dev/loop5"
Oct 11 04:56:32 compute-0 elated_blackburn[278389]:             ],
Oct 11 04:56:32 compute-0 elated_blackburn[278389]:             "lv_name": "ceph_lv2",
Oct 11 04:56:32 compute-0 elated_blackburn[278389]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:56:32 compute-0 elated_blackburn[278389]:             "lv_size": "21470642176",
Oct 11 04:56:32 compute-0 elated_blackburn[278389]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=30486846-2cc7-4787-8e36-0fb42ef328c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:56:32 compute-0 elated_blackburn[278389]:             "lv_uuid": "HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a",
Oct 11 04:56:32 compute-0 elated_blackburn[278389]:             "name": "ceph_lv2",
Oct 11 04:56:32 compute-0 elated_blackburn[278389]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:56:32 compute-0 elated_blackburn[278389]:             "tags": {
Oct 11 04:56:32 compute-0 elated_blackburn[278389]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:56:32 compute-0 elated_blackburn[278389]:                 "ceph.block_uuid": "HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a",
Oct 11 04:56:32 compute-0 elated_blackburn[278389]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:56:32 compute-0 elated_blackburn[278389]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:56:32 compute-0 elated_blackburn[278389]:                 "ceph.cluster_name": "ceph",
Oct 11 04:56:32 compute-0 elated_blackburn[278389]:                 "ceph.crush_device_class": "",
Oct 11 04:56:32 compute-0 elated_blackburn[278389]:                 "ceph.encrypted": "0",
Oct 11 04:56:32 compute-0 elated_blackburn[278389]:                 "ceph.osd_fsid": "30486846-2cc7-4787-8e36-0fb42ef328c5",
Oct 11 04:56:32 compute-0 elated_blackburn[278389]:                 "ceph.osd_id": "2",
Oct 11 04:56:32 compute-0 elated_blackburn[278389]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:56:32 compute-0 elated_blackburn[278389]:                 "ceph.type": "block",
Oct 11 04:56:32 compute-0 elated_blackburn[278389]:                 "ceph.vdo": "0"
Oct 11 04:56:32 compute-0 elated_blackburn[278389]:             },
Oct 11 04:56:32 compute-0 elated_blackburn[278389]:             "type": "block",
Oct 11 04:56:32 compute-0 elated_blackburn[278389]:             "vg_name": "ceph_vg2"
Oct 11 04:56:32 compute-0 elated_blackburn[278389]:         }
Oct 11 04:56:32 compute-0 elated_blackburn[278389]:     ]
Oct 11 04:56:32 compute-0 elated_blackburn[278389]: }
Oct 11 04:56:32 compute-0 systemd[1]: libpod-2f91966c113760e48477eb26f3ad82c8f614803119d6f19807065664fc1359cb.scope: Deactivated successfully.
Oct 11 04:56:32 compute-0 podman[278373]: 2025-10-11 04:56:32.03677845 +0000 UTC m=+0.980673978 container died 2f91966c113760e48477eb26f3ad82c8f614803119d6f19807065664fc1359cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_blackburn, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:56:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-bd182325e5d637d383758a6ab8ea1e3b7ca2a8bae579eefed0f6f289633e0261-merged.mount: Deactivated successfully.
Oct 11 04:56:32 compute-0 podman[278373]: 2025-10-11 04:56:32.114127979 +0000 UTC m=+1.058023477 container remove 2f91966c113760e48477eb26f3ad82c8f614803119d6f19807065664fc1359cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_blackburn, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:56:32 compute-0 systemd[1]: libpod-conmon-2f91966c113760e48477eb26f3ad82c8f614803119d6f19807065664fc1359cb.scope: Deactivated successfully.
Oct 11 04:56:32 compute-0 sudo[278269]: pam_unix(sudo:session): session closed for user root
Oct 11 04:56:32 compute-0 sudo[278411]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:56:32 compute-0 sudo[278411]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:56:32 compute-0 sudo[278411]: pam_unix(sudo:session): session closed for user root
Oct 11 04:56:32 compute-0 sudo[278436]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:56:32 compute-0 sudo[278436]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:56:32 compute-0 sudo[278436]: pam_unix(sudo:session): session closed for user root
Oct 11 04:56:32 compute-0 sudo[278461]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:56:32 compute-0 sudo[278461]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:56:32 compute-0 sudo[278461]: pam_unix(sudo:session): session closed for user root
Oct 11 04:56:32 compute-0 ceph-mon[74243]: pgmap v1068: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:56:32 compute-0 sudo[278486]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -- raw list --format json
Oct 11 04:56:32 compute-0 sudo[278486]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:56:32 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1069: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:56:33 compute-0 podman[278553]: 2025-10-11 04:56:33.048926686 +0000 UTC m=+0.059112333 container create 4342c3af24bf4e2a26c49db5858ab77a15dd7d1daa47a1df96def97f37316b0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_yalow, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 11 04:56:33 compute-0 systemd[1]: Started libpod-conmon-4342c3af24bf4e2a26c49db5858ab77a15dd7d1daa47a1df96def97f37316b0d.scope.
Oct 11 04:56:33 compute-0 podman[278553]: 2025-10-11 04:56:33.027006956 +0000 UTC m=+0.037192613 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:56:33 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:56:33 compute-0 podman[278553]: 2025-10-11 04:56:33.141850865 +0000 UTC m=+0.152036612 container init 4342c3af24bf4e2a26c49db5858ab77a15dd7d1daa47a1df96def97f37316b0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_yalow, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 11 04:56:33 compute-0 podman[278553]: 2025-10-11 04:56:33.153048486 +0000 UTC m=+0.163234133 container start 4342c3af24bf4e2a26c49db5858ab77a15dd7d1daa47a1df96def97f37316b0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 11 04:56:33 compute-0 podman[278553]: 2025-10-11 04:56:33.157276022 +0000 UTC m=+0.167461699 container attach 4342c3af24bf4e2a26c49db5858ab77a15dd7d1daa47a1df96def97f37316b0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_yalow, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 11 04:56:33 compute-0 quirky_yalow[278569]: 167 167
Oct 11 04:56:33 compute-0 systemd[1]: libpod-4342c3af24bf4e2a26c49db5858ab77a15dd7d1daa47a1df96def97f37316b0d.scope: Deactivated successfully.
Oct 11 04:56:33 compute-0 podman[278553]: 2025-10-11 04:56:33.161520769 +0000 UTC m=+0.171706416 container died 4342c3af24bf4e2a26c49db5858ab77a15dd7d1daa47a1df96def97f37316b0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_yalow, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 11 04:56:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-257fcfdc0312b228af92cb2c10c11fe9e75a7c0d66fb14f652a4062fc298c208-merged.mount: Deactivated successfully.
Oct 11 04:56:33 compute-0 podman[278553]: 2025-10-11 04:56:33.207966993 +0000 UTC m=+0.218152640 container remove 4342c3af24bf4e2a26c49db5858ab77a15dd7d1daa47a1df96def97f37316b0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_yalow, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:56:33 compute-0 systemd[1]: libpod-conmon-4342c3af24bf4e2a26c49db5858ab77a15dd7d1daa47a1df96def97f37316b0d.scope: Deactivated successfully.
Oct 11 04:56:33 compute-0 podman[278594]: 2025-10-11 04:56:33.419532237 +0000 UTC m=+0.052126928 container create 43517221074b82ffc739bf4a497a1abc7c1deac1a931307329645281fe22ce45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_lumiere, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 11 04:56:33 compute-0 systemd[1]: Started libpod-conmon-43517221074b82ffc739bf4a497a1abc7c1deac1a931307329645281fe22ce45.scope.
Oct 11 04:56:33 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:56:33 compute-0 podman[278594]: 2025-10-11 04:56:33.398925261 +0000 UTC m=+0.031519962 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:56:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec2e157099be224cbcfefafe83e244cc7b47ea631ce4d053968d01864eef0070/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:56:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec2e157099be224cbcfefafe83e244cc7b47ea631ce4d053968d01864eef0070/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:56:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec2e157099be224cbcfefafe83e244cc7b47ea631ce4d053968d01864eef0070/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:56:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec2e157099be224cbcfefafe83e244cc7b47ea631ce4d053968d01864eef0070/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:56:33 compute-0 podman[278594]: 2025-10-11 04:56:33.514635612 +0000 UTC m=+0.147230343 container init 43517221074b82ffc739bf4a497a1abc7c1deac1a931307329645281fe22ce45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_lumiere, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 11 04:56:33 compute-0 podman[278594]: 2025-10-11 04:56:33.522508549 +0000 UTC m=+0.155103260 container start 43517221074b82ffc739bf4a497a1abc7c1deac1a931307329645281fe22ce45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_lumiere, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 11 04:56:33 compute-0 podman[278594]: 2025-10-11 04:56:33.526684094 +0000 UTC m=+0.159278825 container attach 43517221074b82ffc739bf4a497a1abc7c1deac1a931307329645281fe22ce45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Oct 11 04:56:34 compute-0 ceph-mon[74243]: pgmap v1069: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:56:34 compute-0 exciting_lumiere[278612]: {
Oct 11 04:56:34 compute-0 exciting_lumiere[278612]:     "235849fc-4683-43e5-9b6a-a0d6f8d1cee8": {
Oct 11 04:56:34 compute-0 exciting_lumiere[278612]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:56:34 compute-0 exciting_lumiere[278612]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:56:34 compute-0 exciting_lumiere[278612]:         "osd_id": 1,
Oct 11 04:56:34 compute-0 exciting_lumiere[278612]:         "osd_uuid": "235849fc-4683-43e5-9b6a-a0d6f8d1cee8",
Oct 11 04:56:34 compute-0 exciting_lumiere[278612]:         "type": "bluestore"
Oct 11 04:56:34 compute-0 exciting_lumiere[278612]:     },
Oct 11 04:56:34 compute-0 exciting_lumiere[278612]:     "29ed28f5-c2da-4c6f-bb64-dc7391248f4a": {
Oct 11 04:56:34 compute-0 exciting_lumiere[278612]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:56:34 compute-0 exciting_lumiere[278612]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:56:34 compute-0 exciting_lumiere[278612]:         "osd_id": 0,
Oct 11 04:56:34 compute-0 exciting_lumiere[278612]:         "osd_uuid": "29ed28f5-c2da-4c6f-bb64-dc7391248f4a",
Oct 11 04:56:34 compute-0 exciting_lumiere[278612]:         "type": "bluestore"
Oct 11 04:56:34 compute-0 exciting_lumiere[278612]:     },
Oct 11 04:56:34 compute-0 exciting_lumiere[278612]:     "30486846-2cc7-4787-8e36-0fb42ef328c5": {
Oct 11 04:56:34 compute-0 exciting_lumiere[278612]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:56:34 compute-0 exciting_lumiere[278612]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:56:34 compute-0 exciting_lumiere[278612]:         "osd_id": 2,
Oct 11 04:56:34 compute-0 exciting_lumiere[278612]:         "osd_uuid": "30486846-2cc7-4787-8e36-0fb42ef328c5",
Oct 11 04:56:34 compute-0 exciting_lumiere[278612]:         "type": "bluestore"
Oct 11 04:56:34 compute-0 exciting_lumiere[278612]:     }
Oct 11 04:56:34 compute-0 exciting_lumiere[278612]: }
Oct 11 04:56:34 compute-0 systemd[1]: libpod-43517221074b82ffc739bf4a497a1abc7c1deac1a931307329645281fe22ce45.scope: Deactivated successfully.
Oct 11 04:56:34 compute-0 systemd[1]: libpod-43517221074b82ffc739bf4a497a1abc7c1deac1a931307329645281fe22ce45.scope: Consumed 1.079s CPU time.
Oct 11 04:56:34 compute-0 podman[278645]: 2025-10-11 04:56:34.665396892 +0000 UTC m=+0.031270915 container died 43517221074b82ffc739bf4a497a1abc7c1deac1a931307329645281fe22ce45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_lumiere, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 11 04:56:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-ec2e157099be224cbcfefafe83e244cc7b47ea631ce4d053968d01864eef0070-merged.mount: Deactivated successfully.
Oct 11 04:56:34 compute-0 podman[278645]: 2025-10-11 04:56:34.731689904 +0000 UTC m=+0.097563897 container remove 43517221074b82ffc739bf4a497a1abc7c1deac1a931307329645281fe22ce45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_lumiere, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 11 04:56:34 compute-0 systemd[1]: libpod-conmon-43517221074b82ffc739bf4a497a1abc7c1deac1a931307329645281fe22ce45.scope: Deactivated successfully.
Oct 11 04:56:34 compute-0 sudo[278486]: pam_unix(sudo:session): session closed for user root
Oct 11 04:56:34 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:56:34 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:56:34 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:56:34 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1070: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:56:34 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:56:34 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev d77afd8b-d9df-420d-9eb5-0ea5085ce4be does not exist
Oct 11 04:56:34 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev 6c0b58f0-e01a-4314-942a-460abf13599b does not exist
Oct 11 04:56:34 compute-0 sudo[278660]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:56:34 compute-0 sudo[278660]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:56:34 compute-0 sudo[278660]: pam_unix(sudo:session): session closed for user root
Oct 11 04:56:34 compute-0 sudo[278685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 04:56:34 compute-0 sudo[278685]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:56:34 compute-0 sudo[278685]: pam_unix(sudo:session): session closed for user root
Oct 11 04:56:35 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:56:35 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:56:35 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:56:36 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1071: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:56:36 compute-0 ceph-mon[74243]: pgmap v1070: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:56:37 compute-0 podman[278710]: 2025-10-11 04:56:37.477946645 +0000 UTC m=+0.113640451 container health_status 7d738271425699243ade5f883e5c9759d51b96fd0ce562173990a66d2fbaffdf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 11 04:56:37 compute-0 ceph-mon[74243]: pgmap v1071: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:56:38 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1072: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:56:39 compute-0 ceph-mon[74243]: pgmap v1072: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:56:40 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:56:40 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1073: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:56:41 compute-0 ceph-mon[74243]: pgmap v1073: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:56:42 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1074: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:56:43 compute-0 ceph-mon[74243]: pgmap v1074: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:56:44 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1075: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:56:45 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:56:45 compute-0 ceph-mon[74243]: pgmap v1075: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:56:46 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1076: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:56:47 compute-0 ceph-mon[74243]: pgmap v1076: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:56:48 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1077: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:56:49 compute-0 ceph-mon[74243]: pgmap v1077: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:56:50 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:56:50 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1078: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:56:50 compute-0 ceph-osd[87458]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 04:56:50 compute-0 ceph-osd[87458]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 7213 writes, 28K keys, 7213 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 7213 writes, 1567 syncs, 4.60 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1470 writes, 4015 keys, 1470 commit groups, 1.0 writes per commit group, ingest: 2.26 MB, 0.00 MB/s
                                           Interval WAL: 1470 writes, 623 syncs, 2.36 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 11 04:56:51 compute-0 ceph-mon[74243]: pgmap v1078: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:56:52 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1079: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:56:53 compute-0 nova_compute[259400]: 2025-10-11 04:56:53.260 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:56:53 compute-0 ceph-mon[74243]: pgmap v1079: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:56:54 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1080: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:56:55 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:56:55 compute-0 ceph-osd[88467]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 04:56:55 compute-0 ceph-osd[88467]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 8982 writes, 34K keys, 8982 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 8982 writes, 2130 syncs, 4.22 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1930 writes, 5011 keys, 1930 commit groups, 1.0 writes per commit group, ingest: 2.65 MB, 0.00 MB/s
                                           Interval WAL: 1930 writes, 829 syncs, 2.33 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 11 04:56:55 compute-0 ceph-mon[74243]: pgmap v1080: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:56:56 compute-0 ceph-mgr[74542]: [balancer INFO root] Optimize plan auto_2025-10-11_04:56:56
Oct 11 04:56:56 compute-0 ceph-mgr[74542]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:56:56 compute-0 ceph-mgr[74542]: [balancer INFO root] do_upmap
Oct 11 04:56:56 compute-0 ceph-mgr[74542]: [balancer INFO root] pools ['backups', 'default.rgw.control', '.rgw.root', 'volumes', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'images', 'default.rgw.meta', 'default.rgw.log', 'vms', '.mgr']
Oct 11 04:56:56 compute-0 ceph-mgr[74542]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:56:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:56:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:56:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:56:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:56:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:56:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:56:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:56:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:56:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:56:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:56:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:56:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:56:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:56:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:56:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:56:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:56:56 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1081: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:56:57 compute-0 ceph-mon[74243]: pgmap v1081: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:56:58 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1082: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:56:59 compute-0 ceph-mon[74243]: pgmap v1082: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:57:00 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:57:00 compute-0 podman[278731]: 2025-10-11 04:57:00.448424192 +0000 UTC m=+0.092254039 container health_status 981a12d55b51d26e636fa4bc8b08d383168dc6afe664f12473554b2b0c589967 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 11 04:57:00 compute-0 podman[278730]: 2025-10-11 04:57:00.448482673 +0000 UTC m=+0.095778986 container health_status 4f5a84ad32cb899a70b226fbd9c8f419e9440a5ac99b188805a8bf8e9253a2d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=iscsid, io.buildah.version=1.41.3)
Oct 11 04:57:00 compute-0 podman[278729]: 2025-10-11 04:57:00.492597058 +0000 UTC m=+0.142217058 container health_status 086abfbe8dbe9e4742c5d0e8540a7653c1c0a5c00ecda3b6e948040a718938cb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Oct 11 04:57:00 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1083: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:57:01 compute-0 anacron[20485]: Job `cron.monthly' started
Oct 11 04:57:01 compute-0 anacron[20485]: Job `cron.monthly' terminated
Oct 11 04:57:01 compute-0 anacron[20485]: Normal exit (3 jobs run)
Oct 11 04:57:01 compute-0 ceph-osd[89565]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 04:57:01 compute-0 ceph-osd[89565]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 7393 writes, 28K keys, 7393 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 7393 writes, 1657 syncs, 4.46 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1837 writes, 4854 keys, 1837 commit groups, 1.0 writes per commit group, ingest: 2.52 MB, 0.00 MB/s
                                           Interval WAL: 1837 writes, 802 syncs, 2.29 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 11 04:57:01 compute-0 ceph-mon[74243]: pgmap v1083: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:57:02 compute-0 ceph-mgr[74542]: [devicehealth INFO root] Check health
Oct 11 04:57:02 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1084: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:57:02 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:57:02 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1456176780' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:57:02 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:57:02 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1456176780' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:57:03 compute-0 ceph-mon[74243]: pgmap v1084: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:57:03 compute-0 ceph-mon[74243]: from='client.? 192.168.122.10:0/1456176780' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:57:03 compute-0 ceph-mon[74243]: from='client.? 192.168.122.10:0/1456176780' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:57:04 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1085: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:57:05 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:57:05 compute-0 ceph-mon[74243]: pgmap v1085: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:57:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:57:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:57:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:57:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:57:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 11 04:57:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:57:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 11 04:57:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:57:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:57:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:57:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 11 04:57:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:57:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 11 04:57:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:57:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:57:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:57:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:57:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:57:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:57:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:57:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:57:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:57:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 04:57:06 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1086: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:57:07 compute-0 ceph-mon[74243]: pgmap v1086: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:57:08 compute-0 podman[278797]: 2025-10-11 04:57:08.437084663 +0000 UTC m=+0.088818074 container health_status 7d738271425699243ade5f883e5c9759d51b96fd0ce562173990a66d2fbaffdf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009, config_id=ovn_metadata_agent)
Oct 11 04:57:08 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1087: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:57:10 compute-0 ceph-mon[74243]: pgmap v1087: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:57:10 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:57:10 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1088: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:57:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:57:11.024 161813 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:57:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:57:11.024 161813 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:57:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:57:11.025 161813 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
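The three ovn_metadata_agent DEBUG lines above are the standard acquire/acquired/released trace that oslo_concurrency emits around a named lock. A minimal sketch of the same pattern, with the lock name taken from the log and the function body a stand-in rather than neutron's actual ProcessMonitor code:

# Sketch of the oslo_concurrency lock pattern behind the three DEBUG lines
# above; the decorated function is illustrative, not neutron's implementation.
from oslo_concurrency import lockutils

@lockutils.synchronized('_check_child_processes')
def check_child_processes():
    # Work here is serialized under the named lock; at DEBUG level oslo logs
    # "Acquiring lock", "acquired ... waited Ns" and '"released" ... held Ns'.
    pass

check_child_processes()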
Oct 11 04:57:12 compute-0 ceph-mon[74243]: pgmap v1088: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:57:12 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1089: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:57:14 compute-0 ceph-mon[74243]: pgmap v1089: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:57:14 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1090: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:57:15 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:57:16 compute-0 ceph-mon[74243]: pgmap v1090: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:57:16 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1091: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:57:18 compute-0 ceph-mon[74243]: pgmap v1091: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:57:18 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1092: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:57:19 compute-0 nova_compute[259400]: 2025-10-11 04:57:19.196 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:57:19 compute-0 nova_compute[259400]: 2025-10-11 04:57:19.197 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:57:19 compute-0 nova_compute[259400]: 2025-10-11 04:57:19.197 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:57:19 compute-0 nova_compute[259400]: 2025-10-11 04:57:19.197 2 DEBUG nova.compute.manager [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
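The nova_compute entries above come from oslo_service's periodic task runner: each registered ComputeManager task is logged as it runs, and a task can bail out when its interval is disabled (hence "CONF.reclaim_instance_interval <= 0, skipping..."). A hedged sketch of that pattern, with the manager class and the 60-second spacing invented for illustration:

# Hedged sketch of the oslo_service periodic-task pattern that produces the
# "Running periodic task ..." lines above. The class name and spacing are
# illustrative, not nova's actual configuration.
from oslo_config import cfg
from oslo_service import periodic_task

CONF = cfg.CONF

class DemoManager(periodic_task.PeriodicTasks):
    def __init__(self):
        super().__init__(CONF)

    @periodic_task.periodic_task(spacing=60)
    def _poll_rescued_instances(self, context):
        # A task can return early when its interval is disabled, mirroring the
        # "reclaim_instance_interval <= 0, skipping" message above.
        pass

DemoManager().run_periodic_tasks(context=None)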
Oct 11 04:57:20 compute-0 ceph-mon[74243]: pgmap v1092: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:57:20 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:57:20 compute-0 nova_compute[259400]: 2025-10-11 04:57:20.198 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:57:20 compute-0 nova_compute[259400]: 2025-10-11 04:57:20.198 2 DEBUG nova.compute.manager [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 04:57:20 compute-0 nova_compute[259400]: 2025-10-11 04:57:20.199 2 DEBUG nova.compute.manager [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 11 04:57:20 compute-0 nova_compute[259400]: 2025-10-11 04:57:20.214 2 DEBUG nova.compute.manager [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 11 04:57:20 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1093: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:57:22 compute-0 ceph-mon[74243]: pgmap v1093: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:57:22 compute-0 nova_compute[259400]: 2025-10-11 04:57:22.196 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:57:22 compute-0 nova_compute[259400]: 2025-10-11 04:57:22.197 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:57:22 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1094: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:57:23 compute-0 nova_compute[259400]: 2025-10-11 04:57:23.192 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:57:23 compute-0 nova_compute[259400]: 2025-10-11 04:57:23.195 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:57:23 compute-0 nova_compute[259400]: 2025-10-11 04:57:23.266 2 DEBUG oslo_concurrency.lockutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:57:23 compute-0 nova_compute[259400]: 2025-10-11 04:57:23.266 2 DEBUG oslo_concurrency.lockutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:57:23 compute-0 nova_compute[259400]: 2025-10-11 04:57:23.267 2 DEBUG oslo_concurrency.lockutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:57:23 compute-0 nova_compute[259400]: 2025-10-11 04:57:23.267 2 DEBUG nova.compute.resource_tracker [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 04:57:23 compute-0 nova_compute[259400]: 2025-10-11 04:57:23.268 2 DEBUG oslo_concurrency.processutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:57:23 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:57:23 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2010676797' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:57:23 compute-0 nova_compute[259400]: 2025-10-11 04:57:23.743 2 DEBUG oslo_concurrency.processutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
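As the entries above show, the resource audit shells out to ceph df via oslo_concurrency.processutils, and the monitor logs the dispatched df command. A minimal sketch of that call and of pulling capacity out of the JSON it returns; the pool name 'vms' and the exact keys read are assumptions for illustration:

# Sketch of the "ceph df --format=json" call logged above. The pool name
# ('vms') and the JSON keys read here are assumptions for illustration.
import json
from oslo_concurrency import processutils

out, _err = processutils.execute(
    'ceph', 'df', '--format=json',
    '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
df = json.loads(out)

total_gib = df['stats']['total_bytes'] / 2**30
for pool in df['pools']:
    if pool['name'] == 'vms':                     # assumed pool name
        avail_gib = pool['stats']['max_avail'] / 2**30
        print(f"vms pool: {avail_gib:.1f} GiB available, "
              f"{total_gib:.1f} GiB raw cluster capacity")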
Oct 11 04:57:23 compute-0 nova_compute[259400]: 2025-10-11 04:57:23.980 2 WARNING nova.virt.libvirt.driver [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 04:57:23 compute-0 nova_compute[259400]: 2025-10-11 04:57:23.983 2 DEBUG nova.compute.resource_tracker [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5025MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 04:57:23 compute-0 nova_compute[259400]: 2025-10-11 04:57:23.983 2 DEBUG oslo_concurrency.lockutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:57:23 compute-0 nova_compute[259400]: 2025-10-11 04:57:23.984 2 DEBUG oslo_concurrency.lockutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:57:24 compute-0 ceph-mon[74243]: pgmap v1094: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:57:24 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/2010676797' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:57:24 compute-0 nova_compute[259400]: 2025-10-11 04:57:24.222 2 DEBUG nova.compute.resource_tracker [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 04:57:24 compute-0 nova_compute[259400]: 2025-10-11 04:57:24.222 2 DEBUG nova.compute.resource_tracker [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 04:57:24 compute-0 nova_compute[259400]: 2025-10-11 04:57:24.249 2 DEBUG nova.scheduler.client.report [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Refreshing inventories for resource provider 1f05a244-23b6-4149-9b5a-a525e5860d18 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 11 04:57:24 compute-0 nova_compute[259400]: 2025-10-11 04:57:24.279 2 DEBUG nova.scheduler.client.report [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Updating ProviderTree inventory for provider 1f05a244-23b6-4149-9b5a-a525e5860d18 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 11 04:57:24 compute-0 nova_compute[259400]: 2025-10-11 04:57:24.280 2 DEBUG nova.compute.provider_tree [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Updating inventory in ProviderTree for provider 1f05a244-23b6-4149-9b5a-a525e5860d18 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
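The inventory dictionary in the two entries above follows directly from the hypervisor view logged just before it (8 vCPUs, 7680 MB RAM, ~59 GB disk): the totals come from the host, while the allocation ratios (VCPU 4.0, MEMORY_MB 1.0 with 512 MB reserved, DISK_GB 0.9) scale what placement will actually schedule. A small sketch reproducing that mapping; the helper itself is illustrative, not nova's resource tracker:

# Sketch of how the hypervisor view maps onto the placement inventory logged
# above. Ratios and reserved values are copied from the log; the helper is
# illustrative, not nova's code.
def build_inventory(vcpus, ram_mb, disk_gb,
                    cpu_ratio=4.0, ram_ratio=1.0, disk_ratio=0.9,
                    reserved_ram_mb=512):
    return {
        'VCPU':      {'total': vcpus,   'reserved': 0,
                      'min_unit': 1, 'max_unit': vcpus,   'step_size': 1,
                      'allocation_ratio': cpu_ratio},
        'MEMORY_MB': {'total': ram_mb,  'reserved': reserved_ram_mb,
                      'min_unit': 1, 'max_unit': ram_mb,  'step_size': 1,
                      'allocation_ratio': ram_ratio},
        'DISK_GB':   {'total': disk_gb, 'reserved': 0,
                      'min_unit': 1, 'max_unit': disk_gb, 'step_size': 1,
                      'allocation_ratio': disk_ratio},
    }

inv = build_inventory(8, 7680, 59)
print(inv['VCPU']['total'] * inv['VCPU']['allocation_ratio'])   # 32 schedulable vCPUs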
Oct 11 04:57:24 compute-0 nova_compute[259400]: 2025-10-11 04:57:24.300 2 DEBUG nova.scheduler.client.report [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Refreshing aggregate associations for resource provider 1f05a244-23b6-4149-9b5a-a525e5860d18, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 11 04:57:24 compute-0 nova_compute[259400]: 2025-10-11 04:57:24.332 2 DEBUG nova.scheduler.client.report [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Refreshing trait associations for resource provider 1f05a244-23b6-4149-9b5a-a525e5860d18, traits: COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSSE3,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,COMPUTE_DEVICE_TAGGING,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_ACCELERATORS,HW_CPU_X86_SSE,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_ABM,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_FMA3,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SSE2,HW_CPU_X86_F16C,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_RESCUE_BFV,COMPUTE_NODE,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_BMI2,HW_CPU_X86_AMD_SVM,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE42,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SHA,HW_CPU_X86_AVX,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_BMI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE4A _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 11 04:57:24 compute-0 nova_compute[259400]: 2025-10-11 04:57:24.358 2 DEBUG oslo_concurrency.processutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:57:24 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:57:24 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/830768329' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:57:24 compute-0 nova_compute[259400]: 2025-10-11 04:57:24.818 2 DEBUG oslo_concurrency.processutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:57:24 compute-0 nova_compute[259400]: 2025-10-11 04:57:24.827 2 DEBUG nova.compute.provider_tree [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Inventory has not changed in ProviderTree for provider: 1f05a244-23b6-4149-9b5a-a525e5860d18 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:57:24 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1095: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:57:24 compute-0 nova_compute[259400]: 2025-10-11 04:57:24.845 2 DEBUG nova.scheduler.client.report [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Inventory has not changed for provider 1f05a244-23b6-4149-9b5a-a525e5860d18 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:57:24 compute-0 nova_compute[259400]: 2025-10-11 04:57:24.847 2 DEBUG nova.compute.resource_tracker [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 04:57:24 compute-0 nova_compute[259400]: 2025-10-11 04:57:24.848 2 DEBUG oslo_concurrency.lockutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.864s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:57:25 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/830768329' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:57:25 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:57:25 compute-0 nova_compute[259400]: 2025-10-11 04:57:25.850 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:57:26 compute-0 ceph-mon[74243]: pgmap v1095: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:57:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:57:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:57:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:57:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:57:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:57:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:57:26 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1096: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:57:28 compute-0 ceph-mon[74243]: pgmap v1096: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:57:28 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1097: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:57:30 compute-0 ceph-mon[74243]: pgmap v1097: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:57:30 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:57:30 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1098: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:57:31 compute-0 podman[278863]: 2025-10-11 04:57:31.451082161 +0000 UTC m=+0.088690411 container health_status 4f5a84ad32cb899a70b226fbd9c8f419e9440a5ac99b188805a8bf8e9253a2d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251009)
Oct 11 04:57:31 compute-0 podman[278864]: 2025-10-11 04:57:31.467655162 +0000 UTC m=+0.095659494 container health_status 981a12d55b51d26e636fa4bc8b08d383168dc6afe664f12473554b2b0c589967 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible)
Oct 11 04:57:31 compute-0 podman[278862]: 2025-10-11 04:57:31.497995874 +0000 UTC m=+0.140332892 container health_status 086abfbe8dbe9e4742c5d0e8540a7653c1c0a5c00ecda3b6e948040a718938cb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 11 04:57:32 compute-0 ceph-mon[74243]: pgmap v1098: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:57:32 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1099: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:57:34 compute-0 ceph-mon[74243]: pgmap v1099: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:57:34 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1100: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:57:35 compute-0 sudo[278925]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:57:35 compute-0 sudo[278925]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:57:35 compute-0 sudo[278925]: pam_unix(sudo:session): session closed for user root
Oct 11 04:57:35 compute-0 sudo[278950]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:57:35 compute-0 sudo[278950]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:57:35 compute-0 sudo[278950]: pam_unix(sudo:session): session closed for user root
Oct 11 04:57:35 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:57:35 compute-0 sudo[278975]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:57:35 compute-0 sudo[278975]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:57:35 compute-0 sudo[278975]: pam_unix(sudo:session): session closed for user root
Oct 11 04:57:35 compute-0 sudo[279000]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Oct 11 04:57:35 compute-0 sudo[279000]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:57:35 compute-0 sudo[279000]: pam_unix(sudo:session): session closed for user root
Oct 11 04:57:35 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:57:35 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:57:35 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:57:35 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:57:35 compute-0 sudo[279047]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:57:35 compute-0 sudo[279047]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:57:35 compute-0 sudo[279047]: pam_unix(sudo:session): session closed for user root
Oct 11 04:57:35 compute-0 sudo[279072]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:57:35 compute-0 sudo[279072]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:57:35 compute-0 sudo[279072]: pam_unix(sudo:session): session closed for user root
Oct 11 04:57:35 compute-0 sudo[279097]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:57:35 compute-0 sudo[279097]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:57:35 compute-0 sudo[279097]: pam_unix(sudo:session): session closed for user root
Oct 11 04:57:35 compute-0 sudo[279122]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 04:57:35 compute-0 sudo[279122]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:57:36 compute-0 sudo[279122]: pam_unix(sudo:session): session closed for user root
Oct 11 04:57:36 compute-0 ceph-mon[74243]: pgmap v1100: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:57:36 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:57:36 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:57:36 compute-0 sudo[279178]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:57:36 compute-0 sudo[279178]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:57:36 compute-0 sudo[279178]: pam_unix(sudo:session): session closed for user root
Oct 11 04:57:36 compute-0 sudo[279203]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:57:36 compute-0 sudo[279203]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:57:36 compute-0 sudo[279203]: pam_unix(sudo:session): session closed for user root
Oct 11 04:57:36 compute-0 sudo[279228]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:57:36 compute-0 sudo[279228]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:57:36 compute-0 sudo[279228]: pam_unix(sudo:session): session closed for user root
Oct 11 04:57:36 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1101: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:57:36 compute-0 sudo[279253]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -- inventory --format=json-pretty --filter-for-batch
Oct 11 04:57:36 compute-0 sudo[279253]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:57:37 compute-0 podman[279318]: 2025-10-11 04:57:37.365419399 +0000 UTC m=+0.065586598 container create 16de309e46346e70357ab26318d2706ac827371c9c5046ba1ece816f81046aa6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_goldstine, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 11 04:57:37 compute-0 systemd[1]: Started libpod-conmon-16de309e46346e70357ab26318d2706ac827371c9c5046ba1ece816f81046aa6.scope.
Oct 11 04:57:37 compute-0 podman[279318]: 2025-10-11 04:57:37.339015714 +0000 UTC m=+0.039182953 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:57:37 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:57:37 compute-0 podman[279318]: 2025-10-11 04:57:37.467033099 +0000 UTC m=+0.167200348 container init 16de309e46346e70357ab26318d2706ac827371c9c5046ba1ece816f81046aa6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_goldstine, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Oct 11 04:57:37 compute-0 podman[279318]: 2025-10-11 04:57:37.478072093 +0000 UTC m=+0.178239262 container start 16de309e46346e70357ab26318d2706ac827371c9c5046ba1ece816f81046aa6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_goldstine, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:57:37 compute-0 podman[279318]: 2025-10-11 04:57:37.481627011 +0000 UTC m=+0.181794260 container attach 16de309e46346e70357ab26318d2706ac827371c9c5046ba1ece816f81046aa6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_goldstine, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 11 04:57:37 compute-0 fervent_goldstine[279334]: 167 167
Oct 11 04:57:37 compute-0 systemd[1]: libpod-16de309e46346e70357ab26318d2706ac827371c9c5046ba1ece816f81046aa6.scope: Deactivated successfully.
Oct 11 04:57:37 compute-0 conmon[279334]: conmon 16de309e46346e70357a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-16de309e46346e70357ab26318d2706ac827371c9c5046ba1ece816f81046aa6.scope/container/memory.events
Oct 11 04:57:37 compute-0 podman[279318]: 2025-10-11 04:57:37.483709593 +0000 UTC m=+0.183876782 container died 16de309e46346e70357ab26318d2706ac827371c9c5046ba1ece816f81046aa6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_goldstine, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True)
Oct 11 04:57:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-0f748208463c9d6cebb2376a55f0f9a375e0f24bcc5bee62ec52039b98dd0734-merged.mount: Deactivated successfully.
Oct 11 04:57:37 compute-0 podman[279318]: 2025-10-11 04:57:37.534953724 +0000 UTC m=+0.235120873 container remove 16de309e46346e70357ab26318d2706ac827371c9c5046ba1ece816f81046aa6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_goldstine, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 11 04:57:37 compute-0 systemd[1]: libpod-conmon-16de309e46346e70357ab26318d2706ac827371c9c5046ba1ece816f81046aa6.scope: Deactivated successfully.
Oct 11 04:57:37 compute-0 podman[279358]: 2025-10-11 04:57:37.761872383 +0000 UTC m=+0.053797286 container create d7dfcad7b2110ce704e7406f858e3b03e1315bbaf3b8674f56f8ac2c0f36913b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_robinson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 11 04:57:37 compute-0 systemd[1]: Started libpod-conmon-d7dfcad7b2110ce704e7406f858e3b03e1315bbaf3b8674f56f8ac2c0f36913b.scope.
Oct 11 04:57:37 compute-0 podman[279358]: 2025-10-11 04:57:37.734944065 +0000 UTC m=+0.026869008 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:57:37 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:57:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6495e6399307f7ee0ded540ecec8a89b15b7ce0e0fe2d898b22d4c64dc5de461/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:57:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6495e6399307f7ee0ded540ecec8a89b15b7ce0e0fe2d898b22d4c64dc5de461/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:57:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6495e6399307f7ee0ded540ecec8a89b15b7ce0e0fe2d898b22d4c64dc5de461/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:57:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6495e6399307f7ee0ded540ecec8a89b15b7ce0e0fe2d898b22d4c64dc5de461/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:57:37 compute-0 podman[279358]: 2025-10-11 04:57:37.876278151 +0000 UTC m=+0.168203094 container init d7dfcad7b2110ce704e7406f858e3b03e1315bbaf3b8674f56f8ac2c0f36913b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_robinson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 11 04:57:37 compute-0 podman[279358]: 2025-10-11 04:57:37.887756415 +0000 UTC m=+0.179681308 container start d7dfcad7b2110ce704e7406f858e3b03e1315bbaf3b8674f56f8ac2c0f36913b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_robinson, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:57:37 compute-0 podman[279358]: 2025-10-11 04:57:37.891593211 +0000 UTC m=+0.183518094 container attach d7dfcad7b2110ce704e7406f858e3b03e1315bbaf3b8674f56f8ac2c0f36913b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_robinson, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:57:38 compute-0 ceph-mon[74243]: pgmap v1101: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:57:38 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1102: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:57:39 compute-0 priceless_robinson[279375]: [
Oct 11 04:57:39 compute-0 priceless_robinson[279375]:     {
Oct 11 04:57:39 compute-0 priceless_robinson[279375]:         "available": false,
Oct 11 04:57:39 compute-0 priceless_robinson[279375]:         "ceph_device": false,
Oct 11 04:57:39 compute-0 priceless_robinson[279375]:         "device_id": "QEMU_DVD-ROM_QM00001",
Oct 11 04:57:39 compute-0 priceless_robinson[279375]:         "lsm_data": {},
Oct 11 04:57:39 compute-0 priceless_robinson[279375]:         "lvs": [],
Oct 11 04:57:39 compute-0 priceless_robinson[279375]:         "path": "/dev/sr0",
Oct 11 04:57:39 compute-0 priceless_robinson[279375]:         "rejected_reasons": [
Oct 11 04:57:39 compute-0 priceless_robinson[279375]:             "Has a FileSystem",
Oct 11 04:57:39 compute-0 priceless_robinson[279375]:             "Insufficient space (<5GB)"
Oct 11 04:57:39 compute-0 priceless_robinson[279375]:         ],
Oct 11 04:57:39 compute-0 priceless_robinson[279375]:         "sys_api": {
Oct 11 04:57:39 compute-0 priceless_robinson[279375]:             "actuators": null,
Oct 11 04:57:39 compute-0 priceless_robinson[279375]:             "device_nodes": "sr0",
Oct 11 04:57:39 compute-0 priceless_robinson[279375]:             "devname": "sr0",
Oct 11 04:57:39 compute-0 priceless_robinson[279375]:             "human_readable_size": "482.00 KB",
Oct 11 04:57:39 compute-0 priceless_robinson[279375]:             "id_bus": "ata",
Oct 11 04:57:39 compute-0 priceless_robinson[279375]:             "model": "QEMU DVD-ROM",
Oct 11 04:57:39 compute-0 priceless_robinson[279375]:             "nr_requests": "2",
Oct 11 04:57:39 compute-0 priceless_robinson[279375]:             "parent": "/dev/sr0",
Oct 11 04:57:39 compute-0 priceless_robinson[279375]:             "partitions": {},
Oct 11 04:57:39 compute-0 priceless_robinson[279375]:             "path": "/dev/sr0",
Oct 11 04:57:39 compute-0 priceless_robinson[279375]:             "removable": "1",
Oct 11 04:57:39 compute-0 priceless_robinson[279375]:             "rev": "2.5+",
Oct 11 04:57:39 compute-0 priceless_robinson[279375]:             "ro": "0",
Oct 11 04:57:39 compute-0 priceless_robinson[279375]:             "rotational": "0",
Oct 11 04:57:39 compute-0 priceless_robinson[279375]:             "sas_address": "",
Oct 11 04:57:39 compute-0 priceless_robinson[279375]:             "sas_device_handle": "",
Oct 11 04:57:39 compute-0 priceless_robinson[279375]:             "scheduler_mode": "mq-deadline",
Oct 11 04:57:39 compute-0 priceless_robinson[279375]:             "sectors": 0,
Oct 11 04:57:39 compute-0 priceless_robinson[279375]:             "sectorsize": "2048",
Oct 11 04:57:39 compute-0 priceless_robinson[279375]:             "size": 493568.0,
Oct 11 04:57:39 compute-0 priceless_robinson[279375]:             "support_discard": "2048",
Oct 11 04:57:39 compute-0 priceless_robinson[279375]:             "type": "disk",
Oct 11 04:57:39 compute-0 priceless_robinson[279375]:             "vendor": "QEMU"
Oct 11 04:57:39 compute-0 priceless_robinson[279375]:         }
Oct 11 04:57:39 compute-0 priceless_robinson[279375]:     }
Oct 11 04:57:39 compute-0 priceless_robinson[279375]: ]
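The JSON block above is the ceph-volume inventory that cephadm gathers from the one-shot container: /dev/sr0 is reported with "available": false and two rejection reasons, so it will never be selected for an OSD. A short sketch of filtering output of this shape, assuming the JSON arrives on stdin (this is not cephadm's own code):

# Sketch: filter a ceph-volume inventory like the JSON above, keeping devices
# marked available and printing why the others were rejected. Reads stdin;
# not cephadm's own implementation.
import json, sys

devices = json.load(sys.stdin)
usable = [d['path'] for d in devices if d.get('available')]
for dev in devices:
    if not dev.get('available'):
        reasons = ', '.join(dev.get('rejected_reasons', []))
        print(f"skipping {dev['path']}: {reasons}")
        # -> "skipping /dev/sr0: Has a FileSystem, Insufficient space (<5GB)"
print("usable devices:", usable or "none")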
Oct 11 04:57:39 compute-0 systemd[1]: libpod-d7dfcad7b2110ce704e7406f858e3b03e1315bbaf3b8674f56f8ac2c0f36913b.scope: Deactivated successfully.
Oct 11 04:57:39 compute-0 podman[279358]: 2025-10-11 04:57:39.412464576 +0000 UTC m=+1.704389469 container died d7dfcad7b2110ce704e7406f858e3b03e1315bbaf3b8674f56f8ac2c0f36913b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_robinson, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:57:39 compute-0 systemd[1]: libpod-d7dfcad7b2110ce704e7406f858e3b03e1315bbaf3b8674f56f8ac2c0f36913b.scope: Consumed 1.609s CPU time.
Oct 11 04:57:39 compute-0 podman[281155]: 2025-10-11 04:57:39.412093996 +0000 UTC m=+0.065839094 container health_status 7d738271425699243ade5f883e5c9759d51b96fd0ce562173990a66d2fbaffdf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 11 04:57:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-6495e6399307f7ee0ded540ecec8a89b15b7ce0e0fe2d898b22d4c64dc5de461-merged.mount: Deactivated successfully.
Oct 11 04:57:39 compute-0 podman[279358]: 2025-10-11 04:57:39.472917025 +0000 UTC m=+1.764841888 container remove d7dfcad7b2110ce704e7406f858e3b03e1315bbaf3b8674f56f8ac2c0f36913b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_robinson, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:57:39 compute-0 systemd[1]: libpod-conmon-d7dfcad7b2110ce704e7406f858e3b03e1315bbaf3b8674f56f8ac2c0f36913b.scope: Deactivated successfully.
Oct 11 04:57:39 compute-0 sudo[279253]: pam_unix(sudo:session): session closed for user root
Oct 11 04:57:39 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:57:39 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:57:39 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:57:39 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:57:39 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:57:39 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:57:39 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:57:39 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:57:39 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:57:39 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:57:39 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev a21aea4a-ecae-45e9-b0c2-dc2751abf9c1 does not exist
Oct 11 04:57:39 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev b3d0a667-568f-4d2d-b791-887b5e8f7dff does not exist
Oct 11 04:57:39 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev d3f187f9-6e8d-4124-85cd-34ac1bd30561 does not exist
Oct 11 04:57:39 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:57:39 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:57:39 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:57:39 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:57:39 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:57:39 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:57:39 compute-0 sudo[281452]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:57:39 compute-0 sudo[281452]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:57:39 compute-0 sudo[281452]: pam_unix(sudo:session): session closed for user root
Oct 11 04:57:39 compute-0 sudo[281477]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:57:39 compute-0 sudo[281477]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:57:39 compute-0 sudo[281477]: pam_unix(sudo:session): session closed for user root
Oct 11 04:57:39 compute-0 sudo[281502]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:57:39 compute-0 sudo[281502]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:57:39 compute-0 sudo[281502]: pam_unix(sudo:session): session closed for user root
Oct 11 04:57:39 compute-0 sudo[281527]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 04:57:39 compute-0 sudo[281527]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:57:40 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:57:40 compute-0 podman[281592]: 2025-10-11 04:57:40.233539273 +0000 UTC m=+0.063613719 container create 50b006749b74599b2189f5fb8040ed7b49ca6e646d02a3e3659c9557956401c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:57:40 compute-0 systemd[1]: Started libpod-conmon-50b006749b74599b2189f5fb8040ed7b49ca6e646d02a3e3659c9557956401c9.scope.
Oct 11 04:57:40 compute-0 podman[281592]: 2025-10-11 04:57:40.207792644 +0000 UTC m=+0.037867140 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:57:40 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:57:40 compute-0 podman[281592]: 2025-10-11 04:57:40.337302407 +0000 UTC m=+0.167376903 container init 50b006749b74599b2189f5fb8040ed7b49ca6e646d02a3e3659c9557956401c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_lamport, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:57:40 compute-0 podman[281592]: 2025-10-11 04:57:40.344004573 +0000 UTC m=+0.174079009 container start 50b006749b74599b2189f5fb8040ed7b49ca6e646d02a3e3659c9557956401c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_lamport, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 11 04:57:40 compute-0 dreamy_lamport[281608]: 167 167
Oct 11 04:57:40 compute-0 podman[281592]: 2025-10-11 04:57:40.348276169 +0000 UTC m=+0.178350655 container attach 50b006749b74599b2189f5fb8040ed7b49ca6e646d02a3e3659c9557956401c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_lamport, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Oct 11 04:57:40 compute-0 systemd[1]: libpod-50b006749b74599b2189f5fb8040ed7b49ca6e646d02a3e3659c9557956401c9.scope: Deactivated successfully.
Oct 11 04:57:40 compute-0 podman[281592]: 2025-10-11 04:57:40.349869518 +0000 UTC m=+0.179943954 container died 50b006749b74599b2189f5fb8040ed7b49ca6e646d02a3e3659c9557956401c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_lamport, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 11 04:57:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-fcec03c3d73fc11f780b6aed32cb4dfdf5e1b510a1a13933698e99c93dce46bd-merged.mount: Deactivated successfully.
Oct 11 04:57:40 compute-0 podman[281592]: 2025-10-11 04:57:40.391101941 +0000 UTC m=+0.221176347 container remove 50b006749b74599b2189f5fb8040ed7b49ca6e646d02a3e3659c9557956401c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_lamport, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:57:40 compute-0 systemd[1]: libpod-conmon-50b006749b74599b2189f5fb8040ed7b49ca6e646d02a3e3659c9557956401c9.scope: Deactivated successfully.
Oct 11 04:57:40 compute-0 ceph-mon[74243]: pgmap v1102: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:57:40 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:57:40 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:57:40 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:57:40 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:57:40 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:57:40 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:57:40 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:57:40 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:57:40 compute-0 podman[281632]: 2025-10-11 04:57:40.631785441 +0000 UTC m=+0.066722086 container create 6d755fc61014d57904ce9390afac0c5bbbb4269d049aa0d4aa36342bc09c988a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:57:40 compute-0 systemd[1]: Started libpod-conmon-6d755fc61014d57904ce9390afac0c5bbbb4269d049aa0d4aa36342bc09c988a.scope.
Oct 11 04:57:40 compute-0 podman[281632]: 2025-10-11 04:57:40.603680594 +0000 UTC m=+0.038617299 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:57:40 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:57:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/432d12a7100835893126e70dda880efe336b0098f64449abd0f101be49a27c93/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:57:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/432d12a7100835893126e70dda880efe336b0098f64449abd0f101be49a27c93/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:57:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/432d12a7100835893126e70dda880efe336b0098f64449abd0f101be49a27c93/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:57:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/432d12a7100835893126e70dda880efe336b0098f64449abd0f101be49a27c93/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:57:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/432d12a7100835893126e70dda880efe336b0098f64449abd0f101be49a27c93/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:57:40 compute-0 podman[281632]: 2025-10-11 04:57:40.770097422 +0000 UTC m=+0.205034107 container init 6d755fc61014d57904ce9390afac0c5bbbb4269d049aa0d4aa36342bc09c988a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_cray, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:57:40 compute-0 podman[281632]: 2025-10-11 04:57:40.789786391 +0000 UTC m=+0.224723026 container start 6d755fc61014d57904ce9390afac0c5bbbb4269d049aa0d4aa36342bc09c988a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_cray, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:57:40 compute-0 podman[281632]: 2025-10-11 04:57:40.794129308 +0000 UTC m=+0.229066003 container attach 6d755fc61014d57904ce9390afac0c5bbbb4269d049aa0d4aa36342bc09c988a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_cray, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:57:40 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1103: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:57:41 compute-0 trusting_cray[281649]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:57:41 compute-0 trusting_cray[281649]: --> relative data size: 1.0
Oct 11 04:57:41 compute-0 trusting_cray[281649]: --> All data devices are unavailable
Oct 11 04:57:41 compute-0 systemd[1]: libpod-6d755fc61014d57904ce9390afac0c5bbbb4269d049aa0d4aa36342bc09c988a.scope: Deactivated successfully.
Oct 11 04:57:41 compute-0 systemd[1]: libpod-6d755fc61014d57904ce9390afac0c5bbbb4269d049aa0d4aa36342bc09c988a.scope: Consumed 1.120s CPU time.
Oct 11 04:57:41 compute-0 podman[281632]: 2025-10-11 04:57:41.954797908 +0000 UTC m=+1.389734553 container died 6d755fc61014d57904ce9390afac0c5bbbb4269d049aa0d4aa36342bc09c988a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_cray, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 11 04:57:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-432d12a7100835893126e70dda880efe336b0098f64449abd0f101be49a27c93-merged.mount: Deactivated successfully.
Oct 11 04:57:42 compute-0 podman[281632]: 2025-10-11 04:57:42.023669357 +0000 UTC m=+1.458605972 container remove 6d755fc61014d57904ce9390afac0c5bbbb4269d049aa0d4aa36342bc09c988a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_cray, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Oct 11 04:57:42 compute-0 systemd[1]: libpod-conmon-6d755fc61014d57904ce9390afac0c5bbbb4269d049aa0d4aa36342bc09c988a.scope: Deactivated successfully.
Oct 11 04:57:42 compute-0 sudo[281527]: pam_unix(sudo:session): session closed for user root
Oct 11 04:57:42 compute-0 sudo[281693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:57:42 compute-0 sudo[281693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:57:42 compute-0 sudo[281693]: pam_unix(sudo:session): session closed for user root
Oct 11 04:57:42 compute-0 sudo[281718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:57:42 compute-0 sudo[281718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:57:42 compute-0 sudo[281718]: pam_unix(sudo:session): session closed for user root
Oct 11 04:57:42 compute-0 sudo[281743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:57:42 compute-0 sudo[281743]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:57:42 compute-0 sudo[281743]: pam_unix(sudo:session): session closed for user root
Oct 11 04:57:42 compute-0 sudo[281768]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -- lvm list --format json
Oct 11 04:57:42 compute-0 sudo[281768]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:57:42 compute-0 ceph-mon[74243]: pgmap v1103: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:57:42 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1104: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:57:42 compute-0 podman[281835]: 2025-10-11 04:57:42.899477122 +0000 UTC m=+0.067257300 container create 1a044517e2ad7a54dbb88f50b2c6b483a82cbc438badd67d303c4dcda74a252c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 11 04:57:42 compute-0 systemd[1]: Started libpod-conmon-1a044517e2ad7a54dbb88f50b2c6b483a82cbc438badd67d303c4dcda74a252c.scope.
Oct 11 04:57:42 compute-0 podman[281835]: 2025-10-11 04:57:42.871313493 +0000 UTC m=+0.039093721 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:57:42 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:57:43 compute-0 podman[281835]: 2025-10-11 04:57:43.008633749 +0000 UTC m=+0.176413977 container init 1a044517e2ad7a54dbb88f50b2c6b483a82cbc438badd67d303c4dcda74a252c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_faraday, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:57:43 compute-0 podman[281835]: 2025-10-11 04:57:43.01954082 +0000 UTC m=+0.187320998 container start 1a044517e2ad7a54dbb88f50b2c6b483a82cbc438badd67d303c4dcda74a252c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_faraday, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:57:43 compute-0 podman[281835]: 2025-10-11 04:57:43.02397418 +0000 UTC m=+0.191754418 container attach 1a044517e2ad7a54dbb88f50b2c6b483a82cbc438badd67d303c4dcda74a252c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:57:43 compute-0 lucid_faraday[281851]: 167 167
Oct 11 04:57:43 compute-0 systemd[1]: libpod-1a044517e2ad7a54dbb88f50b2c6b483a82cbc438badd67d303c4dcda74a252c.scope: Deactivated successfully.
Oct 11 04:57:43 compute-0 conmon[281851]: conmon 1a044517e2ad7a54dbb8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1a044517e2ad7a54dbb88f50b2c6b483a82cbc438badd67d303c4dcda74a252c.scope/container/memory.events
Oct 11 04:57:43 compute-0 podman[281835]: 2025-10-11 04:57:43.028479632 +0000 UTC m=+0.196259780 container died 1a044517e2ad7a54dbb88f50b2c6b483a82cbc438badd67d303c4dcda74a252c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_faraday, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:57:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-a0afbfe4b7cd16a1661d377705e75895d470ef3b7db97f65cbff5f50e8c7fb75-merged.mount: Deactivated successfully.
Oct 11 04:57:43 compute-0 podman[281835]: 2025-10-11 04:57:43.084469081 +0000 UTC m=+0.252249229 container remove 1a044517e2ad7a54dbb88f50b2c6b483a82cbc438badd67d303c4dcda74a252c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_faraday, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:57:43 compute-0 systemd[1]: libpod-conmon-1a044517e2ad7a54dbb88f50b2c6b483a82cbc438badd67d303c4dcda74a252c.scope: Deactivated successfully.
Oct 11 04:57:43 compute-0 podman[281875]: 2025-10-11 04:57:43.316930057 +0000 UTC m=+0.054149444 container create d0d720e289e11ff7cec13d2f7f88a3062669cacb5b4c80d4f4a9081fba40b00f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_jackson, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:57:43 compute-0 systemd[1]: Started libpod-conmon-d0d720e289e11ff7cec13d2f7f88a3062669cacb5b4c80d4f4a9081fba40b00f.scope.
Oct 11 04:57:43 compute-0 podman[281875]: 2025-10-11 04:57:43.302577881 +0000 UTC m=+0.039797288 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:57:43 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:57:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a615702d8eebc7a888cbbce3aee485c03f362878ad2a90c9007a59f00a91dd88/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:57:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a615702d8eebc7a888cbbce3aee485c03f362878ad2a90c9007a59f00a91dd88/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:57:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a615702d8eebc7a888cbbce3aee485c03f362878ad2a90c9007a59f00a91dd88/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:57:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a615702d8eebc7a888cbbce3aee485c03f362878ad2a90c9007a59f00a91dd88/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:57:43 compute-0 podman[281875]: 2025-10-11 04:57:43.433927349 +0000 UTC m=+0.171146766 container init d0d720e289e11ff7cec13d2f7f88a3062669cacb5b4c80d4f4a9081fba40b00f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:57:43 compute-0 podman[281875]: 2025-10-11 04:57:43.439970419 +0000 UTC m=+0.177189816 container start d0d720e289e11ff7cec13d2f7f88a3062669cacb5b4c80d4f4a9081fba40b00f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:57:43 compute-0 podman[281875]: 2025-10-11 04:57:43.442758158 +0000 UTC m=+0.179977595 container attach d0d720e289e11ff7cec13d2f7f88a3062669cacb5b4c80d4f4a9081fba40b00f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_jackson, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:57:44 compute-0 vigilant_jackson[281892]: {
Oct 11 04:57:44 compute-0 vigilant_jackson[281892]:     "0": [
Oct 11 04:57:44 compute-0 vigilant_jackson[281892]:         {
Oct 11 04:57:44 compute-0 vigilant_jackson[281892]:             "devices": [
Oct 11 04:57:44 compute-0 vigilant_jackson[281892]:                 "/dev/loop3"
Oct 11 04:57:44 compute-0 vigilant_jackson[281892]:             ],
Oct 11 04:57:44 compute-0 vigilant_jackson[281892]:             "lv_name": "ceph_lv0",
Oct 11 04:57:44 compute-0 vigilant_jackson[281892]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:57:44 compute-0 vigilant_jackson[281892]:             "lv_size": "21470642176",
Oct 11 04:57:44 compute-0 vigilant_jackson[281892]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29ed28f5-c2da-4c6f-bb64-dc7391248f4a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:57:44 compute-0 vigilant_jackson[281892]:             "lv_uuid": "SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf",
Oct 11 04:57:44 compute-0 vigilant_jackson[281892]:             "name": "ceph_lv0",
Oct 11 04:57:44 compute-0 vigilant_jackson[281892]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:57:44 compute-0 vigilant_jackson[281892]:             "tags": {
Oct 11 04:57:44 compute-0 vigilant_jackson[281892]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:57:44 compute-0 vigilant_jackson[281892]:                 "ceph.block_uuid": "SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf",
Oct 11 04:57:44 compute-0 vigilant_jackson[281892]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:57:44 compute-0 vigilant_jackson[281892]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:57:44 compute-0 vigilant_jackson[281892]:                 "ceph.cluster_name": "ceph",
Oct 11 04:57:44 compute-0 vigilant_jackson[281892]:                 "ceph.crush_device_class": "",
Oct 11 04:57:44 compute-0 vigilant_jackson[281892]:                 "ceph.encrypted": "0",
Oct 11 04:57:44 compute-0 vigilant_jackson[281892]:                 "ceph.osd_fsid": "29ed28f5-c2da-4c6f-bb64-dc7391248f4a",
Oct 11 04:57:44 compute-0 vigilant_jackson[281892]:                 "ceph.osd_id": "0",
Oct 11 04:57:44 compute-0 vigilant_jackson[281892]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:57:44 compute-0 vigilant_jackson[281892]:                 "ceph.type": "block",
Oct 11 04:57:44 compute-0 vigilant_jackson[281892]:                 "ceph.vdo": "0"
Oct 11 04:57:44 compute-0 vigilant_jackson[281892]:             },
Oct 11 04:57:44 compute-0 vigilant_jackson[281892]:             "type": "block",
Oct 11 04:57:44 compute-0 vigilant_jackson[281892]:             "vg_name": "ceph_vg0"
Oct 11 04:57:44 compute-0 vigilant_jackson[281892]:         }
Oct 11 04:57:44 compute-0 vigilant_jackson[281892]:     ],
Oct 11 04:57:44 compute-0 vigilant_jackson[281892]:     "1": [
Oct 11 04:57:44 compute-0 vigilant_jackson[281892]:         {
Oct 11 04:57:44 compute-0 vigilant_jackson[281892]:             "devices": [
Oct 11 04:57:44 compute-0 vigilant_jackson[281892]:                 "/dev/loop4"
Oct 11 04:57:44 compute-0 vigilant_jackson[281892]:             ],
Oct 11 04:57:44 compute-0 vigilant_jackson[281892]:             "lv_name": "ceph_lv1",
Oct 11 04:57:44 compute-0 vigilant_jackson[281892]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:57:44 compute-0 vigilant_jackson[281892]:             "lv_size": "21470642176",
Oct 11 04:57:44 compute-0 vigilant_jackson[281892]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=235849fc-4683-43e5-9b6a-a0d6f8d1cee8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:57:44 compute-0 vigilant_jackson[281892]:             "lv_uuid": "N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol",
Oct 11 04:57:44 compute-0 vigilant_jackson[281892]:             "name": "ceph_lv1",
Oct 11 04:57:44 compute-0 vigilant_jackson[281892]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:57:44 compute-0 vigilant_jackson[281892]:             "tags": {
Oct 11 04:57:44 compute-0 vigilant_jackson[281892]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:57:44 compute-0 vigilant_jackson[281892]:                 "ceph.block_uuid": "N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol",
Oct 11 04:57:44 compute-0 vigilant_jackson[281892]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:57:44 compute-0 vigilant_jackson[281892]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:57:44 compute-0 vigilant_jackson[281892]:                 "ceph.cluster_name": "ceph",
Oct 11 04:57:44 compute-0 vigilant_jackson[281892]:                 "ceph.crush_device_class": "",
Oct 11 04:57:44 compute-0 vigilant_jackson[281892]:                 "ceph.encrypted": "0",
Oct 11 04:57:44 compute-0 vigilant_jackson[281892]:                 "ceph.osd_fsid": "235849fc-4683-43e5-9b6a-a0d6f8d1cee8",
Oct 11 04:57:44 compute-0 vigilant_jackson[281892]:                 "ceph.osd_id": "1",
Oct 11 04:57:44 compute-0 vigilant_jackson[281892]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:57:44 compute-0 vigilant_jackson[281892]:                 "ceph.type": "block",
Oct 11 04:57:44 compute-0 vigilant_jackson[281892]:                 "ceph.vdo": "0"
Oct 11 04:57:44 compute-0 vigilant_jackson[281892]:             },
Oct 11 04:57:44 compute-0 vigilant_jackson[281892]:             "type": "block",
Oct 11 04:57:44 compute-0 vigilant_jackson[281892]:             "vg_name": "ceph_vg1"
Oct 11 04:57:44 compute-0 vigilant_jackson[281892]:         }
Oct 11 04:57:44 compute-0 vigilant_jackson[281892]:     ],
Oct 11 04:57:44 compute-0 vigilant_jackson[281892]:     "2": [
Oct 11 04:57:44 compute-0 vigilant_jackson[281892]:         {
Oct 11 04:57:44 compute-0 vigilant_jackson[281892]:             "devices": [
Oct 11 04:57:44 compute-0 vigilant_jackson[281892]:                 "/dev/loop5"
Oct 11 04:57:44 compute-0 vigilant_jackson[281892]:             ],
Oct 11 04:57:44 compute-0 vigilant_jackson[281892]:             "lv_name": "ceph_lv2",
Oct 11 04:57:44 compute-0 vigilant_jackson[281892]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:57:44 compute-0 vigilant_jackson[281892]:             "lv_size": "21470642176",
Oct 11 04:57:44 compute-0 vigilant_jackson[281892]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=30486846-2cc7-4787-8e36-0fb42ef328c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:57:44 compute-0 vigilant_jackson[281892]:             "lv_uuid": "HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a",
Oct 11 04:57:44 compute-0 vigilant_jackson[281892]:             "name": "ceph_lv2",
Oct 11 04:57:44 compute-0 vigilant_jackson[281892]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:57:44 compute-0 vigilant_jackson[281892]:             "tags": {
Oct 11 04:57:44 compute-0 vigilant_jackson[281892]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:57:44 compute-0 vigilant_jackson[281892]:                 "ceph.block_uuid": "HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a",
Oct 11 04:57:44 compute-0 vigilant_jackson[281892]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:57:44 compute-0 vigilant_jackson[281892]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:57:44 compute-0 vigilant_jackson[281892]:                 "ceph.cluster_name": "ceph",
Oct 11 04:57:44 compute-0 vigilant_jackson[281892]:                 "ceph.crush_device_class": "",
Oct 11 04:57:44 compute-0 vigilant_jackson[281892]:                 "ceph.encrypted": "0",
Oct 11 04:57:44 compute-0 vigilant_jackson[281892]:                 "ceph.osd_fsid": "30486846-2cc7-4787-8e36-0fb42ef328c5",
Oct 11 04:57:44 compute-0 vigilant_jackson[281892]:                 "ceph.osd_id": "2",
Oct 11 04:57:44 compute-0 vigilant_jackson[281892]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:57:44 compute-0 vigilant_jackson[281892]:                 "ceph.type": "block",
Oct 11 04:57:44 compute-0 vigilant_jackson[281892]:                 "ceph.vdo": "0"
Oct 11 04:57:44 compute-0 vigilant_jackson[281892]:             },
Oct 11 04:57:44 compute-0 vigilant_jackson[281892]:             "type": "block",
Oct 11 04:57:44 compute-0 vigilant_jackson[281892]:             "vg_name": "ceph_vg2"
Oct 11 04:57:44 compute-0 vigilant_jackson[281892]:         }
Oct 11 04:57:44 compute-0 vigilant_jackson[281892]:     ]
Oct 11 04:57:44 compute-0 vigilant_jackson[281892]: }
Oct 11 04:57:44 compute-0 systemd[1]: libpod-d0d720e289e11ff7cec13d2f7f88a3062669cacb5b4c80d4f4a9081fba40b00f.scope: Deactivated successfully.
Oct 11 04:57:44 compute-0 podman[281875]: 2025-10-11 04:57:44.123968336 +0000 UTC m=+0.861187753 container died d0d720e289e11ff7cec13d2f7f88a3062669cacb5b4c80d4f4a9081fba40b00f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_jackson, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:57:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-a615702d8eebc7a888cbbce3aee485c03f362878ad2a90c9007a59f00a91dd88-merged.mount: Deactivated successfully.
Oct 11 04:57:44 compute-0 podman[281875]: 2025-10-11 04:57:44.205814646 +0000 UTC m=+0.943034043 container remove d0d720e289e11ff7cec13d2f7f88a3062669cacb5b4c80d4f4a9081fba40b00f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 11 04:57:44 compute-0 systemd[1]: libpod-conmon-d0d720e289e11ff7cec13d2f7f88a3062669cacb5b4c80d4f4a9081fba40b00f.scope: Deactivated successfully.
Oct 11 04:57:44 compute-0 sudo[281768]: pam_unix(sudo:session): session closed for user root
Oct 11 04:57:44 compute-0 sudo[281913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:57:44 compute-0 sudo[281913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:57:44 compute-0 sudo[281913]: pam_unix(sudo:session): session closed for user root
Oct 11 04:57:44 compute-0 sudo[281938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:57:44 compute-0 sudo[281938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:57:44 compute-0 sudo[281938]: pam_unix(sudo:session): session closed for user root
Oct 11 04:57:44 compute-0 sudo[281963]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:57:44 compute-0 sudo[281963]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:57:44 compute-0 sudo[281963]: pam_unix(sudo:session): session closed for user root
Oct 11 04:57:44 compute-0 ceph-mon[74243]: pgmap v1104: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:57:44 compute-0 sudo[281988]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -- raw list --format json
Oct 11 04:57:44 compute-0 sudo[281988]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:57:44 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1105: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:57:45 compute-0 podman[282052]: 2025-10-11 04:57:45.099959356 +0000 UTC m=+0.072378607 container create 7ee2352b369f3589d1936bc6ad8f4b27ca143f4d6650a2fcd5721940e3265cbb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_hertz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 04:57:45 compute-0 systemd[1]: Started libpod-conmon-7ee2352b369f3589d1936bc6ad8f4b27ca143f4d6650a2fcd5721940e3265cbb.scope.
Oct 11 04:57:45 compute-0 podman[282052]: 2025-10-11 04:57:45.070200308 +0000 UTC m=+0.042619619 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:57:45 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:57:45 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:57:45 compute-0 podman[282052]: 2025-10-11 04:57:45.21179576 +0000 UTC m=+0.184215071 container init 7ee2352b369f3589d1936bc6ad8f4b27ca143f4d6650a2fcd5721940e3265cbb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_hertz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 11 04:57:45 compute-0 podman[282052]: 2025-10-11 04:57:45.223575562 +0000 UTC m=+0.195994813 container start 7ee2352b369f3589d1936bc6ad8f4b27ca143f4d6650a2fcd5721940e3265cbb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_hertz, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Oct 11 04:57:45 compute-0 podman[282052]: 2025-10-11 04:57:45.227603622 +0000 UTC m=+0.200022933 container attach 7ee2352b369f3589d1936bc6ad8f4b27ca143f4d6650a2fcd5721940e3265cbb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_hertz, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:57:45 compute-0 quirky_hertz[282069]: 167 167
Oct 11 04:57:45 compute-0 systemd[1]: libpod-7ee2352b369f3589d1936bc6ad8f4b27ca143f4d6650a2fcd5721940e3265cbb.scope: Deactivated successfully.
Oct 11 04:57:45 compute-0 podman[282052]: 2025-10-11 04:57:45.232992876 +0000 UTC m=+0.205412127 container died 7ee2352b369f3589d1936bc6ad8f4b27ca143f4d6650a2fcd5721940e3265cbb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_hertz, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 11 04:57:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-cc206e003801776ffd1ad7ee755b100881c308ddd26359e42426759600e593f1-merged.mount: Deactivated successfully.
Oct 11 04:57:45 compute-0 podman[282052]: 2025-10-11 04:57:45.287721833 +0000 UTC m=+0.260141084 container remove 7ee2352b369f3589d1936bc6ad8f4b27ca143f4d6650a2fcd5721940e3265cbb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_hertz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef)
Oct 11 04:57:45 compute-0 systemd[1]: libpod-conmon-7ee2352b369f3589d1936bc6ad8f4b27ca143f4d6650a2fcd5721940e3265cbb.scope: Deactivated successfully.
Oct 11 04:57:45 compute-0 podman[282093]: 2025-10-11 04:57:45.506740905 +0000 UTC m=+0.063780913 container create 986c22e6b0972a27e9c4bed373e8a7ad368d8aa8ae239141ad3623eabeac514a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_haslett, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:57:45 compute-0 systemd[1]: Started libpod-conmon-986c22e6b0972a27e9c4bed373e8a7ad368d8aa8ae239141ad3623eabeac514a.scope.
Oct 11 04:57:45 compute-0 podman[282093]: 2025-10-11 04:57:45.481497489 +0000 UTC m=+0.038537577 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:57:45 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:57:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea04f4fc61047d51b0cbb4e07765461f4634b6daef2b6e8ef5a26886fbb2c198/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:57:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea04f4fc61047d51b0cbb4e07765461f4634b6daef2b6e8ef5a26886fbb2c198/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:57:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea04f4fc61047d51b0cbb4e07765461f4634b6daef2b6e8ef5a26886fbb2c198/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:57:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea04f4fc61047d51b0cbb4e07765461f4634b6daef2b6e8ef5a26886fbb2c198/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:57:45 compute-0 podman[282093]: 2025-10-11 04:57:45.61576552 +0000 UTC m=+0.172805588 container init 986c22e6b0972a27e9c4bed373e8a7ad368d8aa8ae239141ad3623eabeac514a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_haslett, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 11 04:57:45 compute-0 podman[282093]: 2025-10-11 04:57:45.631299935 +0000 UTC m=+0.188339933 container start 986c22e6b0972a27e9c4bed373e8a7ad368d8aa8ae239141ad3623eabeac514a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_haslett, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 11 04:57:45 compute-0 podman[282093]: 2025-10-11 04:57:45.634798932 +0000 UTC m=+0.191839010 container attach 986c22e6b0972a27e9c4bed373e8a7ad368d8aa8ae239141ad3623eabeac514a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_haslett, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:57:46 compute-0 ceph-mon[74243]: pgmap v1105: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:57:46 compute-0 youthful_haslett[282109]: {
Oct 11 04:57:46 compute-0 youthful_haslett[282109]:     "235849fc-4683-43e5-9b6a-a0d6f8d1cee8": {
Oct 11 04:57:46 compute-0 youthful_haslett[282109]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:57:46 compute-0 youthful_haslett[282109]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:57:46 compute-0 youthful_haslett[282109]:         "osd_id": 1,
Oct 11 04:57:46 compute-0 youthful_haslett[282109]:         "osd_uuid": "235849fc-4683-43e5-9b6a-a0d6f8d1cee8",
Oct 11 04:57:46 compute-0 youthful_haslett[282109]:         "type": "bluestore"
Oct 11 04:57:46 compute-0 youthful_haslett[282109]:     },
Oct 11 04:57:46 compute-0 youthful_haslett[282109]:     "29ed28f5-c2da-4c6f-bb64-dc7391248f4a": {
Oct 11 04:57:46 compute-0 youthful_haslett[282109]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:57:46 compute-0 youthful_haslett[282109]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:57:46 compute-0 youthful_haslett[282109]:         "osd_id": 0,
Oct 11 04:57:46 compute-0 youthful_haslett[282109]:         "osd_uuid": "29ed28f5-c2da-4c6f-bb64-dc7391248f4a",
Oct 11 04:57:46 compute-0 youthful_haslett[282109]:         "type": "bluestore"
Oct 11 04:57:46 compute-0 youthful_haslett[282109]:     },
Oct 11 04:57:46 compute-0 youthful_haslett[282109]:     "30486846-2cc7-4787-8e36-0fb42ef328c5": {
Oct 11 04:57:46 compute-0 youthful_haslett[282109]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:57:46 compute-0 youthful_haslett[282109]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:57:46 compute-0 youthful_haslett[282109]:         "osd_id": 2,
Oct 11 04:57:46 compute-0 youthful_haslett[282109]:         "osd_uuid": "30486846-2cc7-4787-8e36-0fb42ef328c5",
Oct 11 04:57:46 compute-0 youthful_haslett[282109]:         "type": "bluestore"
Oct 11 04:57:46 compute-0 youthful_haslett[282109]:     }
Oct 11 04:57:46 compute-0 youthful_haslett[282109]: }
Oct 11 04:57:46 compute-0 systemd[1]: libpod-986c22e6b0972a27e9c4bed373e8a7ad368d8aa8ae239141ad3623eabeac514a.scope: Deactivated successfully.
Oct 11 04:57:46 compute-0 systemd[1]: libpod-986c22e6b0972a27e9c4bed373e8a7ad368d8aa8ae239141ad3623eabeac514a.scope: Consumed 1.100s CPU time.
Oct 11 04:57:46 compute-0 podman[282142]: 2025-10-11 04:57:46.777275321 +0000 UTC m=+0.032526827 container died 986c22e6b0972a27e9c4bed373e8a7ad368d8aa8ae239141ad3623eabeac514a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_haslett, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 11 04:57:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-ea04f4fc61047d51b0cbb4e07765461f4634b6daef2b6e8ef5a26886fbb2c198-merged.mount: Deactivated successfully.
Oct 11 04:57:46 compute-0 podman[282142]: 2025-10-11 04:57:46.841767491 +0000 UTC m=+0.097019027 container remove 986c22e6b0972a27e9c4bed373e8a7ad368d8aa8ae239141ad3623eabeac514a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 11 04:57:46 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1106: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:57:46 compute-0 systemd[1]: libpod-conmon-986c22e6b0972a27e9c4bed373e8a7ad368d8aa8ae239141ad3623eabeac514a.scope: Deactivated successfully.
Oct 11 04:57:46 compute-0 sudo[281988]: pam_unix(sudo:session): session closed for user root
Oct 11 04:57:46 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:57:46 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:57:46 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:57:46 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:57:46 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev 31a9d032-3cb5-426b-9081-dbbc4ba43384 does not exist
Oct 11 04:57:46 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev 60a72925-e38f-49c0-9efe-6b833d3ae0a3 does not exist
Oct 11 04:57:47 compute-0 sudo[282157]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:57:47 compute-0 sudo[282157]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:57:47 compute-0 sudo[282157]: pam_unix(sudo:session): session closed for user root
Oct 11 04:57:47 compute-0 sudo[282182]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 04:57:47 compute-0 sudo[282182]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:57:47 compute-0 sudo[282182]: pam_unix(sudo:session): session closed for user root
Oct 11 04:57:47 compute-0 ceph-mon[74243]: pgmap v1106: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:57:47 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:57:47 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:57:48 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1107: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:57:49 compute-0 ceph-mon[74243]: pgmap v1107: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:57:50 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:57:50 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1108: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:57:51 compute-0 ceph-mon[74243]: pgmap v1108: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:57:52 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1109: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:57:53 compute-0 ceph-mon[74243]: pgmap v1109: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:57:54 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1110: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:57:55 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:57:55 compute-0 ceph-mon[74243]: pgmap v1110: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:57:56 compute-0 ceph-mgr[74542]: [balancer INFO root] Optimize plan auto_2025-10-11_04:57:56
Oct 11 04:57:56 compute-0 ceph-mgr[74542]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:57:56 compute-0 ceph-mgr[74542]: [balancer INFO root] do_upmap
Oct 11 04:57:56 compute-0 ceph-mgr[74542]: [balancer INFO root] pools ['vms', '.rgw.root', 'default.rgw.control', 'default.rgw.meta', '.mgr', 'volumes', 'cephfs.cephfs.meta', 'backups', 'cephfs.cephfs.data', 'default.rgw.log', 'images']
Oct 11 04:57:56 compute-0 ceph-mgr[74542]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:57:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:57:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:57:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:57:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:57:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:57:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:57:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:57:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:57:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:57:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:57:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:57:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:57:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:57:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:57:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:57:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:57:56 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1111: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s
Oct 11 04:57:57 compute-0 ceph-mon[74243]: pgmap v1111: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s
Oct 11 04:57:57 compute-0 ceph-mon[74243]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Oct 11 04:57:57 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:57:57.988952) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 04:57:57 compute-0 ceph-mon[74243]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Oct 11 04:57:57 compute-0 ceph-mon[74243]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760158677989010, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 2147, "num_deletes": 251, "total_data_size": 3406277, "memory_usage": 3452128, "flush_reason": "Manual Compaction"}
Oct 11 04:57:57 compute-0 ceph-mon[74243]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Oct 11 04:57:58 compute-0 ceph-mon[74243]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760158678048870, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 3318089, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 21154, "largest_seqno": 23300, "table_properties": {"data_size": 3308288, "index_size": 6103, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2629, "raw_key_size": 21284, "raw_average_key_size": 20, "raw_value_size": 3288194, "raw_average_value_size": 3186, "num_data_blocks": 275, "num_entries": 1032, "num_filter_entries": 1032, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760158464, "oldest_key_time": 1760158464, "file_creation_time": 1760158677, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7a520806-b9ee-4391-a2e1-17ca2b78e946", "db_session_id": "LQX577T3ABHDCRGGY4EM", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:57:58 compute-0 ceph-mon[74243]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 60306 microseconds, and 13293 cpu microseconds.
Oct 11 04:57:58 compute-0 ceph-mon[74243]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 04:57:58 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:57:58.049255) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 3318089 bytes OK
Oct 11 04:57:58 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:57:58.049496) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Oct 11 04:57:58 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:57:58.051735) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Oct 11 04:57:58 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:57:58.051761) EVENT_LOG_v1 {"time_micros": 1760158678051752, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 04:57:58 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:57:58.051784) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 04:57:58 compute-0 ceph-mon[74243]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 3397021, prev total WAL file size 3397021, number of live WAL files 2.
Oct 11 04:57:58 compute-0 ceph-mon[74243]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:57:58 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:57:58.054720) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Oct 11 04:57:58 compute-0 ceph-mon[74243]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 04:57:58 compute-0 ceph-mon[74243]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(3240KB)], [50(7436KB)]
Oct 11 04:57:58 compute-0 ceph-mon[74243]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760158678054765, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 10933071, "oldest_snapshot_seqno": -1}
Oct 11 04:57:58 compute-0 ceph-mon[74243]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 4907 keys, 9189676 bytes, temperature: kUnknown
Oct 11 04:57:58 compute-0 ceph-mon[74243]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760158678111648, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 9189676, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9153871, "index_size": 22447, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12293, "raw_key_size": 120568, "raw_average_key_size": 24, "raw_value_size": 9062264, "raw_average_value_size": 1846, "num_data_blocks": 940, "num_entries": 4907, "num_filter_entries": 4907, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760156706, "oldest_key_time": 0, "file_creation_time": 1760158678, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7a520806-b9ee-4391-a2e1-17ca2b78e946", "db_session_id": "LQX577T3ABHDCRGGY4EM", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:57:58 compute-0 ceph-mon[74243]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 04:57:58 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:57:58.111999) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 9189676 bytes
Oct 11 04:57:58 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:57:58.113428) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 191.8 rd, 161.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 7.3 +0.0 blob) out(8.8 +0.0 blob), read-write-amplify(6.1) write-amplify(2.8) OK, records in: 5421, records dropped: 514 output_compression: NoCompression
Oct 11 04:57:58 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:57:58.113458) EVENT_LOG_v1 {"time_micros": 1760158678113444, "job": 26, "event": "compaction_finished", "compaction_time_micros": 57010, "compaction_time_cpu_micros": 38969, "output_level": 6, "num_output_files": 1, "total_output_size": 9189676, "num_input_records": 5421, "num_output_records": 4907, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 04:57:58 compute-0 ceph-mon[74243]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:57:58 compute-0 ceph-mon[74243]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760158678114976, "job": 26, "event": "table_file_deletion", "file_number": 52}
Oct 11 04:57:58 compute-0 ceph-mon[74243]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:57:58 compute-0 ceph-mon[74243]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760158678117998, "job": 26, "event": "table_file_deletion", "file_number": 50}
Oct 11 04:57:58 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:57:58.054610) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:57:58 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:57:58.118088) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:57:58 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:57:58.118094) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:57:58 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:57:58.118097) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:57:58 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:57:58.118099) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:57:58 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:57:58.118101) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:57:58 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1112: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 0 B/s wr, 20 op/s
Oct 11 04:57:59 compute-0 ceph-mon[74243]: pgmap v1112: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 0 B/s wr, 20 op/s
Oct 11 04:58:00 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:58:00 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1113: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 0 B/s wr, 20 op/s
Oct 11 04:58:02 compute-0 ceph-mon[74243]: pgmap v1113: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 0 B/s wr, 20 op/s
Oct 11 04:58:02 compute-0 podman[282209]: 2025-10-11 04:58:02.471289444 +0000 UTC m=+0.105538488 container health_status 4f5a84ad32cb899a70b226fbd9c8f419e9440a5ac99b188805a8bf8e9253a2d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:58:02 compute-0 podman[282210]: 2025-10-11 04:58:02.471273904 +0000 UTC m=+0.099220082 container health_status 981a12d55b51d26e636fa4bc8b08d383168dc6afe664f12473554b2b0c589967 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, container_name=multipathd)
Oct 11 04:58:02 compute-0 podman[282208]: 2025-10-11 04:58:02.484774919 +0000 UTC m=+0.121764041 container health_status 086abfbe8dbe9e4742c5d0e8540a7653c1c0a5c00ecda3b6e948040a718938cb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 04:58:02 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1114: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 0 B/s wr, 57 op/s
Oct 11 04:58:03 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:58:03 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/254183207' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:58:03 compute-0 ceph-mon[74243]: from='client.? 192.168.122.10:0/254183207' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:58:03 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:58:03 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/254183207' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:58:04 compute-0 ceph-mon[74243]: pgmap v1114: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 0 B/s wr, 57 op/s
Oct 11 04:58:04 compute-0 ceph-mon[74243]: from='client.? 192.168.122.10:0/254183207' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:58:04 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1115: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 11 04:58:05 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:58:06 compute-0 ceph-mon[74243]: pgmap v1115: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 11 04:58:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:58:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:58:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:58:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:58:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 11 04:58:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:58:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 11 04:58:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:58:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:58:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:58:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 11 04:58:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:58:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 11 04:58:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:58:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:58:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:58:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:58:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:58:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:58:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:58:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:58:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:58:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 04:58:06 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1116: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 11 04:58:08 compute-0 ceph-mon[74243]: pgmap v1116: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 11 04:58:08 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1117: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 0 B/s wr, 55 op/s
Oct 11 04:58:10 compute-0 ceph-mon[74243]: pgmap v1117: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 0 B/s wr, 55 op/s
Oct 11 04:58:10 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:58:10 compute-0 podman[282270]: 2025-10-11 04:58:10.431845127 +0000 UTC m=+0.083000998 container health_status 7d738271425699243ade5f883e5c9759d51b96fd0ce562173990a66d2fbaffdf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS)
Oct 11 04:58:10 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1118: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 0 B/s wr, 38 op/s
Oct 11 04:58:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:58:11.025 161813 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:58:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:58:11.026 161813 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:58:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:58:11.026 161813 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:58:12 compute-0 ceph-mon[74243]: pgmap v1118: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 0 B/s wr, 38 op/s
Oct 11 04:58:12 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1119: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 0 B/s wr, 38 op/s
Oct 11 04:58:14 compute-0 ceph-mon[74243]: pgmap v1119: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 0 B/s wr, 38 op/s
Oct 11 04:58:14 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1120: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s
Oct 11 04:58:15 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:58:16 compute-0 ceph-mon[74243]: pgmap v1120: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s
Oct 11 04:58:16 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1121: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:58:18 compute-0 ceph-mon[74243]: pgmap v1121: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:58:18 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1122: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:58:19 compute-0 nova_compute[259400]: 2025-10-11 04:58:19.196 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:58:19 compute-0 nova_compute[259400]: 2025-10-11 04:58:19.196 2 DEBUG nova.compute.manager [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 04:58:20 compute-0 ceph-mon[74243]: pgmap v1122: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:58:20 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:58:20 compute-0 nova_compute[259400]: 2025-10-11 04:58:20.196 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:58:20 compute-0 nova_compute[259400]: 2025-10-11 04:58:20.197 2 DEBUG nova.compute.manager [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 04:58:20 compute-0 nova_compute[259400]: 2025-10-11 04:58:20.197 2 DEBUG nova.compute.manager [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 11 04:58:20 compute-0 nova_compute[259400]: 2025-10-11 04:58:20.243 2 DEBUG nova.compute.manager [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 11 04:58:20 compute-0 nova_compute[259400]: 2025-10-11 04:58:20.245 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:58:20 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1123: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:58:21 compute-0 nova_compute[259400]: 2025-10-11 04:58:21.196 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:58:22 compute-0 ceph-mon[74243]: pgmap v1123: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:58:22 compute-0 nova_compute[259400]: 2025-10-11 04:58:22.198 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:58:22 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1124: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:58:23 compute-0 nova_compute[259400]: 2025-10-11 04:58:23.191 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:58:23 compute-0 nova_compute[259400]: 2025-10-11 04:58:23.211 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:58:23 compute-0 nova_compute[259400]: 2025-10-11 04:58:23.212 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:58:23 compute-0 nova_compute[259400]: 2025-10-11 04:58:23.245 2 DEBUG oslo_concurrency.lockutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:58:23 compute-0 nova_compute[259400]: 2025-10-11 04:58:23.245 2 DEBUG oslo_concurrency.lockutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:58:23 compute-0 nova_compute[259400]: 2025-10-11 04:58:23.245 2 DEBUG oslo_concurrency.lockutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:58:23 compute-0 nova_compute[259400]: 2025-10-11 04:58:23.246 2 DEBUG nova.compute.resource_tracker [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 04:58:23 compute-0 nova_compute[259400]: 2025-10-11 04:58:23.246 2 DEBUG oslo_concurrency.processutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:58:23 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:58:23 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/613494999' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:58:23 compute-0 nova_compute[259400]: 2025-10-11 04:58:23.685 2 DEBUG oslo_concurrency.processutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:58:23 compute-0 nova_compute[259400]: 2025-10-11 04:58:23.953 2 WARNING nova.virt.libvirt.driver [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 04:58:23 compute-0 nova_compute[259400]: 2025-10-11 04:58:23.955 2 DEBUG nova.compute.resource_tracker [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5022MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 04:58:23 compute-0 nova_compute[259400]: 2025-10-11 04:58:23.956 2 DEBUG oslo_concurrency.lockutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:58:23 compute-0 nova_compute[259400]: 2025-10-11 04:58:23.957 2 DEBUG oslo_concurrency.lockutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:58:24 compute-0 ceph-mon[74243]: pgmap v1124: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:58:24 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/613494999' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:58:24 compute-0 nova_compute[259400]: 2025-10-11 04:58:24.245 2 DEBUG nova.compute.resource_tracker [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 04:58:24 compute-0 nova_compute[259400]: 2025-10-11 04:58:24.245 2 DEBUG nova.compute.resource_tracker [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 04:58:24 compute-0 nova_compute[259400]: 2025-10-11 04:58:24.266 2 DEBUG oslo_concurrency.processutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:58:24 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:58:24 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3928014597' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:58:24 compute-0 nova_compute[259400]: 2025-10-11 04:58:24.711 2 DEBUG oslo_concurrency.processutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:58:24 compute-0 nova_compute[259400]: 2025-10-11 04:58:24.720 2 DEBUG nova.compute.provider_tree [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Inventory has not changed in ProviderTree for provider: 1f05a244-23b6-4149-9b5a-a525e5860d18 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:58:24 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1125: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:58:24 compute-0 nova_compute[259400]: 2025-10-11 04:58:24.903 2 DEBUG nova.scheduler.client.report [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Inventory has not changed for provider 1f05a244-23b6-4149-9b5a-a525e5860d18 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:58:24 compute-0 nova_compute[259400]: 2025-10-11 04:58:24.906 2 DEBUG nova.compute.resource_tracker [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 04:58:24 compute-0 nova_compute[259400]: 2025-10-11 04:58:24.907 2 DEBUG oslo_concurrency.lockutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.950s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:58:25 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/3928014597' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:58:25 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:58:26 compute-0 ceph-mon[74243]: pgmap v1125: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:58:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:58:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:58:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:58:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:58:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:58:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:58:26 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1126: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:58:26 compute-0 nova_compute[259400]: 2025-10-11 04:58:26.892 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:58:26 compute-0 nova_compute[259400]: 2025-10-11 04:58:26.893 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:58:28 compute-0 ceph-mon[74243]: pgmap v1126: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:58:28 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1127: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:58:30 compute-0 ceph-mon[74243]: pgmap v1127: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:58:30 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:58:30 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1128: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:58:31 compute-0 nova_compute[259400]: 2025-10-11 04:58:31.106 2 DEBUG oslo_concurrency.processutils [None req-757ba4e3-9a3b-4ddf-91bd-402cf4696f4e c74c5010644948229e4e1212c770ef03 053a9da497c741e5a7b9900f0a35ff68 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:58:31 compute-0 nova_compute[259400]: 2025-10-11 04:58:31.140 2 DEBUG oslo_concurrency.processutils [None req-757ba4e3-9a3b-4ddf-91bd-402cf4696f4e c74c5010644948229e4e1212c770ef03 053a9da497c741e5a7b9900f0a35ff68 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.033s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:58:32 compute-0 ceph-mon[74243]: pgmap v1128: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:58:32 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1129: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:58:33 compute-0 podman[282336]: 2025-10-11 04:58:33.468250012 +0000 UTC m=+0.100960905 container health_status 4f5a84ad32cb899a70b226fbd9c8f419e9440a5ac99b188805a8bf8e9253a2d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible)
Oct 11 04:58:33 compute-0 podman[282337]: 2025-10-11 04:58:33.472866617 +0000 UTC m=+0.100381901 container health_status 981a12d55b51d26e636fa4bc8b08d383168dc6afe664f12473554b2b0c589967 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 11 04:58:33 compute-0 podman[282335]: 2025-10-11 04:58:33.485149471 +0000 UTC m=+0.133532283 container health_status 086abfbe8dbe9e4742c5d0e8540a7653c1c0a5c00ecda3b6e948040a718938cb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 11 04:58:34 compute-0 ceph-mon[74243]: pgmap v1129: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:58:34 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1130: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:58:35 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:58:36 compute-0 ceph-mon[74243]: pgmap v1130: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:58:36 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1131: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:58:38 compute-0 ceph-mon[74243]: pgmap v1131: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:58:38 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1132: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:58:38 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:58:38.991 161813 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:88:88', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '96:43:b2:79:d5:95'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:58:38 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:58:38.992 161813 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 11 04:58:40 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:58:40 compute-0 ceph-mon[74243]: pgmap v1132: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:58:40 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1133: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:58:41 compute-0 podman[282401]: 2025-10-11 04:58:41.441736357 +0000 UTC m=+0.089248825 container health_status 7d738271425699243ade5f883e5c9759d51b96fd0ce562173990a66d2fbaffdf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, io.buildah.version=1.41.3)
Oct 11 04:58:42 compute-0 ceph-mon[74243]: pgmap v1133: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:58:42 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1134: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:58:44 compute-0 ceph-mon[74243]: pgmap v1134: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:58:44 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1135: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:58:44 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:58:44.994 161813 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=2ff6420e-86e1-487c-bef9-adac80b75ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:58:45 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:58:46 compute-0 ceph-mon[74243]: pgmap v1135: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:58:46 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1136: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:58:47 compute-0 sudo[282422]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:58:47 compute-0 sudo[282422]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:58:47 compute-0 sudo[282422]: pam_unix(sudo:session): session closed for user root
Oct 11 04:58:47 compute-0 sudo[282447]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:58:47 compute-0 sudo[282447]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:58:47 compute-0 sudo[282447]: pam_unix(sudo:session): session closed for user root
Oct 11 04:58:47 compute-0 sudo[282472]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:58:47 compute-0 sudo[282472]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:58:47 compute-0 sudo[282472]: pam_unix(sudo:session): session closed for user root
Oct 11 04:58:47 compute-0 sudo[282497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 04:58:47 compute-0 sudo[282497]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:58:48 compute-0 sudo[282497]: pam_unix(sudo:session): session closed for user root
Oct 11 04:58:48 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 11 04:58:48 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 11 04:58:48 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:58:48 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:58:48 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:58:48 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:58:48 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:58:48 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:58:48 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev c8d62bb6-cd06-4ce4-adc3-9a235b56acf7 does not exist
Oct 11 04:58:48 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev bdda904c-211c-47ff-87b7-46c0c7d10db9 does not exist
Oct 11 04:58:48 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev 1b1360c2-b18a-4a14-814d-e681e90e7b9a does not exist
Oct 11 04:58:48 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:58:48 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:58:48 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:58:48 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:58:48 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:58:48 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:58:48 compute-0 sudo[282553]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:58:48 compute-0 sudo[282553]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:58:48 compute-0 sudo[282553]: pam_unix(sudo:session): session closed for user root
Oct 11 04:58:48 compute-0 ceph-mon[74243]: pgmap v1136: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:58:48 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 11 04:58:48 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:58:48 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:58:48 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:58:48 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:58:48 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:58:48 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:58:48 compute-0 sudo[282578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:58:48 compute-0 sudo[282578]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:58:48 compute-0 sudo[282578]: pam_unix(sudo:session): session closed for user root
Oct 11 04:58:48 compute-0 sudo[282603]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:58:48 compute-0 sudo[282603]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:58:48 compute-0 sudo[282603]: pam_unix(sudo:session): session closed for user root
Oct 11 04:58:48 compute-0 sudo[282628]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 04:58:48 compute-0 sudo[282628]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:58:48 compute-0 podman[282693]: 2025-10-11 04:58:48.766784528 +0000 UTC m=+0.055467737 container create d3f257aa2d7145c9b5360213738472143924c5a052ed93e9a3d7a9489c285d69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_brattain, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 11 04:58:48 compute-0 systemd[1]: Started libpod-conmon-d3f257aa2d7145c9b5360213738472143924c5a052ed93e9a3d7a9489c285d69.scope.
Oct 11 04:58:48 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:58:48 compute-0 podman[282693]: 2025-10-11 04:58:48.749016997 +0000 UTC m=+0.037700186 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:58:48 compute-0 podman[282693]: 2025-10-11 04:58:48.848811512 +0000 UTC m=+0.137494731 container init d3f257aa2d7145c9b5360213738472143924c5a052ed93e9a3d7a9489c285d69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_brattain, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 11 04:58:48 compute-0 podman[282693]: 2025-10-11 04:58:48.85678625 +0000 UTC m=+0.145469429 container start d3f257aa2d7145c9b5360213738472143924c5a052ed93e9a3d7a9489c285d69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_brattain, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:58:48 compute-0 podman[282693]: 2025-10-11 04:58:48.860163674 +0000 UTC m=+0.148846863 container attach d3f257aa2d7145c9b5360213738472143924c5a052ed93e9a3d7a9489c285d69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:58:48 compute-0 nifty_brattain[282710]: 167 167
Oct 11 04:58:48 compute-0 systemd[1]: libpod-d3f257aa2d7145c9b5360213738472143924c5a052ed93e9a3d7a9489c285d69.scope: Deactivated successfully.
Oct 11 04:58:48 compute-0 podman[282693]: 2025-10-11 04:58:48.866305966 +0000 UTC m=+0.154989145 container died d3f257aa2d7145c9b5360213738472143924c5a052ed93e9a3d7a9489c285d69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_brattain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:58:48 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1137: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:58:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-00a50666a0c73dd5a99e2db76696aa26614a9198a357f21d292e9a5c5ccfc274-merged.mount: Deactivated successfully.
Oct 11 04:58:48 compute-0 podman[282693]: 2025-10-11 04:58:48.926661533 +0000 UTC m=+0.215344712 container remove d3f257aa2d7145c9b5360213738472143924c5a052ed93e9a3d7a9489c285d69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:58:48 compute-0 systemd[1]: libpod-conmon-d3f257aa2d7145c9b5360213738472143924c5a052ed93e9a3d7a9489c285d69.scope: Deactivated successfully.
Oct 11 04:58:49 compute-0 podman[282734]: 2025-10-11 04:58:49.127484195 +0000 UTC m=+0.054257617 container create f909ac3858efa0be7daefa625f57a6e2c6ca7617a12f3a8b76f164234e276c2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_thompson, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 11 04:58:49 compute-0 systemd[1]: Started libpod-conmon-f909ac3858efa0be7daefa625f57a6e2c6ca7617a12f3a8b76f164234e276c2d.scope.
Oct 11 04:58:49 compute-0 podman[282734]: 2025-10-11 04:58:49.099713416 +0000 UTC m=+0.026486888 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:58:49 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:58:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ab7218095ff35d5a33b356e167e1fcee3ada225b7bcbe4a31f1d26ef024b3d3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:58:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ab7218095ff35d5a33b356e167e1fcee3ada225b7bcbe4a31f1d26ef024b3d3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:58:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ab7218095ff35d5a33b356e167e1fcee3ada225b7bcbe4a31f1d26ef024b3d3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:58:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ab7218095ff35d5a33b356e167e1fcee3ada225b7bcbe4a31f1d26ef024b3d3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:58:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ab7218095ff35d5a33b356e167e1fcee3ada225b7bcbe4a31f1d26ef024b3d3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:58:49 compute-0 podman[282734]: 2025-10-11 04:58:49.23372486 +0000 UTC m=+0.160498302 container init f909ac3858efa0be7daefa625f57a6e2c6ca7617a12f3a8b76f164234e276c2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_thompson, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:58:49 compute-0 podman[282734]: 2025-10-11 04:58:49.246124848 +0000 UTC m=+0.172898280 container start f909ac3858efa0be7daefa625f57a6e2c6ca7617a12f3a8b76f164234e276c2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_thompson, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Oct 11 04:58:49 compute-0 podman[282734]: 2025-10-11 04:58:49.250826364 +0000 UTC m=+0.177599796 container attach f909ac3858efa0be7daefa625f57a6e2c6ca7617a12f3a8b76f164234e276c2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_thompson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 11 04:58:50 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:58:50 compute-0 ceph-mon[74243]: pgmap v1137: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:58:50 compute-0 cranky_thompson[282750]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:58:50 compute-0 cranky_thompson[282750]: --> relative data size: 1.0
Oct 11 04:58:50 compute-0 cranky_thompson[282750]: --> All data devices are unavailable
Oct 11 04:58:50 compute-0 systemd[1]: libpod-f909ac3858efa0be7daefa625f57a6e2c6ca7617a12f3a8b76f164234e276c2d.scope: Deactivated successfully.
Oct 11 04:58:50 compute-0 podman[282734]: 2025-10-11 04:58:50.400515392 +0000 UTC m=+1.327288814 container died f909ac3858efa0be7daefa625f57a6e2c6ca7617a12f3a8b76f164234e276c2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_thompson, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 11 04:58:50 compute-0 systemd[1]: libpod-f909ac3858efa0be7daefa625f57a6e2c6ca7617a12f3a8b76f164234e276c2d.scope: Consumed 1.124s CPU time.
Oct 11 04:58:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-9ab7218095ff35d5a33b356e167e1fcee3ada225b7bcbe4a31f1d26ef024b3d3-merged.mount: Deactivated successfully.
Oct 11 04:58:50 compute-0 podman[282734]: 2025-10-11 04:58:50.485162862 +0000 UTC m=+1.411936294 container remove f909ac3858efa0be7daefa625f57a6e2c6ca7617a12f3a8b76f164234e276c2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_thompson, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 11 04:58:50 compute-0 systemd[1]: libpod-conmon-f909ac3858efa0be7daefa625f57a6e2c6ca7617a12f3a8b76f164234e276c2d.scope: Deactivated successfully.
Oct 11 04:58:50 compute-0 sudo[282628]: pam_unix(sudo:session): session closed for user root
Oct 11 04:58:50 compute-0 sudo[282791]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:58:50 compute-0 sudo[282791]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:58:50 compute-0 sudo[282791]: pam_unix(sudo:session): session closed for user root
Oct 11 04:58:50 compute-0 sudo[282816]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:58:50 compute-0 sudo[282816]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:58:50 compute-0 sudo[282816]: pam_unix(sudo:session): session closed for user root
Oct 11 04:58:50 compute-0 sudo[282841]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:58:50 compute-0 sudo[282841]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:58:50 compute-0 sudo[282841]: pam_unix(sudo:session): session closed for user root
Oct 11 04:58:50 compute-0 sudo[282866]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -- lvm list --format json
Oct 11 04:58:50 compute-0 sudo[282866]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:58:50 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1138: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:58:51 compute-0 podman[282931]: 2025-10-11 04:58:51.314244428 +0000 UTC m=+0.062305317 container create 4bc35aabdf2bc6923b11a06574f25bb32a81bbffa8f0174946525928b5cb7b94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_lovelace, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:58:51 compute-0 systemd[1]: Started libpod-conmon-4bc35aabdf2bc6923b11a06574f25bb32a81bbffa8f0174946525928b5cb7b94.scope.
Oct 11 04:58:51 compute-0 podman[282931]: 2025-10-11 04:58:51.291280358 +0000 UTC m=+0.039341317 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:58:51 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:58:51 compute-0 podman[282931]: 2025-10-11 04:58:51.417752895 +0000 UTC m=+0.165813864 container init 4bc35aabdf2bc6923b11a06574f25bb32a81bbffa8f0174946525928b5cb7b94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_lovelace, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Oct 11 04:58:51 compute-0 podman[282931]: 2025-10-11 04:58:51.42883514 +0000 UTC m=+0.176896059 container start 4bc35aabdf2bc6923b11a06574f25bb32a81bbffa8f0174946525928b5cb7b94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_lovelace, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 11 04:58:51 compute-0 podman[282931]: 2025-10-11 04:58:51.433395273 +0000 UTC m=+0.181456262 container attach 4bc35aabdf2bc6923b11a06574f25bb32a81bbffa8f0174946525928b5cb7b94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_lovelace, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 11 04:58:51 compute-0 wizardly_lovelace[282947]: 167 167
Oct 11 04:58:51 compute-0 systemd[1]: libpod-4bc35aabdf2bc6923b11a06574f25bb32a81bbffa8f0174946525928b5cb7b94.scope: Deactivated successfully.
Oct 11 04:58:51 compute-0 podman[282931]: 2025-10-11 04:58:51.436669004 +0000 UTC m=+0.184729923 container died 4bc35aabdf2bc6923b11a06574f25bb32a81bbffa8f0174946525928b5cb7b94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_lovelace, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:58:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-f9f5a15d10d1ea4bc7f76a5e25a31b1c1d44d5b52aff1a6bd74ab9b4a7c370e7-merged.mount: Deactivated successfully.
Oct 11 04:58:51 compute-0 podman[282931]: 2025-10-11 04:58:51.486041309 +0000 UTC m=+0.234102228 container remove 4bc35aabdf2bc6923b11a06574f25bb32a81bbffa8f0174946525928b5cb7b94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_lovelace, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True)
Oct 11 04:58:51 compute-0 systemd[1]: libpod-conmon-4bc35aabdf2bc6923b11a06574f25bb32a81bbffa8f0174946525928b5cb7b94.scope: Deactivated successfully.
Oct 11 04:58:51 compute-0 podman[282969]: 2025-10-11 04:58:51.734302707 +0000 UTC m=+0.069332360 container create c11c0dd2b736bbf1244fd657996b181a1910a7569e506f617d860bc11774b492 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_villani, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:58:51 compute-0 systemd[1]: Started libpod-conmon-c11c0dd2b736bbf1244fd657996b181a1910a7569e506f617d860bc11774b492.scope.
Oct 11 04:58:51 compute-0 podman[282969]: 2025-10-11 04:58:51.705952694 +0000 UTC m=+0.040982597 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:58:51 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:58:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a2665bd99457438816064ca0619e75f7a4c2d6bcdc6987df2bfe1e552c126e2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:58:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a2665bd99457438816064ca0619e75f7a4c2d6bcdc6987df2bfe1e552c126e2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:58:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a2665bd99457438816064ca0619e75f7a4c2d6bcdc6987df2bfe1e552c126e2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:58:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a2665bd99457438816064ca0619e75f7a4c2d6bcdc6987df2bfe1e552c126e2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:58:51 compute-0 podman[282969]: 2025-10-11 04:58:51.831046677 +0000 UTC m=+0.166076380 container init c11c0dd2b736bbf1244fd657996b181a1910a7569e506f617d860bc11774b492 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_villani, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 11 04:58:51 compute-0 podman[282969]: 2025-10-11 04:58:51.846549822 +0000 UTC m=+0.181579485 container start c11c0dd2b736bbf1244fd657996b181a1910a7569e506f617d860bc11774b492 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_villani, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:58:51 compute-0 podman[282969]: 2025-10-11 04:58:51.850392037 +0000 UTC m=+0.185421700 container attach c11c0dd2b736bbf1244fd657996b181a1910a7569e506f617d860bc11774b492 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_villani, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:58:52 compute-0 ceph-mon[74243]: pgmap v1138: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:58:52 compute-0 wonderful_villani[282985]: {
Oct 11 04:58:52 compute-0 wonderful_villani[282985]:     "0": [
Oct 11 04:58:52 compute-0 wonderful_villani[282985]:         {
Oct 11 04:58:52 compute-0 wonderful_villani[282985]:             "devices": [
Oct 11 04:58:52 compute-0 wonderful_villani[282985]:                 "/dev/loop3"
Oct 11 04:58:52 compute-0 wonderful_villani[282985]:             ],
Oct 11 04:58:52 compute-0 wonderful_villani[282985]:             "lv_name": "ceph_lv0",
Oct 11 04:58:52 compute-0 wonderful_villani[282985]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:58:52 compute-0 wonderful_villani[282985]:             "lv_size": "21470642176",
Oct 11 04:58:52 compute-0 wonderful_villani[282985]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29ed28f5-c2da-4c6f-bb64-dc7391248f4a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:58:52 compute-0 wonderful_villani[282985]:             "lv_uuid": "SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf",
Oct 11 04:58:52 compute-0 wonderful_villani[282985]:             "name": "ceph_lv0",
Oct 11 04:58:52 compute-0 wonderful_villani[282985]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:58:52 compute-0 wonderful_villani[282985]:             "tags": {
Oct 11 04:58:52 compute-0 wonderful_villani[282985]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:58:52 compute-0 wonderful_villani[282985]:                 "ceph.block_uuid": "SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf",
Oct 11 04:58:52 compute-0 wonderful_villani[282985]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:58:52 compute-0 wonderful_villani[282985]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:58:52 compute-0 wonderful_villani[282985]:                 "ceph.cluster_name": "ceph",
Oct 11 04:58:52 compute-0 wonderful_villani[282985]:                 "ceph.crush_device_class": "",
Oct 11 04:58:52 compute-0 wonderful_villani[282985]:                 "ceph.encrypted": "0",
Oct 11 04:58:52 compute-0 wonderful_villani[282985]:                 "ceph.osd_fsid": "29ed28f5-c2da-4c6f-bb64-dc7391248f4a",
Oct 11 04:58:52 compute-0 wonderful_villani[282985]:                 "ceph.osd_id": "0",
Oct 11 04:58:52 compute-0 wonderful_villani[282985]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:58:52 compute-0 wonderful_villani[282985]:                 "ceph.type": "block",
Oct 11 04:58:52 compute-0 wonderful_villani[282985]:                 "ceph.vdo": "0"
Oct 11 04:58:52 compute-0 wonderful_villani[282985]:             },
Oct 11 04:58:52 compute-0 wonderful_villani[282985]:             "type": "block",
Oct 11 04:58:52 compute-0 wonderful_villani[282985]:             "vg_name": "ceph_vg0"
Oct 11 04:58:52 compute-0 wonderful_villani[282985]:         }
Oct 11 04:58:52 compute-0 wonderful_villani[282985]:     ],
Oct 11 04:58:52 compute-0 wonderful_villani[282985]:     "1": [
Oct 11 04:58:52 compute-0 wonderful_villani[282985]:         {
Oct 11 04:58:52 compute-0 wonderful_villani[282985]:             "devices": [
Oct 11 04:58:52 compute-0 wonderful_villani[282985]:                 "/dev/loop4"
Oct 11 04:58:52 compute-0 wonderful_villani[282985]:             ],
Oct 11 04:58:52 compute-0 wonderful_villani[282985]:             "lv_name": "ceph_lv1",
Oct 11 04:58:52 compute-0 wonderful_villani[282985]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:58:52 compute-0 wonderful_villani[282985]:             "lv_size": "21470642176",
Oct 11 04:58:52 compute-0 wonderful_villani[282985]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=235849fc-4683-43e5-9b6a-a0d6f8d1cee8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:58:52 compute-0 wonderful_villani[282985]:             "lv_uuid": "N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol",
Oct 11 04:58:52 compute-0 wonderful_villani[282985]:             "name": "ceph_lv1",
Oct 11 04:58:52 compute-0 wonderful_villani[282985]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:58:52 compute-0 wonderful_villani[282985]:             "tags": {
Oct 11 04:58:52 compute-0 wonderful_villani[282985]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:58:52 compute-0 wonderful_villani[282985]:                 "ceph.block_uuid": "N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol",
Oct 11 04:58:52 compute-0 wonderful_villani[282985]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:58:52 compute-0 wonderful_villani[282985]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:58:52 compute-0 wonderful_villani[282985]:                 "ceph.cluster_name": "ceph",
Oct 11 04:58:52 compute-0 wonderful_villani[282985]:                 "ceph.crush_device_class": "",
Oct 11 04:58:52 compute-0 wonderful_villani[282985]:                 "ceph.encrypted": "0",
Oct 11 04:58:52 compute-0 wonderful_villani[282985]:                 "ceph.osd_fsid": "235849fc-4683-43e5-9b6a-a0d6f8d1cee8",
Oct 11 04:58:52 compute-0 wonderful_villani[282985]:                 "ceph.osd_id": "1",
Oct 11 04:58:52 compute-0 wonderful_villani[282985]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:58:52 compute-0 wonderful_villani[282985]:                 "ceph.type": "block",
Oct 11 04:58:52 compute-0 wonderful_villani[282985]:                 "ceph.vdo": "0"
Oct 11 04:58:52 compute-0 wonderful_villani[282985]:             },
Oct 11 04:58:52 compute-0 wonderful_villani[282985]:             "type": "block",
Oct 11 04:58:52 compute-0 wonderful_villani[282985]:             "vg_name": "ceph_vg1"
Oct 11 04:58:52 compute-0 wonderful_villani[282985]:         }
Oct 11 04:58:52 compute-0 wonderful_villani[282985]:     ],
Oct 11 04:58:52 compute-0 wonderful_villani[282985]:     "2": [
Oct 11 04:58:52 compute-0 wonderful_villani[282985]:         {
Oct 11 04:58:52 compute-0 wonderful_villani[282985]:             "devices": [
Oct 11 04:58:52 compute-0 wonderful_villani[282985]:                 "/dev/loop5"
Oct 11 04:58:52 compute-0 wonderful_villani[282985]:             ],
Oct 11 04:58:52 compute-0 wonderful_villani[282985]:             "lv_name": "ceph_lv2",
Oct 11 04:58:52 compute-0 wonderful_villani[282985]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:58:52 compute-0 wonderful_villani[282985]:             "lv_size": "21470642176",
Oct 11 04:58:52 compute-0 wonderful_villani[282985]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=30486846-2cc7-4787-8e36-0fb42ef328c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:58:52 compute-0 wonderful_villani[282985]:             "lv_uuid": "HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a",
Oct 11 04:58:52 compute-0 wonderful_villani[282985]:             "name": "ceph_lv2",
Oct 11 04:58:52 compute-0 wonderful_villani[282985]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:58:52 compute-0 wonderful_villani[282985]:             "tags": {
Oct 11 04:58:52 compute-0 wonderful_villani[282985]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:58:52 compute-0 wonderful_villani[282985]:                 "ceph.block_uuid": "HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a",
Oct 11 04:58:52 compute-0 wonderful_villani[282985]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:58:52 compute-0 wonderful_villani[282985]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:58:52 compute-0 wonderful_villani[282985]:                 "ceph.cluster_name": "ceph",
Oct 11 04:58:52 compute-0 wonderful_villani[282985]:                 "ceph.crush_device_class": "",
Oct 11 04:58:52 compute-0 wonderful_villani[282985]:                 "ceph.encrypted": "0",
Oct 11 04:58:52 compute-0 wonderful_villani[282985]:                 "ceph.osd_fsid": "30486846-2cc7-4787-8e36-0fb42ef328c5",
Oct 11 04:58:52 compute-0 wonderful_villani[282985]:                 "ceph.osd_id": "2",
Oct 11 04:58:52 compute-0 wonderful_villani[282985]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:58:52 compute-0 wonderful_villani[282985]:                 "ceph.type": "block",
Oct 11 04:58:52 compute-0 wonderful_villani[282985]:                 "ceph.vdo": "0"
Oct 11 04:58:52 compute-0 wonderful_villani[282985]:             },
Oct 11 04:58:52 compute-0 wonderful_villani[282985]:             "type": "block",
Oct 11 04:58:52 compute-0 wonderful_villani[282985]:             "vg_name": "ceph_vg2"
Oct 11 04:58:52 compute-0 wonderful_villani[282985]:         }
Oct 11 04:58:52 compute-0 wonderful_villani[282985]:     ]
Oct 11 04:58:52 compute-0 wonderful_villani[282985]: }
Oct 11 04:58:52 compute-0 systemd[1]: libpod-c11c0dd2b736bbf1244fd657996b181a1910a7569e506f617d860bc11774b492.scope: Deactivated successfully.
Oct 11 04:58:52 compute-0 podman[282969]: 2025-10-11 04:58:52.627538165 +0000 UTC m=+0.962567828 container died c11c0dd2b736bbf1244fd657996b181a1910a7569e506f617d860bc11774b492 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 11 04:58:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-7a2665bd99457438816064ca0619e75f7a4c2d6bcdc6987df2bfe1e552c126e2-merged.mount: Deactivated successfully.
Oct 11 04:58:52 compute-0 podman[282969]: 2025-10-11 04:58:52.695530221 +0000 UTC m=+1.030559854 container remove c11c0dd2b736bbf1244fd657996b181a1910a7569e506f617d860bc11774b492 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_villani, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:58:52 compute-0 systemd[1]: libpod-conmon-c11c0dd2b736bbf1244fd657996b181a1910a7569e506f617d860bc11774b492.scope: Deactivated successfully.
Oct 11 04:58:52 compute-0 sudo[282866]: pam_unix(sudo:session): session closed for user root
Oct 11 04:58:52 compute-0 sudo[283006]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:58:52 compute-0 sudo[283006]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:58:52 compute-0 sudo[283006]: pam_unix(sudo:session): session closed for user root
Oct 11 04:58:52 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1139: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:58:52 compute-0 sudo[283031]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:58:52 compute-0 sudo[283031]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:58:52 compute-0 sudo[283031]: pam_unix(sudo:session): session closed for user root
Oct 11 04:58:53 compute-0 sudo[283056]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:58:53 compute-0 sudo[283056]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:58:53 compute-0 sudo[283056]: pam_unix(sudo:session): session closed for user root
Oct 11 04:58:53 compute-0 sudo[283081]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -- raw list --format json
Oct 11 04:58:53 compute-0 sudo[283081]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:58:53 compute-0 podman[283147]: 2025-10-11 04:58:53.623687184 +0000 UTC m=+0.060596135 container create 9f5bcfa86c6b23b68bf6dbdd194d7d83ddc9f930834f3d38f2d00734a3b1e214 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_jang, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:58:53 compute-0 systemd[1]: Started libpod-conmon-9f5bcfa86c6b23b68bf6dbdd194d7d83ddc9f930834f3d38f2d00734a3b1e214.scope.
Oct 11 04:58:53 compute-0 podman[283147]: 2025-10-11 04:58:53.602615931 +0000 UTC m=+0.039524962 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:58:53 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:58:53 compute-0 podman[283147]: 2025-10-11 04:58:53.718492185 +0000 UTC m=+0.155401216 container init 9f5bcfa86c6b23b68bf6dbdd194d7d83ddc9f930834f3d38f2d00734a3b1e214 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_jang, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 11 04:58:53 compute-0 podman[283147]: 2025-10-11 04:58:53.728951145 +0000 UTC m=+0.165860136 container start 9f5bcfa86c6b23b68bf6dbdd194d7d83ddc9f930834f3d38f2d00734a3b1e214 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_jang, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:58:53 compute-0 podman[283147]: 2025-10-11 04:58:53.732901163 +0000 UTC m=+0.169810184 container attach 9f5bcfa86c6b23b68bf6dbdd194d7d83ddc9f930834f3d38f2d00734a3b1e214 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_jang, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 11 04:58:53 compute-0 sleepy_jang[283164]: 167 167
Oct 11 04:58:53 compute-0 systemd[1]: libpod-9f5bcfa86c6b23b68bf6dbdd194d7d83ddc9f930834f3d38f2d00734a3b1e214.scope: Deactivated successfully.
Oct 11 04:58:53 compute-0 podman[283147]: 2025-10-11 04:58:53.736607105 +0000 UTC m=+0.173516076 container died 9f5bcfa86c6b23b68bf6dbdd194d7d83ddc9f930834f3d38f2d00734a3b1e214 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_jang, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Oct 11 04:58:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-64bb1efe9fcfd66b8d6343787a0b84347df8991f8ebf223c7959aaa00789de61-merged.mount: Deactivated successfully.
Oct 11 04:58:53 compute-0 podman[283147]: 2025-10-11 04:58:53.790739117 +0000 UTC m=+0.227648098 container remove 9f5bcfa86c6b23b68bf6dbdd194d7d83ddc9f930834f3d38f2d00734a3b1e214 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 11 04:58:53 compute-0 systemd[1]: libpod-conmon-9f5bcfa86c6b23b68bf6dbdd194d7d83ddc9f930834f3d38f2d00734a3b1e214.scope: Deactivated successfully.
Oct 11 04:58:54 compute-0 podman[283188]: 2025-10-11 04:58:54.037190361 +0000 UTC m=+0.055587540 container create f85e75459d969a6b72de9d83bae7f6ce82a722c0e9ed1f5ec98a91b1d4eaa4be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_robinson, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 11 04:58:54 compute-0 systemd[1]: Started libpod-conmon-f85e75459d969a6b72de9d83bae7f6ce82a722c0e9ed1f5ec98a91b1d4eaa4be.scope.
Oct 11 04:58:54 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:58:54 compute-0 podman[283188]: 2025-10-11 04:58:54.018245181 +0000 UTC m=+0.036642360 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:58:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3d8aba0a85a92e0220db1d00a13546d60a4f95ea83c2411cabdd61bb48492f0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:58:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3d8aba0a85a92e0220db1d00a13546d60a4f95ea83c2411cabdd61bb48492f0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:58:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3d8aba0a85a92e0220db1d00a13546d60a4f95ea83c2411cabdd61bb48492f0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:58:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3d8aba0a85a92e0220db1d00a13546d60a4f95ea83c2411cabdd61bb48492f0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:58:54 compute-0 podman[283188]: 2025-10-11 04:58:54.134226858 +0000 UTC m=+0.152624027 container init f85e75459d969a6b72de9d83bae7f6ce82a722c0e9ed1f5ec98a91b1d4eaa4be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_robinson, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:58:54 compute-0 podman[283188]: 2025-10-11 04:58:54.139774035 +0000 UTC m=+0.158171234 container start f85e75459d969a6b72de9d83bae7f6ce82a722c0e9ed1f5ec98a91b1d4eaa4be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_robinson, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 11 04:58:54 compute-0 podman[283188]: 2025-10-11 04:58:54.143779495 +0000 UTC m=+0.162176654 container attach f85e75459d969a6b72de9d83bae7f6ce82a722c0e9ed1f5ec98a91b1d4eaa4be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_robinson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 11 04:58:54 compute-0 ceph-mon[74243]: pgmap v1139: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:58:54 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1140: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:58:55 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:58:55 compute-0 infallible_robinson[283205]: {
Oct 11 04:58:55 compute-0 infallible_robinson[283205]:     "235849fc-4683-43e5-9b6a-a0d6f8d1cee8": {
Oct 11 04:58:55 compute-0 infallible_robinson[283205]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:58:55 compute-0 infallible_robinson[283205]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:58:55 compute-0 infallible_robinson[283205]:         "osd_id": 1,
Oct 11 04:58:55 compute-0 infallible_robinson[283205]:         "osd_uuid": "235849fc-4683-43e5-9b6a-a0d6f8d1cee8",
Oct 11 04:58:55 compute-0 infallible_robinson[283205]:         "type": "bluestore"
Oct 11 04:58:55 compute-0 infallible_robinson[283205]:     },
Oct 11 04:58:55 compute-0 infallible_robinson[283205]:     "29ed28f5-c2da-4c6f-bb64-dc7391248f4a": {
Oct 11 04:58:55 compute-0 infallible_robinson[283205]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:58:55 compute-0 infallible_robinson[283205]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:58:55 compute-0 infallible_robinson[283205]:         "osd_id": 0,
Oct 11 04:58:55 compute-0 infallible_robinson[283205]:         "osd_uuid": "29ed28f5-c2da-4c6f-bb64-dc7391248f4a",
Oct 11 04:58:55 compute-0 infallible_robinson[283205]:         "type": "bluestore"
Oct 11 04:58:55 compute-0 infallible_robinson[283205]:     },
Oct 11 04:58:55 compute-0 infallible_robinson[283205]:     "30486846-2cc7-4787-8e36-0fb42ef328c5": {
Oct 11 04:58:55 compute-0 infallible_robinson[283205]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 04:58:55 compute-0 infallible_robinson[283205]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:58:55 compute-0 infallible_robinson[283205]:         "osd_id": 2,
Oct 11 04:58:55 compute-0 infallible_robinson[283205]:         "osd_uuid": "30486846-2cc7-4787-8e36-0fb42ef328c5",
Oct 11 04:58:55 compute-0 infallible_robinson[283205]:         "type": "bluestore"
Oct 11 04:58:55 compute-0 infallible_robinson[283205]:     }
Oct 11 04:58:55 compute-0 infallible_robinson[283205]: }
Oct 11 04:58:55 compute-0 systemd[1]: libpod-f85e75459d969a6b72de9d83bae7f6ce82a722c0e9ed1f5ec98a91b1d4eaa4be.scope: Deactivated successfully.
Oct 11 04:58:55 compute-0 podman[283188]: 2025-10-11 04:58:55.264127325 +0000 UTC m=+1.282524494 container died f85e75459d969a6b72de9d83bae7f6ce82a722c0e9ed1f5ec98a91b1d4eaa4be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_robinson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 11 04:58:55 compute-0 systemd[1]: libpod-f85e75459d969a6b72de9d83bae7f6ce82a722c0e9ed1f5ec98a91b1d4eaa4be.scope: Consumed 1.130s CPU time.
Oct 11 04:58:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-a3d8aba0a85a92e0220db1d00a13546d60a4f95ea83c2411cabdd61bb48492f0-merged.mount: Deactivated successfully.
Oct 11 04:58:55 compute-0 podman[283188]: 2025-10-11 04:58:55.326581765 +0000 UTC m=+1.344978924 container remove f85e75459d969a6b72de9d83bae7f6ce82a722c0e9ed1f5ec98a91b1d4eaa4be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_robinson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 11 04:58:55 compute-0 systemd[1]: libpod-conmon-f85e75459d969a6b72de9d83bae7f6ce82a722c0e9ed1f5ec98a91b1d4eaa4be.scope: Deactivated successfully.
Oct 11 04:58:55 compute-0 sudo[283081]: pam_unix(sudo:session): session closed for user root
Oct 11 04:58:55 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:58:55 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:58:55 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:58:55 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:58:55 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev 4ddfffe5-271f-449c-9064-d089db5d7bc1 does not exist
Oct 11 04:58:55 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev 9455ec6f-8a23-4eb6-8a1c-f4aeff9744f1 does not exist
Oct 11 04:58:55 compute-0 sudo[283250]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:58:55 compute-0 sudo[283250]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:58:55 compute-0 sudo[283250]: pam_unix(sudo:session): session closed for user root
Oct 11 04:58:55 compute-0 sudo[283275]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 04:58:55 compute-0 sudo[283275]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:58:55 compute-0 sudo[283275]: pam_unix(sudo:session): session closed for user root
Oct 11 04:58:56 compute-0 ceph-mgr[74542]: [balancer INFO root] Optimize plan auto_2025-10-11_04:58:56
Oct 11 04:58:56 compute-0 ceph-mgr[74542]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:58:56 compute-0 ceph-mgr[74542]: [balancer INFO root] do_upmap
Oct 11 04:58:56 compute-0 ceph-mgr[74542]: [balancer INFO root] pools ['vms', '.rgw.root', 'default.rgw.meta', 'backups', 'default.rgw.log', 'cephfs.cephfs.data', '.mgr', 'default.rgw.control', 'cephfs.cephfs.meta', 'volumes', 'images']
Oct 11 04:58:56 compute-0 ceph-mgr[74542]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:58:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:58:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:58:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:58:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:58:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:58:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:58:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:58:56 compute-0 ceph-mon[74243]: pgmap v1140: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:58:56 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:58:56 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:58:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:58:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:58:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:58:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:58:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:58:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:58:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:58:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:58:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:58:56 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1141: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:58:58 compute-0 ceph-mon[74243]: pgmap v1141: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:58:58 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1142: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:59:00 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:59:00 compute-0 ceph-mon[74243]: pgmap v1142: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:59:00 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1143: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:59:02 compute-0 ceph-mon[74243]: pgmap v1143: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:59:02 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1144: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:59:03 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:59:03 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/56638160' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:59:03 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:59:03 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/56638160' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:59:03 compute-0 ceph-mon[74243]: from='client.? 192.168.122.10:0/56638160' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:59:03 compute-0 ceph-mon[74243]: from='client.? 192.168.122.10:0/56638160' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:59:04 compute-0 ceph-mon[74243]: pgmap v1144: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:59:04 compute-0 podman[283301]: 2025-10-11 04:59:04.426789078 +0000 UTC m=+0.073095584 container health_status 4f5a84ad32cb899a70b226fbd9c8f419e9440a5ac99b188805a8bf8e9253a2d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team)
Oct 11 04:59:04 compute-0 podman[283302]: 2025-10-11 04:59:04.457634033 +0000 UTC m=+0.105947189 container health_status 981a12d55b51d26e636fa4bc8b08d383168dc6afe664f12473554b2b0c589967 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:59:04 compute-0 podman[283300]: 2025-10-11 04:59:04.459699804 +0000 UTC m=+0.106430141 container health_status 086abfbe8dbe9e4742c5d0e8540a7653c1c0a5c00ecda3b6e948040a718938cb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 11 04:59:04 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1145: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:59:05 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:59:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:59:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:59:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:59:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:59:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 11 04:59:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:59:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 11 04:59:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:59:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:59:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:59:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 11 04:59:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:59:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 11 04:59:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:59:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:59:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:59:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:59:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:59:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:59:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:59:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:59:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:59:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 04:59:06 compute-0 ceph-mon[74243]: pgmap v1145: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:59:06 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1146: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:59:08 compute-0 ceph-mon[74243]: pgmap v1146: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:59:08 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1147: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:59:10 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:59:10 compute-0 ceph-mon[74243]: pgmap v1147: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:59:10 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1148: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:59:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:59:11.026 161813 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:59:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:59:11.028 161813 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:59:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 04:59:11.028 161813 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:59:12 compute-0 podman[283363]: 2025-10-11 04:59:12.432452041 +0000 UTC m=+0.079535674 container health_status 7d738271425699243ade5f883e5c9759d51b96fd0ce562173990a66d2fbaffdf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 11 04:59:12 compute-0 ceph-mon[74243]: pgmap v1148: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:59:12 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1149: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:59:14 compute-0 ceph-mon[74243]: pgmap v1149: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:59:14 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1150: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:59:15 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:59:16 compute-0 ceph-mon[74243]: pgmap v1150: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:59:16 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1151: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:59:18 compute-0 ceph-mon[74243]: pgmap v1151: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:59:18 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1152: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:59:20 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:59:20 compute-0 nova_compute[259400]: 2025-10-11 04:59:20.196 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:59:20 compute-0 nova_compute[259400]: 2025-10-11 04:59:20.197 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:59:20 compute-0 nova_compute[259400]: 2025-10-11 04:59:20.197 2 DEBUG nova.compute.manager [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 04:59:20 compute-0 ceph-mon[74243]: pgmap v1152: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:59:20 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1153: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:59:21 compute-0 nova_compute[259400]: 2025-10-11 04:59:21.196 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:59:22 compute-0 nova_compute[259400]: 2025-10-11 04:59:22.196 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:59:22 compute-0 nova_compute[259400]: 2025-10-11 04:59:22.197 2 DEBUG nova.compute.manager [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 04:59:22 compute-0 nova_compute[259400]: 2025-10-11 04:59:22.197 2 DEBUG nova.compute.manager [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 11 04:59:22 compute-0 nova_compute[259400]: 2025-10-11 04:59:22.214 2 DEBUG nova.compute.manager [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 11 04:59:22 compute-0 ceph-mon[74243]: pgmap v1153: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:59:22 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1154: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:59:23 compute-0 nova_compute[259400]: 2025-10-11 04:59:23.196 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:59:23 compute-0 nova_compute[259400]: 2025-10-11 04:59:23.197 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:59:24 compute-0 ceph-mon[74243]: pgmap v1154: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:59:24 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1155: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:59:25 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:59:25 compute-0 nova_compute[259400]: 2025-10-11 04:59:25.192 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:59:25 compute-0 nova_compute[259400]: 2025-10-11 04:59:25.196 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:59:25 compute-0 nova_compute[259400]: 2025-10-11 04:59:25.231 2 DEBUG oslo_concurrency.lockutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:59:25 compute-0 nova_compute[259400]: 2025-10-11 04:59:25.231 2 DEBUG oslo_concurrency.lockutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:59:25 compute-0 nova_compute[259400]: 2025-10-11 04:59:25.232 2 DEBUG oslo_concurrency.lockutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:59:25 compute-0 nova_compute[259400]: 2025-10-11 04:59:25.232 2 DEBUG nova.compute.resource_tracker [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 04:59:25 compute-0 nova_compute[259400]: 2025-10-11 04:59:25.233 2 DEBUG oslo_concurrency.processutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:59:25 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:59:25 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2194518173' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:59:25 compute-0 nova_compute[259400]: 2025-10-11 04:59:25.711 2 DEBUG oslo_concurrency.processutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:59:25 compute-0 nova_compute[259400]: 2025-10-11 04:59:25.997 2 WARNING nova.virt.libvirt.driver [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 04:59:26 compute-0 nova_compute[259400]: 2025-10-11 04:59:25.999 2 DEBUG nova.compute.resource_tracker [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5009MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 04:59:26 compute-0 nova_compute[259400]: 2025-10-11 04:59:26.000 2 DEBUG oslo_concurrency.lockutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:59:26 compute-0 nova_compute[259400]: 2025-10-11 04:59:26.000 2 DEBUG oslo_concurrency.lockutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:59:26 compute-0 nova_compute[259400]: 2025-10-11 04:59:26.086 2 DEBUG nova.compute.resource_tracker [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 04:59:26 compute-0 nova_compute[259400]: 2025-10-11 04:59:26.087 2 DEBUG nova.compute.resource_tracker [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 04:59:26 compute-0 nova_compute[259400]: 2025-10-11 04:59:26.109 2 DEBUG oslo_concurrency.processutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:59:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:59:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:59:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:59:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:59:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:59:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:59:26 compute-0 ceph-mon[74243]: pgmap v1155: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:59:26 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/2194518173' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:59:26 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:59:26 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1854190754' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:59:26 compute-0 nova_compute[259400]: 2025-10-11 04:59:26.600 2 DEBUG oslo_concurrency.processutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:59:26 compute-0 nova_compute[259400]: 2025-10-11 04:59:26.608 2 DEBUG nova.compute.provider_tree [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Inventory has not changed in ProviderTree for provider: 1f05a244-23b6-4149-9b5a-a525e5860d18 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:59:26 compute-0 nova_compute[259400]: 2025-10-11 04:59:26.631 2 DEBUG nova.scheduler.client.report [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Inventory has not changed for provider 1f05a244-23b6-4149-9b5a-a525e5860d18 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:59:26 compute-0 nova_compute[259400]: 2025-10-11 04:59:26.633 2 DEBUG nova.compute.resource_tracker [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 04:59:26 compute-0 nova_compute[259400]: 2025-10-11 04:59:26.634 2 DEBUG oslo_concurrency.lockutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.634s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:59:26 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1156: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:59:27 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/1854190754' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:59:27 compute-0 nova_compute[259400]: 2025-10-11 04:59:27.635 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:59:28 compute-0 ceph-mon[74243]: pgmap v1156: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:59:28 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1157: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:59:30 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:59:30 compute-0 ceph-mon[74243]: pgmap v1157: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:59:30 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1158: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:59:32 compute-0 ceph-mon[74243]: pgmap v1158: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:59:32 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1159: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:59:34 compute-0 ceph-mon[74243]: pgmap v1159: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:59:34 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1160: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:59:35 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:59:35 compute-0 ceph-mon[74243]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Oct 11 04:59:35 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:59:35.196415) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 04:59:35 compute-0 ceph-mon[74243]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Oct 11 04:59:35 compute-0 ceph-mon[74243]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760158775196485, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 996, "num_deletes": 255, "total_data_size": 1419287, "memory_usage": 1448288, "flush_reason": "Manual Compaction"}
Oct 11 04:59:35 compute-0 ceph-mon[74243]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Oct 11 04:59:35 compute-0 ceph-mon[74243]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760158775210067, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 1406265, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23301, "largest_seqno": 24296, "table_properties": {"data_size": 1401376, "index_size": 2413, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 10079, "raw_average_key_size": 18, "raw_value_size": 1391603, "raw_average_value_size": 2577, "num_data_blocks": 110, "num_entries": 540, "num_filter_entries": 540, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760158678, "oldest_key_time": 1760158678, "file_creation_time": 1760158775, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7a520806-b9ee-4391-a2e1-17ca2b78e946", "db_session_id": "LQX577T3ABHDCRGGY4EM", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:59:35 compute-0 ceph-mon[74243]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 13724 microseconds, and 7866 cpu microseconds.
Oct 11 04:59:35 compute-0 ceph-mon[74243]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 04:59:35 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:59:35.210145) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 1406265 bytes OK
Oct 11 04:59:35 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:59:35.210173) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Oct 11 04:59:35 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:59:35.212975) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Oct 11 04:59:35 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:59:35.213015) EVENT_LOG_v1 {"time_micros": 1760158775213006, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 04:59:35 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:59:35.213040) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 04:59:35 compute-0 ceph-mon[74243]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 1414556, prev total WAL file size 1414556, number of live WAL files 2.
Oct 11 04:59:35 compute-0 ceph-mon[74243]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:59:35 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:59:35.214457) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353034' seq:72057594037927935, type:22 .. '6C6F676D00373535' seq:0, type:0; will stop at (end)
Oct 11 04:59:35 compute-0 ceph-mon[74243]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 04:59:35 compute-0 ceph-mon[74243]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(1373KB)], [53(8974KB)]
Oct 11 04:59:35 compute-0 ceph-mon[74243]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760158775215031, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 10595941, "oldest_snapshot_seqno": -1}
Oct 11 04:59:35 compute-0 podman[283428]: 2025-10-11 04:59:35.284115543 +0000 UTC m=+0.060552843 container health_status 981a12d55b51d26e636fa4bc8b08d383168dc6afe664f12473554b2b0c589967 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true)
Oct 11 04:59:35 compute-0 ceph-mon[74243]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 4925 keys, 10500941 bytes, temperature: kUnknown
Oct 11 04:59:35 compute-0 ceph-mon[74243]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760158775289734, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 10500941, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10462992, "index_size": 24527, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12357, "raw_key_size": 122047, "raw_average_key_size": 24, "raw_value_size": 10369097, "raw_average_value_size": 2105, "num_data_blocks": 1029, "num_entries": 4925, "num_filter_entries": 4925, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760156706, "oldest_key_time": 0, "file_creation_time": 1760158775, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7a520806-b9ee-4391-a2e1-17ca2b78e946", "db_session_id": "LQX577T3ABHDCRGGY4EM", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:59:35 compute-0 ceph-mon[74243]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 04:59:35 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:59:35.290319) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 10500941 bytes
Oct 11 04:59:35 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:59:35.292274) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 141.5 rd, 140.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 8.8 +0.0 blob) out(10.0 +0.0 blob), read-write-amplify(15.0) write-amplify(7.5) OK, records in: 5447, records dropped: 522 output_compression: NoCompression
Oct 11 04:59:35 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:59:35.292309) EVENT_LOG_v1 {"time_micros": 1760158775292292, "job": 28, "event": "compaction_finished", "compaction_time_micros": 74890, "compaction_time_cpu_micros": 46002, "output_level": 6, "num_output_files": 1, "total_output_size": 10500941, "num_input_records": 5447, "num_output_records": 4925, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 04:59:35 compute-0 ceph-mon[74243]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:59:35 compute-0 ceph-mon[74243]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760158775293655, "job": 28, "event": "table_file_deletion", "file_number": 55}
Oct 11 04:59:35 compute-0 ceph-mon[74243]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:59:35 compute-0 ceph-mon[74243]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760158775297258, "job": 28, "event": "table_file_deletion", "file_number": 53}
Oct 11 04:59:35 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:59:35.214051) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:59:35 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:59:35.298222) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:59:35 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:59:35.298230) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:59:35 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:59:35.298232) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:59:35 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:59:35.298233) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:59:35 compute-0 ceph-mon[74243]: rocksdb: (Original Log Time 2025/10/11-04:59:35.298235) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:59:35 compute-0 podman[283427]: 2025-10-11 04:59:35.315304847 +0000 UTC m=+0.096005963 container health_status 4f5a84ad32cb899a70b226fbd9c8f419e9440a5ac99b188805a8bf8e9253a2d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, org.label-schema.build-date=20251009, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 11 04:59:35 compute-0 podman[283426]: 2025-10-11 04:59:35.326115025 +0000 UTC m=+0.101863328 container health_status 086abfbe8dbe9e4742c5d0e8540a7653c1c0a5c00ecda3b6e948040a718938cb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, managed_by=edpm_ansible, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 04:59:36 compute-0 ceph-mon[74243]: pgmap v1160: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:59:36 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1161: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:59:38 compute-0 ceph-mon[74243]: pgmap v1161: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:59:38 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1162: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:59:40 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:59:40 compute-0 ceph-mon[74243]: pgmap v1162: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:59:40 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1163: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:59:42 compute-0 ceph-mon[74243]: pgmap v1163: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:59:42 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1164: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:59:43 compute-0 podman[283489]: 2025-10-11 04:59:43.432133977 +0000 UTC m=+0.083906223 container health_status 7d738271425699243ade5f883e5c9759d51b96fd0ce562173990a66d2fbaffdf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent)
Oct 11 04:59:44 compute-0 ceph-mon[74243]: pgmap v1164: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:59:44 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1165: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:59:45 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:59:46 compute-0 ceph-mon[74243]: pgmap v1165: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:59:46 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1166: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:59:48 compute-0 ceph-mon[74243]: pgmap v1166: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:59:48 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1167: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:59:50 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:59:50 compute-0 ceph-mon[74243]: pgmap v1167: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:59:50 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1168: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:59:52 compute-0 ceph-mon[74243]: pgmap v1168: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:59:52 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1169: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:59:54 compute-0 ceph-mon[74243]: pgmap v1169: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:59:54 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1170: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:59:55 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:59:55 compute-0 sudo[283510]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:59:55 compute-0 sudo[283510]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:59:55 compute-0 sudo[283510]: pam_unix(sudo:session): session closed for user root
Oct 11 04:59:55 compute-0 sudo[283535]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:59:55 compute-0 sudo[283535]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:59:55 compute-0 sudo[283535]: pam_unix(sudo:session): session closed for user root
Oct 11 04:59:55 compute-0 sudo[283560]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:59:55 compute-0 sudo[283560]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:59:55 compute-0 sudo[283560]: pam_unix(sudo:session): session closed for user root
Oct 11 04:59:55 compute-0 sudo[283585]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 11 04:59:55 compute-0 sudo[283585]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:59:56 compute-0 ceph-mgr[74542]: [balancer INFO root] Optimize plan auto_2025-10-11_04:59:56
Oct 11 04:59:56 compute-0 ceph-mgr[74542]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:59:56 compute-0 ceph-mgr[74542]: [balancer INFO root] do_upmap
Oct 11 04:59:56 compute-0 ceph-mgr[74542]: [balancer INFO root] pools ['default.rgw.control', '.rgw.root', 'vms', 'cephfs.cephfs.data', 'images', 'backups', 'cephfs.cephfs.meta', 'default.rgw.log', '.mgr', 'default.rgw.meta', 'volumes']
Oct 11 04:59:56 compute-0 ceph-mgr[74542]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:59:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:59:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:59:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:59:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:59:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:59:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:59:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:59:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:59:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:59:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:59:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:59:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:59:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:59:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:59:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:59:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:59:56 compute-0 ceph-mon[74243]: pgmap v1170: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:59:56 compute-0 podman[283684]: 2025-10-11 04:59:56.68715952 +0000 UTC m=+0.098925325 container exec 9e6c9a4e99dcfe74f2acde1b3251aae6bd833ab8c3a16ed08813556f7d4dbff5 (image=quay.io/ceph/ceph:v18, name=ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mon-compute-0, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:59:56 compute-0 podman[283684]: 2025-10-11 04:59:56.824295351 +0000 UTC m=+0.236061066 container exec_died 9e6c9a4e99dcfe74f2acde1b3251aae6bd833ab8c3a16ed08813556f7d4dbff5 (image=quay.io/ceph/ceph:v18, name=ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mon-compute-0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 11 04:59:56 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1171: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:59:57 compute-0 sudo[283585]: pam_unix(sudo:session): session closed for user root
Oct 11 04:59:57 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:59:57 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:59:57 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:59:57 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:59:57 compute-0 sudo[283842]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:59:57 compute-0 sudo[283842]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:59:57 compute-0 sudo[283842]: pam_unix(sudo:session): session closed for user root
Oct 11 04:59:57 compute-0 sudo[283867]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:59:57 compute-0 sudo[283867]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:59:57 compute-0 sudo[283867]: pam_unix(sudo:session): session closed for user root
Oct 11 04:59:57 compute-0 sudo[283892]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:59:57 compute-0 sudo[283892]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:59:57 compute-0 sudo[283892]: pam_unix(sudo:session): session closed for user root
Oct 11 04:59:57 compute-0 sudo[283917]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 04:59:57 compute-0 sudo[283917]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:59:58 compute-0 sudo[283917]: pam_unix(sudo:session): session closed for user root
Oct 11 04:59:58 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:59:58 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:59:58 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:59:58 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:59:58 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:59:58 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:59:58 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev cdc162f0-77f3-4306-a12a-8ca933bc8faa does not exist
Oct 11 04:59:58 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev 17711262-8b45-4503-b08c-f9dee99a6e50 does not exist
Oct 11 04:59:58 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev d92eec90-87ac-4f34-8105-0756af089c2d does not exist
Oct 11 04:59:58 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:59:58 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:59:58 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:59:58 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:59:58 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:59:58 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:59:58 compute-0 ceph-mon[74243]: pgmap v1171: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:59:58 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:59:58 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:59:58 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:59:58 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:59:58 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 04:59:58 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:59:58 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:59:58 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:59:58 compute-0 sudo[283974]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:59:58 compute-0 sudo[283974]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:59:58 compute-0 sudo[283974]: pam_unix(sudo:session): session closed for user root
Oct 11 04:59:58 compute-0 sudo[283999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:59:58 compute-0 sudo[283999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:59:58 compute-0 sudo[283999]: pam_unix(sudo:session): session closed for user root
Oct 11 04:59:58 compute-0 sudo[284024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:59:58 compute-0 sudo[284024]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:59:58 compute-0 sudo[284024]: pam_unix(sudo:session): session closed for user root
Oct 11 04:59:58 compute-0 sudo[284049]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 04:59:58 compute-0 sudo[284049]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:59:58 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1172: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:59:59 compute-0 podman[284117]: 2025-10-11 04:59:59.267661169 +0000 UTC m=+0.057535738 container create 140f9525d89c3a963169c961a3d36d1d3fdc5a6b04111cd52822e7124de015df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 11 04:59:59 compute-0 systemd[1]: Started libpod-conmon-140f9525d89c3a963169c961a3d36d1d3fdc5a6b04111cd52822e7124de015df.scope.
Oct 11 04:59:59 compute-0 podman[284117]: 2025-10-11 04:59:59.246557926 +0000 UTC m=+0.036432505 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:59:59 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:59:59 compute-0 podman[284117]: 2025-10-11 04:59:59.375496384 +0000 UTC m=+0.165370993 container init 140f9525d89c3a963169c961a3d36d1d3fdc5a6b04111cd52822e7124de015df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_cray, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:59:59 compute-0 podman[284117]: 2025-10-11 04:59:59.386775344 +0000 UTC m=+0.176649923 container start 140f9525d89c3a963169c961a3d36d1d3fdc5a6b04111cd52822e7124de015df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:59:59 compute-0 podman[284117]: 2025-10-11 04:59:59.391671485 +0000 UTC m=+0.181546114 container attach 140f9525d89c3a963169c961a3d36d1d3fdc5a6b04111cd52822e7124de015df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_cray, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 11 04:59:59 compute-0 goofy_cray[284133]: 167 167
Oct 11 04:59:59 compute-0 systemd[1]: libpod-140f9525d89c3a963169c961a3d36d1d3fdc5a6b04111cd52822e7124de015df.scope: Deactivated successfully.
Oct 11 04:59:59 compute-0 podman[284117]: 2025-10-11 04:59:59.393562592 +0000 UTC m=+0.183437191 container died 140f9525d89c3a963169c961a3d36d1d3fdc5a6b04111cd52822e7124de015df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:59:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-19ed6b9492517bda7be4e4735df70d647e322a75bd280758185dcfd4f7a528fc-merged.mount: Deactivated successfully.
Oct 11 04:59:59 compute-0 podman[284117]: 2025-10-11 04:59:59.449779246 +0000 UTC m=+0.239653826 container remove 140f9525d89c3a963169c961a3d36d1d3fdc5a6b04111cd52822e7124de015df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_cray, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:59:59 compute-0 systemd[1]: libpod-conmon-140f9525d89c3a963169c961a3d36d1d3fdc5a6b04111cd52822e7124de015df.scope: Deactivated successfully.
Oct 11 04:59:59 compute-0 podman[284156]: 2025-10-11 04:59:59.677115506 +0000 UTC m=+0.055594310 container create 5c2fd0f787b1f21544fed90cb79f6e8b8ee84f41a9d95bbb5dd363bf679f10b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_merkle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:59:59 compute-0 systemd[1]: Started libpod-conmon-5c2fd0f787b1f21544fed90cb79f6e8b8ee84f41a9d95bbb5dd363bf679f10b8.scope.
Oct 11 04:59:59 compute-0 podman[284156]: 2025-10-11 04:59:59.657009128 +0000 UTC m=+0.035487922 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:59:59 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:59:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/921b0b4587fe1aa5e5ec6c6d228dbf9f742408624070ca66a335aedd6bd9d82a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:59:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/921b0b4587fe1aa5e5ec6c6d228dbf9f742408624070ca66a335aedd6bd9d82a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:59:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/921b0b4587fe1aa5e5ec6c6d228dbf9f742408624070ca66a335aedd6bd9d82a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:59:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/921b0b4587fe1aa5e5ec6c6d228dbf9f742408624070ca66a335aedd6bd9d82a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:59:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/921b0b4587fe1aa5e5ec6c6d228dbf9f742408624070ca66a335aedd6bd9d82a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:59:59 compute-0 podman[284156]: 2025-10-11 04:59:59.781088645 +0000 UTC m=+0.159567499 container init 5c2fd0f787b1f21544fed90cb79f6e8b8ee84f41a9d95bbb5dd363bf679f10b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:59:59 compute-0 podman[284156]: 2025-10-11 04:59:59.792047347 +0000 UTC m=+0.170526121 container start 5c2fd0f787b1f21544fed90cb79f6e8b8ee84f41a9d95bbb5dd363bf679f10b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 11 04:59:59 compute-0 podman[284156]: 2025-10-11 04:59:59.795521853 +0000 UTC m=+0.174000727 container attach 5c2fd0f787b1f21544fed90cb79f6e8b8ee84f41a9d95bbb5dd363bf679f10b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_merkle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Oct 11 05:00:00 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:00:00 compute-0 ceph-mon[74243]: pgmap v1172: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 05:00:00 compute-0 competent_merkle[284173]: --> passed data devices: 0 physical, 3 LVM
Oct 11 05:00:00 compute-0 competent_merkle[284173]: --> relative data size: 1.0
Oct 11 05:00:00 compute-0 competent_merkle[284173]: --> All data devices are unavailable
Oct 11 05:00:00 compute-0 systemd[1]: libpod-5c2fd0f787b1f21544fed90cb79f6e8b8ee84f41a9d95bbb5dd363bf679f10b8.scope: Deactivated successfully.
Oct 11 05:00:00 compute-0 podman[284156]: 2025-10-11 05:00:00.90117914 +0000 UTC m=+1.279657964 container died 5c2fd0f787b1f21544fed90cb79f6e8b8ee84f41a9d95bbb5dd363bf679f10b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_merkle, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 11 05:00:00 compute-0 systemd[1]: libpod-5c2fd0f787b1f21544fed90cb79f6e8b8ee84f41a9d95bbb5dd363bf679f10b8.scope: Consumed 1.058s CPU time.
Oct 11 05:00:00 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1173: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 05:00:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-921b0b4587fe1aa5e5ec6c6d228dbf9f742408624070ca66a335aedd6bd9d82a-merged.mount: Deactivated successfully.
Oct 11 05:00:00 compute-0 podman[284156]: 2025-10-11 05:00:00.976958609 +0000 UTC m=+1.355437413 container remove 5c2fd0f787b1f21544fed90cb79f6e8b8ee84f41a9d95bbb5dd363bf679f10b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:00:00 compute-0 systemd[1]: libpod-conmon-5c2fd0f787b1f21544fed90cb79f6e8b8ee84f41a9d95bbb5dd363bf679f10b8.scope: Deactivated successfully.
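The competent_merkle container above is cephadm's OSD drive-group evaluation: ceph-volume saw 0 physical and 3 LVM data devices and reported all of them unavailable because they already back the existing OSDs, so no new OSDs were deployed. Below is a minimal sketch of how that check could be repeated by hand on this host, assuming the cephadm wrapper path and fsid shown in the sudo commands in this log; the inventory subcommand and its available/rejected_reasons fields are standard ceph-volume output but should be treated as assumptions here, since this log never actually runs it:

    import json
    import subprocess

    # Wrapper path and fsid copied from the sudo command lines in this log; the
    # "inventory" call itself is illustrative and does not appear in the log.
    CEPHADM = ("/var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/"
               "cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")
    FSID = "166d0489-2ae7-59eb-961c-c1b5cda4b45a"

    out = subprocess.run(
        ["sudo", "python3", CEPHADM, "ceph-volume", "--fsid", FSID,
         "--", "inventory", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout

    for dev in json.loads(out):
        # Field names assumed from ceph-volume inventory's JSON schema.
        print(dev.get("path"), dev.get("available"), dev.get("rejected_reasons"))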
Oct 11 05:00:01 compute-0 sudo[284049]: pam_unix(sudo:session): session closed for user root
Oct 11 05:00:01 compute-0 sudo[284212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 05:00:01 compute-0 sudo[284212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 05:00:01 compute-0 sudo[284212]: pam_unix(sudo:session): session closed for user root
Oct 11 05:00:01 compute-0 sudo[284237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 05:00:01 compute-0 sudo[284237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 05:00:01 compute-0 sudo[284237]: pam_unix(sudo:session): session closed for user root
Oct 11 05:00:01 compute-0 sudo[284262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 05:00:01 compute-0 sudo[284262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 05:00:01 compute-0 sudo[284262]: pam_unix(sudo:session): session closed for user root
Oct 11 05:00:01 compute-0 sudo[284287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -- lvm list --format json
Oct 11 05:00:01 compute-0 sudo[284287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 05:00:01 compute-0 podman[284353]: 2025-10-11 05:00:01.816045832 +0000 UTC m=+0.060962793 container create 20315f077f63dc02294fcc93862665db87b17d7432d78c86410df0f1b2a911b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_khorana, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:00:01 compute-0 systemd[1]: Started libpod-conmon-20315f077f63dc02294fcc93862665db87b17d7432d78c86410df0f1b2a911b6.scope.
Oct 11 05:00:01 compute-0 podman[284353]: 2025-10-11 05:00:01.794065137 +0000 UTC m=+0.038982198 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:00:01 compute-0 systemd[1]: Started libcrun container.
Oct 11 05:00:01 compute-0 podman[284353]: 2025-10-11 05:00:01.912520275 +0000 UTC m=+0.157437266 container init 20315f077f63dc02294fcc93862665db87b17d7432d78c86410df0f1b2a911b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_khorana, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:00:01 compute-0 podman[284353]: 2025-10-11 05:00:01.925320653 +0000 UTC m=+0.170237644 container start 20315f077f63dc02294fcc93862665db87b17d7432d78c86410df0f1b2a911b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_khorana, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 05:00:01 compute-0 podman[284353]: 2025-10-11 05:00:01.929555958 +0000 UTC m=+0.174473009 container attach 20315f077f63dc02294fcc93862665db87b17d7432d78c86410df0f1b2a911b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_khorana, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:00:01 compute-0 eager_khorana[284370]: 167 167
Oct 11 05:00:01 compute-0 systemd[1]: libpod-20315f077f63dc02294fcc93862665db87b17d7432d78c86410df0f1b2a911b6.scope: Deactivated successfully.
Oct 11 05:00:01 compute-0 podman[284353]: 2025-10-11 05:00:01.932297576 +0000 UTC m=+0.177214567 container died 20315f077f63dc02294fcc93862665db87b17d7432d78c86410df0f1b2a911b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_khorana, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 11 05:00:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-2a6c6e456233a3360770b7b7596533632cabc7cf0c5f2912d3838011eb9de2dd-merged.mount: Deactivated successfully.
Oct 11 05:00:01 compute-0 podman[284353]: 2025-10-11 05:00:01.987522876 +0000 UTC m=+0.232439877 container remove 20315f077f63dc02294fcc93862665db87b17d7432d78c86410df0f1b2a911b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_khorana, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:00:02 compute-0 systemd[1]: libpod-conmon-20315f077f63dc02294fcc93862665db87b17d7432d78c86410df0f1b2a911b6.scope: Deactivated successfully.
Oct 11 05:00:02 compute-0 podman[284396]: 2025-10-11 05:00:02.225319425 +0000 UTC m=+0.060534523 container create 3e42dc791fadfb8df3149a4de0ebe388e94308cd0f2882f440f4abf7f64bd651 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_ritchie, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 11 05:00:02 compute-0 systemd[1]: Started libpod-conmon-3e42dc791fadfb8df3149a4de0ebe388e94308cd0f2882f440f4abf7f64bd651.scope.
Oct 11 05:00:02 compute-0 podman[284396]: 2025-10-11 05:00:02.204368345 +0000 UTC m=+0.039583483 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:00:02 compute-0 systemd[1]: Started libcrun container.
Oct 11 05:00:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b01e99fc898c06d55a3059854a5feb1b845add3bc53774c6c2512b5a2b0c620/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:00:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b01e99fc898c06d55a3059854a5feb1b845add3bc53774c6c2512b5a2b0c620/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:00:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b01e99fc898c06d55a3059854a5feb1b845add3bc53774c6c2512b5a2b0c620/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:00:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b01e99fc898c06d55a3059854a5feb1b845add3bc53774c6c2512b5a2b0c620/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:00:02 compute-0 podman[284396]: 2025-10-11 05:00:02.335566949 +0000 UTC m=+0.170782097 container init 3e42dc791fadfb8df3149a4de0ebe388e94308cd0f2882f440f4abf7f64bd651 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:00:02 compute-0 podman[284396]: 2025-10-11 05:00:02.346358417 +0000 UTC m=+0.181573515 container start 3e42dc791fadfb8df3149a4de0ebe388e94308cd0f2882f440f4abf7f64bd651 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:00:02 compute-0 podman[284396]: 2025-10-11 05:00:02.350162811 +0000 UTC m=+0.185377949 container attach 3e42dc791fadfb8df3149a4de0ebe388e94308cd0f2882f440f4abf7f64bd651 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_ritchie, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:00:02 compute-0 ceph-mon[74243]: pgmap v1173: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 05:00:02 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1174: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 05:00:03 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 05:00:03 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/893450616' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 05:00:03 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 05:00:03 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/893450616' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 05:00:03 compute-0 peaceful_ritchie[284415]: {
Oct 11 05:00:03 compute-0 peaceful_ritchie[284415]:     "0": [
Oct 11 05:00:03 compute-0 peaceful_ritchie[284415]:         {
Oct 11 05:00:03 compute-0 peaceful_ritchie[284415]:             "devices": [
Oct 11 05:00:03 compute-0 peaceful_ritchie[284415]:                 "/dev/loop3"
Oct 11 05:00:03 compute-0 peaceful_ritchie[284415]:             ],
Oct 11 05:00:03 compute-0 peaceful_ritchie[284415]:             "lv_name": "ceph_lv0",
Oct 11 05:00:03 compute-0 peaceful_ritchie[284415]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:00:03 compute-0 peaceful_ritchie[284415]:             "lv_size": "21470642176",
Oct 11 05:00:03 compute-0 peaceful_ritchie[284415]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29ed28f5-c2da-4c6f-bb64-dc7391248f4a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:00:03 compute-0 peaceful_ritchie[284415]:             "lv_uuid": "SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf",
Oct 11 05:00:03 compute-0 peaceful_ritchie[284415]:             "name": "ceph_lv0",
Oct 11 05:00:03 compute-0 peaceful_ritchie[284415]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:00:03 compute-0 peaceful_ritchie[284415]:             "tags": {
Oct 11 05:00:03 compute-0 peaceful_ritchie[284415]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:00:03 compute-0 peaceful_ritchie[284415]:                 "ceph.block_uuid": "SWMD0q-1AzW-R5sq-p1Kx-tD5V-LaV9-29t8Kf",
Oct 11 05:00:03 compute-0 peaceful_ritchie[284415]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 05:00:03 compute-0 peaceful_ritchie[284415]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 05:00:03 compute-0 peaceful_ritchie[284415]:                 "ceph.cluster_name": "ceph",
Oct 11 05:00:03 compute-0 peaceful_ritchie[284415]:                 "ceph.crush_device_class": "",
Oct 11 05:00:03 compute-0 peaceful_ritchie[284415]:                 "ceph.encrypted": "0",
Oct 11 05:00:03 compute-0 peaceful_ritchie[284415]:                 "ceph.osd_fsid": "29ed28f5-c2da-4c6f-bb64-dc7391248f4a",
Oct 11 05:00:03 compute-0 peaceful_ritchie[284415]:                 "ceph.osd_id": "0",
Oct 11 05:00:03 compute-0 peaceful_ritchie[284415]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:00:03 compute-0 peaceful_ritchie[284415]:                 "ceph.type": "block",
Oct 11 05:00:03 compute-0 peaceful_ritchie[284415]:                 "ceph.vdo": "0"
Oct 11 05:00:03 compute-0 peaceful_ritchie[284415]:             },
Oct 11 05:00:03 compute-0 peaceful_ritchie[284415]:             "type": "block",
Oct 11 05:00:03 compute-0 peaceful_ritchie[284415]:             "vg_name": "ceph_vg0"
Oct 11 05:00:03 compute-0 peaceful_ritchie[284415]:         }
Oct 11 05:00:03 compute-0 peaceful_ritchie[284415]:     ],
Oct 11 05:00:03 compute-0 peaceful_ritchie[284415]:     "1": [
Oct 11 05:00:03 compute-0 peaceful_ritchie[284415]:         {
Oct 11 05:00:03 compute-0 peaceful_ritchie[284415]:             "devices": [
Oct 11 05:00:03 compute-0 peaceful_ritchie[284415]:                 "/dev/loop4"
Oct 11 05:00:03 compute-0 peaceful_ritchie[284415]:             ],
Oct 11 05:00:03 compute-0 peaceful_ritchie[284415]:             "lv_name": "ceph_lv1",
Oct 11 05:00:03 compute-0 peaceful_ritchie[284415]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:00:03 compute-0 peaceful_ritchie[284415]:             "lv_size": "21470642176",
Oct 11 05:00:03 compute-0 peaceful_ritchie[284415]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=235849fc-4683-43e5-9b6a-a0d6f8d1cee8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:00:03 compute-0 peaceful_ritchie[284415]:             "lv_uuid": "N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol",
Oct 11 05:00:03 compute-0 peaceful_ritchie[284415]:             "name": "ceph_lv1",
Oct 11 05:00:03 compute-0 peaceful_ritchie[284415]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:00:03 compute-0 peaceful_ritchie[284415]:             "tags": {
Oct 11 05:00:03 compute-0 peaceful_ritchie[284415]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:00:03 compute-0 peaceful_ritchie[284415]:                 "ceph.block_uuid": "N1L1G7-fAC3-xAp0-k03o-feGN-gxzW-Tdcgol",
Oct 11 05:00:03 compute-0 peaceful_ritchie[284415]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 05:00:03 compute-0 peaceful_ritchie[284415]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 05:00:03 compute-0 peaceful_ritchie[284415]:                 "ceph.cluster_name": "ceph",
Oct 11 05:00:03 compute-0 peaceful_ritchie[284415]:                 "ceph.crush_device_class": "",
Oct 11 05:00:03 compute-0 peaceful_ritchie[284415]:                 "ceph.encrypted": "0",
Oct 11 05:00:03 compute-0 peaceful_ritchie[284415]:                 "ceph.osd_fsid": "235849fc-4683-43e5-9b6a-a0d6f8d1cee8",
Oct 11 05:00:03 compute-0 peaceful_ritchie[284415]:                 "ceph.osd_id": "1",
Oct 11 05:00:03 compute-0 peaceful_ritchie[284415]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:00:03 compute-0 peaceful_ritchie[284415]:                 "ceph.type": "block",
Oct 11 05:00:03 compute-0 peaceful_ritchie[284415]:                 "ceph.vdo": "0"
Oct 11 05:00:03 compute-0 peaceful_ritchie[284415]:             },
Oct 11 05:00:03 compute-0 peaceful_ritchie[284415]:             "type": "block",
Oct 11 05:00:03 compute-0 peaceful_ritchie[284415]:             "vg_name": "ceph_vg1"
Oct 11 05:00:03 compute-0 peaceful_ritchie[284415]:         }
Oct 11 05:00:03 compute-0 peaceful_ritchie[284415]:     ],
Oct 11 05:00:03 compute-0 peaceful_ritchie[284415]:     "2": [
Oct 11 05:00:03 compute-0 peaceful_ritchie[284415]:         {
Oct 11 05:00:03 compute-0 peaceful_ritchie[284415]:             "devices": [
Oct 11 05:00:03 compute-0 peaceful_ritchie[284415]:                 "/dev/loop5"
Oct 11 05:00:03 compute-0 peaceful_ritchie[284415]:             ],
Oct 11 05:00:03 compute-0 peaceful_ritchie[284415]:             "lv_name": "ceph_lv2",
Oct 11 05:00:03 compute-0 peaceful_ritchie[284415]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:00:03 compute-0 peaceful_ritchie[284415]:             "lv_size": "21470642176",
Oct 11 05:00:03 compute-0 peaceful_ritchie[284415]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=166d0489-2ae7-59eb-961c-c1b5cda4b45a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=30486846-2cc7-4787-8e36-0fb42ef328c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:00:03 compute-0 peaceful_ritchie[284415]:             "lv_uuid": "HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a",
Oct 11 05:00:03 compute-0 peaceful_ritchie[284415]:             "name": "ceph_lv2",
Oct 11 05:00:03 compute-0 peaceful_ritchie[284415]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:00:03 compute-0 peaceful_ritchie[284415]:             "tags": {
Oct 11 05:00:03 compute-0 peaceful_ritchie[284415]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:00:03 compute-0 peaceful_ritchie[284415]:                 "ceph.block_uuid": "HVoWav-sD2n-gA1S-XtWd-2FOw-dCO0-6LNC8a",
Oct 11 05:00:03 compute-0 peaceful_ritchie[284415]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 05:00:03 compute-0 peaceful_ritchie[284415]:                 "ceph.cluster_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 05:00:03 compute-0 peaceful_ritchie[284415]:                 "ceph.cluster_name": "ceph",
Oct 11 05:00:03 compute-0 peaceful_ritchie[284415]:                 "ceph.crush_device_class": "",
Oct 11 05:00:03 compute-0 peaceful_ritchie[284415]:                 "ceph.encrypted": "0",
Oct 11 05:00:03 compute-0 peaceful_ritchie[284415]:                 "ceph.osd_fsid": "30486846-2cc7-4787-8e36-0fb42ef328c5",
Oct 11 05:00:03 compute-0 peaceful_ritchie[284415]:                 "ceph.osd_id": "2",
Oct 11 05:00:03 compute-0 peaceful_ritchie[284415]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:00:03 compute-0 peaceful_ritchie[284415]:                 "ceph.type": "block",
Oct 11 05:00:03 compute-0 peaceful_ritchie[284415]:                 "ceph.vdo": "0"
Oct 11 05:00:03 compute-0 peaceful_ritchie[284415]:             },
Oct 11 05:00:03 compute-0 peaceful_ritchie[284415]:             "type": "block",
Oct 11 05:00:03 compute-0 peaceful_ritchie[284415]:             "vg_name": "ceph_vg2"
Oct 11 05:00:03 compute-0 peaceful_ritchie[284415]:         }
Oct 11 05:00:03 compute-0 peaceful_ritchie[284415]:     ]
Oct 11 05:00:03 compute-0 peaceful_ritchie[284415]: }
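The JSON block printed by peaceful_ritchie is the output of the ceph-volume lvm list --format json call dispatched through cephadm by the sudo line at 05:00:01: one key per OSD id, each entry naming the backing logical volume, the device underneath it, and the ceph.* LV tags. A minimal parsing sketch, assuming the JSON above has been saved to a local file (the file name is illustrative), that extracts the osd_id to device mapping:

    import json

    # Parse the `ceph-volume lvm list --format json` output shown above.
    with open("lvm_list.json") as f:   # assumed capture of the JSON block
        lvm = json.load(f)

    for osd_id, entries in sorted(lvm.items(), key=lambda kv: int(kv[0])):
        for entry in entries:
            tags = entry["tags"]
            print(f"osd.{osd_id}: lv={entry['lv_path']} "
                  f"devices={','.join(entry['devices'])} "
                  f"osd_fsid={tags['ceph.osd_fsid']}")

For this host the loop prints osd.0 on /dev/ceph_vg0/ceph_lv0 (/dev/loop3), osd.1 on /dev/ceph_vg1/ceph_lv1 (/dev/loop4) and osd.2 on /dev/ceph_vg2/ceph_lv2 (/dev/loop5), matching the entries above.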
Oct 11 05:00:03 compute-0 systemd[1]: libpod-3e42dc791fadfb8df3149a4de0ebe388e94308cd0f2882f440f4abf7f64bd651.scope: Deactivated successfully.
Oct 11 05:00:03 compute-0 podman[284396]: 2025-10-11 05:00:03.175834683 +0000 UTC m=+1.011049821 container died 3e42dc791fadfb8df3149a4de0ebe388e94308cd0f2882f440f4abf7f64bd651 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 11 05:00:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-6b01e99fc898c06d55a3059854a5feb1b845add3bc53774c6c2512b5a2b0c620-merged.mount: Deactivated successfully.
Oct 11 05:00:03 compute-0 podman[284396]: 2025-10-11 05:00:03.252662298 +0000 UTC m=+1.087877396 container remove 3e42dc791fadfb8df3149a4de0ebe388e94308cd0f2882f440f4abf7f64bd651 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 11 05:00:03 compute-0 systemd[1]: libpod-conmon-3e42dc791fadfb8df3149a4de0ebe388e94308cd0f2882f440f4abf7f64bd651.scope: Deactivated successfully.
Oct 11 05:00:03 compute-0 sudo[284287]: pam_unix(sudo:session): session closed for user root
Oct 11 05:00:03 compute-0 sudo[284436]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 05:00:03 compute-0 sudo[284436]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 05:00:03 compute-0 sudo[284436]: pam_unix(sudo:session): session closed for user root
Oct 11 05:00:03 compute-0 sudo[284461]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 05:00:03 compute-0 sudo[284461]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 05:00:03 compute-0 sudo[284461]: pam_unix(sudo:session): session closed for user root
Oct 11 05:00:03 compute-0 ceph-mon[74243]: from='client.? 192.168.122.10:0/893450616' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 05:00:03 compute-0 ceph-mon[74243]: from='client.? 192.168.122.10:0/893450616' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 05:00:03 compute-0 sudo[284486]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 05:00:03 compute-0 sudo[284486]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 05:00:03 compute-0 sudo[284486]: pam_unix(sudo:session): session closed for user root
Oct 11 05:00:03 compute-0 sudo[284511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/166d0489-2ae7-59eb-961c-c1b5cda4b45a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 166d0489-2ae7-59eb-961c-c1b5cda4b45a -- raw list --format json
Oct 11 05:00:03 compute-0 sudo[284511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 05:00:04 compute-0 podman[284576]: 2025-10-11 05:00:04.042969012 +0000 UTC m=+0.046137575 container create a0c57ee953b944807d754ce3a35ee169e6d065aab674f60e36f84730518ccc6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_albattani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 11 05:00:04 compute-0 systemd[1]: Started libpod-conmon-a0c57ee953b944807d754ce3a35ee169e6d065aab674f60e36f84730518ccc6f.scope.
Oct 11 05:00:04 compute-0 systemd[1]: Started libcrun container.
Oct 11 05:00:04 compute-0 podman[284576]: 2025-10-11 05:00:04.022886874 +0000 UTC m=+0.026055467 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:00:04 compute-0 podman[284576]: 2025-10-11 05:00:04.118525876 +0000 UTC m=+0.121694439 container init a0c57ee953b944807d754ce3a35ee169e6d065aab674f60e36f84730518ccc6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_albattani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Oct 11 05:00:04 compute-0 podman[284576]: 2025-10-11 05:00:04.129509839 +0000 UTC m=+0.132678392 container start a0c57ee953b944807d754ce3a35ee169e6d065aab674f60e36f84730518ccc6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_albattani, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:00:04 compute-0 practical_albattani[284592]: 167 167
Oct 11 05:00:04 compute-0 podman[284576]: 2025-10-11 05:00:04.132606336 +0000 UTC m=+0.135774889 container attach a0c57ee953b944807d754ce3a35ee169e6d065aab674f60e36f84730518ccc6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_albattani, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:00:04 compute-0 systemd[1]: libpod-a0c57ee953b944807d754ce3a35ee169e6d065aab674f60e36f84730518ccc6f.scope: Deactivated successfully.
Oct 11 05:00:04 compute-0 podman[284576]: 2025-10-11 05:00:04.133549159 +0000 UTC m=+0.136717712 container died a0c57ee953b944807d754ce3a35ee169e6d065aab674f60e36f84730518ccc6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_albattani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 11 05:00:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-bb016c0319cf6edb47f59687aaf116ca9b7cf606de9360442467ec10df0370cb-merged.mount: Deactivated successfully.
Oct 11 05:00:04 compute-0 podman[284576]: 2025-10-11 05:00:04.169971013 +0000 UTC m=+0.173139576 container remove a0c57ee953b944807d754ce3a35ee169e6d065aab674f60e36f84730518ccc6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_albattani, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 11 05:00:04 compute-0 systemd[1]: libpod-conmon-a0c57ee953b944807d754ce3a35ee169e6d065aab674f60e36f84730518ccc6f.scope: Deactivated successfully.
Oct 11 05:00:04 compute-0 podman[284614]: 2025-10-11 05:00:04.392294467 +0000 UTC m=+0.033851250 container create 0c12fcb2a90985d67f12dfd3ce1a55a454fb5f17047d1b89cbebf57fe8d7d8fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_antonelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:00:04 compute-0 systemd[1]: Started libpod-conmon-0c12fcb2a90985d67f12dfd3ce1a55a454fb5f17047d1b89cbebf57fe8d7d8fa.scope.
Oct 11 05:00:04 compute-0 systemd[1]: Started libcrun container.
Oct 11 05:00:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34b9fd41c5303eb3f45e8fef4f9c811d73e41b00c50c151f7ca01138b4d3e569/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:00:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34b9fd41c5303eb3f45e8fef4f9c811d73e41b00c50c151f7ca01138b4d3e569/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:00:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34b9fd41c5303eb3f45e8fef4f9c811d73e41b00c50c151f7ca01138b4d3e569/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:00:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34b9fd41c5303eb3f45e8fef4f9c811d73e41b00c50c151f7ca01138b4d3e569/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:00:04 compute-0 podman[284614]: 2025-10-11 05:00:04.377282615 +0000 UTC m=+0.018839418 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:00:04 compute-0 podman[284614]: 2025-10-11 05:00:04.481821778 +0000 UTC m=+0.123378581 container init 0c12fcb2a90985d67f12dfd3ce1a55a454fb5f17047d1b89cbebf57fe8d7d8fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_antonelli, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:00:04 compute-0 podman[284614]: 2025-10-11 05:00:04.492993425 +0000 UTC m=+0.134550238 container start 0c12fcb2a90985d67f12dfd3ce1a55a454fb5f17047d1b89cbebf57fe8d7d8fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_antonelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 11 05:00:04 compute-0 podman[284614]: 2025-10-11 05:00:04.496433501 +0000 UTC m=+0.137990284 container attach 0c12fcb2a90985d67f12dfd3ce1a55a454fb5f17047d1b89cbebf57fe8d7d8fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_antonelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:00:04 compute-0 ceph-mon[74243]: pgmap v1174: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 05:00:04 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1175: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 05:00:05 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:00:05 compute-0 podman[284656]: 2025-10-11 05:00:05.425869105 +0000 UTC m=+0.074560411 container health_status 4f5a84ad32cb899a70b226fbd9c8f419e9440a5ac99b188805a8bf8e9253a2d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 11 05:00:05 compute-0 reverent_antonelli[284630]: {
Oct 11 05:00:05 compute-0 reverent_antonelli[284630]:     "235849fc-4683-43e5-9b6a-a0d6f8d1cee8": {
Oct 11 05:00:05 compute-0 reverent_antonelli[284630]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 05:00:05 compute-0 reverent_antonelli[284630]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 05:00:05 compute-0 reverent_antonelli[284630]:         "osd_id": 1,
Oct 11 05:00:05 compute-0 reverent_antonelli[284630]:         "osd_uuid": "235849fc-4683-43e5-9b6a-a0d6f8d1cee8",
Oct 11 05:00:05 compute-0 reverent_antonelli[284630]:         "type": "bluestore"
Oct 11 05:00:05 compute-0 reverent_antonelli[284630]:     },
Oct 11 05:00:05 compute-0 reverent_antonelli[284630]:     "29ed28f5-c2da-4c6f-bb64-dc7391248f4a": {
Oct 11 05:00:05 compute-0 reverent_antonelli[284630]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 05:00:05 compute-0 reverent_antonelli[284630]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 05:00:05 compute-0 reverent_antonelli[284630]:         "osd_id": 0,
Oct 11 05:00:05 compute-0 reverent_antonelli[284630]:         "osd_uuid": "29ed28f5-c2da-4c6f-bb64-dc7391248f4a",
Oct 11 05:00:05 compute-0 reverent_antonelli[284630]:         "type": "bluestore"
Oct 11 05:00:05 compute-0 reverent_antonelli[284630]:     },
Oct 11 05:00:05 compute-0 reverent_antonelli[284630]:     "30486846-2cc7-4787-8e36-0fb42ef328c5": {
Oct 11 05:00:05 compute-0 reverent_antonelli[284630]:         "ceph_fsid": "166d0489-2ae7-59eb-961c-c1b5cda4b45a",
Oct 11 05:00:05 compute-0 reverent_antonelli[284630]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 05:00:05 compute-0 reverent_antonelli[284630]:         "osd_id": 2,
Oct 11 05:00:05 compute-0 reverent_antonelli[284630]:         "osd_uuid": "30486846-2cc7-4787-8e36-0fb42ef328c5",
Oct 11 05:00:05 compute-0 reverent_antonelli[284630]:         "type": "bluestore"
Oct 11 05:00:05 compute-0 reverent_antonelli[284630]:     }
Oct 11 05:00:05 compute-0 reverent_antonelli[284630]: }
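The reverent_antonelli block is the companion ceph-volume raw list --format json call from 05:00:03; it is keyed by OSD uuid instead of OSD id and reports the device-mapper path and the bluestore type for each OSD. A small cross-check sketch, again assuming both JSON documents from this log have been saved locally under illustrative file names, verifying that the osd_fsid tags from the LVM listing line up with the osd_uuid keys here:

    import json

    with open("lvm_list.json") as f:   # JSON from the peaceful_ritchie block
        lvm = json.load(f)
    with open("raw_list.json") as f:   # JSON from the reverent_antonelli block
        raw = json.load(f)

    for osd_id, entries in lvm.items():
        fsid = entries[0]["tags"]["ceph.osd_fsid"]
        assert fsid in raw, f"osd.{osd_id} fsid {fsid} missing from raw list"
        assert raw[fsid]["osd_id"] == int(osd_id)
        assert raw[fsid]["type"] == "bluestore"
        # lvm list reports /dev/<vg>/<lv>; raw list reports the equivalent
        # /dev/mapper/<vg>-<lv> device-mapper node.
        print(f"osd.{osd_id}: {entries[0]['lv_path']} <-> {raw[fsid]['device']}")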
Oct 11 05:00:05 compute-0 podman[284652]: 2025-10-11 05:00:05.458850433 +0000 UTC m=+0.112144183 container health_status 981a12d55b51d26e636fa4bc8b08d383168dc6afe664f12473554b2b0c589967 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3)
Oct 11 05:00:05 compute-0 systemd[1]: libpod-0c12fcb2a90985d67f12dfd3ce1a55a454fb5f17047d1b89cbebf57fe8d7d8fa.scope: Deactivated successfully.
Oct 11 05:00:05 compute-0 podman[284614]: 2025-10-11 05:00:05.488371725 +0000 UTC m=+1.129928508 container died 0c12fcb2a90985d67f12dfd3ce1a55a454fb5f17047d1b89cbebf57fe8d7d8fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_antonelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:00:05 compute-0 podman[284657]: 2025-10-11 05:00:05.508423863 +0000 UTC m=+0.155093709 container health_status 086abfbe8dbe9e4742c5d0e8540a7653c1c0a5c00ecda3b6e948040a718938cb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct 11 05:00:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-34b9fd41c5303eb3f45e8fef4f9c811d73e41b00c50c151f7ca01138b4d3e569-merged.mount: Deactivated successfully.
Oct 11 05:00:05 compute-0 podman[284614]: 2025-10-11 05:00:05.569121038 +0000 UTC m=+1.210677821 container remove 0c12fcb2a90985d67f12dfd3ce1a55a454fb5f17047d1b89cbebf57fe8d7d8fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_antonelli, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 11 05:00:05 compute-0 systemd[1]: libpod-conmon-0c12fcb2a90985d67f12dfd3ce1a55a454fb5f17047d1b89cbebf57fe8d7d8fa.scope: Deactivated successfully.
Oct 11 05:00:05 compute-0 sudo[284511]: pam_unix(sudo:session): session closed for user root
Oct 11 05:00:05 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 05:00:05 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 05:00:05 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 05:00:05 compute-0 ceph-mon[74243]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 05:00:05 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev 30024736-3466-42be-8e13-3edcf0a9ad76 does not exist
Oct 11 05:00:05 compute-0 ceph-mgr[74542]: [progress WARNING root] complete: ev dcbfc1fb-7ab6-4c8d-90e6-29823d399aac does not exist
Oct 11 05:00:05 compute-0 sudo[284736]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 05:00:05 compute-0 sudo[284736]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 05:00:05 compute-0 sudo[284736]: pam_unix(sudo:session): session closed for user root
Oct 11 05:00:05 compute-0 sudo[284761]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 05:00:05 compute-0 sudo[284761]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 05:00:05 compute-0 sudo[284761]: pam_unix(sudo:session): session closed for user root
Oct 11 05:00:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 05:00:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:00:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 05:00:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:00:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 11 05:00:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:00:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 11 05:00:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:00:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:00:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:00:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 11 05:00:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:00:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 11 05:00:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:00:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:00:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:00:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 05:00:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:00:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 05:00:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:00:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:00:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:00:06 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 05:00:06 compute-0 ceph-mon[74243]: pgmap v1175: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 05:00:06 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 05:00:06 compute-0 ceph-mon[74243]: from='mgr.14132 192.168.122.100:0/3855071230' entity='mgr.compute-0.phooxi' 
Oct 11 05:00:06 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1176: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 05:00:08 compute-0 ceph-mon[74243]: pgmap v1176: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 05:00:08 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1177: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 05:00:10 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:00:10 compute-0 ceph-mon[74243]: pgmap v1177: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 05:00:10 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1178: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 05:00:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 05:00:11.027 161813 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:00:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 05:00:11.028 161813 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:00:11 compute-0 ovn_metadata_agent[161792]: 2025-10-11 05:00:11.028 161813 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:00:12 compute-0 ceph-mon[74243]: pgmap v1178: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 05:00:12 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1179: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 05:00:14 compute-0 podman[284786]: 2025-10-11 05:00:14.457317912 +0000 UTC m=+0.103539499 container health_status 7d738271425699243ade5f883e5c9759d51b96fd0ce562173990a66d2fbaffdf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct 11 05:00:14 compute-0 ceph-mon[74243]: pgmap v1179: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 05:00:14 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1180: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 05:00:15 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:00:16 compute-0 ceph-mon[74243]: pgmap v1180: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 05:00:16 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1181: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 05:00:18 compute-0 ceph-mon[74243]: pgmap v1181: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 05:00:18 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1182: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 05:00:20 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:00:20 compute-0 ceph-mon[74243]: pgmap v1182: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 05:00:20 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1183: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 05:00:22 compute-0 nova_compute[259400]: 2025-10-11 05:00:22.196 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 05:00:22 compute-0 nova_compute[259400]: 2025-10-11 05:00:22.196 2 DEBUG nova.compute.manager [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 05:00:22 compute-0 nova_compute[259400]: 2025-10-11 05:00:22.196 2 DEBUG nova.compute.manager [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 11 05:00:22 compute-0 nova_compute[259400]: 2025-10-11 05:00:22.225 2 DEBUG nova.compute.manager [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 11 05:00:22 compute-0 nova_compute[259400]: 2025-10-11 05:00:22.225 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 05:00:22 compute-0 nova_compute[259400]: 2025-10-11 05:00:22.225 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 05:00:22 compute-0 nova_compute[259400]: 2025-10-11 05:00:22.226 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 05:00:22 compute-0 nova_compute[259400]: 2025-10-11 05:00:22.226 2 DEBUG nova.compute.manager [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 05:00:22 compute-0 ceph-mon[74243]: pgmap v1183: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 05:00:22 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1184: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 05:00:24 compute-0 nova_compute[259400]: 2025-10-11 05:00:24.197 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 05:00:24 compute-0 nova_compute[259400]: 2025-10-11 05:00:24.219 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 05:00:24 compute-0 ceph-mon[74243]: pgmap v1184: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 05:00:24 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1185: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 05:00:25 compute-0 nova_compute[259400]: 2025-10-11 05:00:25.196 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 05:00:25 compute-0 nova_compute[259400]: 2025-10-11 05:00:25.196 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 05:00:25 compute-0 nova_compute[259400]: 2025-10-11 05:00:25.196 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 05:00:25 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:00:25 compute-0 nova_compute[259400]: 2025-10-11 05:00:25.242 2 DEBUG oslo_concurrency.lockutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:00:25 compute-0 nova_compute[259400]: 2025-10-11 05:00:25.242 2 DEBUG oslo_concurrency.lockutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:00:25 compute-0 nova_compute[259400]: 2025-10-11 05:00:25.242 2 DEBUG oslo_concurrency.lockutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:00:25 compute-0 nova_compute[259400]: 2025-10-11 05:00:25.243 2 DEBUG nova.compute.resource_tracker [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 05:00:25 compute-0 nova_compute[259400]: 2025-10-11 05:00:25.243 2 DEBUG oslo_concurrency.processutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 05:00:25 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:00:25 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1258959644' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:00:25 compute-0 nova_compute[259400]: 2025-10-11 05:00:25.692 2 DEBUG oslo_concurrency.processutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 05:00:25 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/1258959644' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:00:25 compute-0 nova_compute[259400]: 2025-10-11 05:00:25.945 2 WARNING nova.virt.libvirt.driver [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 05:00:25 compute-0 nova_compute[259400]: 2025-10-11 05:00:25.947 2 DEBUG nova.compute.resource_tracker [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4980MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 05:00:25 compute-0 nova_compute[259400]: 2025-10-11 05:00:25.947 2 DEBUG oslo_concurrency.lockutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:00:25 compute-0 nova_compute[259400]: 2025-10-11 05:00:25.948 2 DEBUG oslo_concurrency.lockutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:00:26 compute-0 nova_compute[259400]: 2025-10-11 05:00:26.030 2 DEBUG nova.compute.resource_tracker [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 05:00:26 compute-0 nova_compute[259400]: 2025-10-11 05:00:26.031 2 DEBUG nova.compute.resource_tracker [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 05:00:26 compute-0 nova_compute[259400]: 2025-10-11 05:00:26.047 2 DEBUG oslo_concurrency.processutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 05:00:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:00:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:00:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:00:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:00:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:00:26 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:00:26 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:00:26 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2469559356' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:00:26 compute-0 nova_compute[259400]: 2025-10-11 05:00:26.457 2 DEBUG oslo_concurrency.processutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.410s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 05:00:26 compute-0 nova_compute[259400]: 2025-10-11 05:00:26.465 2 DEBUG nova.compute.provider_tree [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Inventory has not changed in ProviderTree for provider: 1f05a244-23b6-4149-9b5a-a525e5860d18 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 05:00:26 compute-0 nova_compute[259400]: 2025-10-11 05:00:26.486 2 DEBUG nova.scheduler.client.report [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Inventory has not changed for provider 1f05a244-23b6-4149-9b5a-a525e5860d18 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 05:00:26 compute-0 nova_compute[259400]: 2025-10-11 05:00:26.488 2 DEBUG nova.compute.resource_tracker [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 05:00:26 compute-0 nova_compute[259400]: 2025-10-11 05:00:26.489 2 DEBUG oslo_concurrency.lockutils [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.541s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:00:26 compute-0 ceph-mon[74243]: pgmap v1185: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 05:00:26 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/2469559356' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:00:26 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1186: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 05:00:28 compute-0 nova_compute[259400]: 2025-10-11 05:00:28.489 2 DEBUG oslo_service.periodic_task [None req-0cc217e1-ea00-4246-bcf6-777092f39443 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 05:00:28 compute-0 ceph-mon[74243]: pgmap v1186: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 05:00:28 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1187: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 05:00:30 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:00:30 compute-0 ceph-mon[74243]: pgmap v1187: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 05:00:30 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1188: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 05:00:32 compute-0 ceph-mon[74243]: pgmap v1188: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 05:00:32 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1189: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 05:00:34 compute-0 ceph-mon[74243]: pgmap v1189: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 05:00:34 compute-0 sshd-session[284848]: Accepted publickey for zuul from 192.168.122.10 port 43300 ssh2: ECDSA SHA256:fsXfQ2jtOYwhi5zevC2AykH2PVPp1BPcbFOVNPoZIaQ
Oct 11 05:00:34 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1190: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 05:00:34 compute-0 systemd-logind[801]: New session 57 of user zuul.
Oct 11 05:00:34 compute-0 systemd[1]: Started Session 57 of User zuul.
Oct 11 05:00:34 compute-0 sshd-session[284848]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 11 05:00:35 compute-0 sudo[284852]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp -p container,openstack_edpm,system,storage,virt'
Oct 11 05:00:35 compute-0 sudo[284852]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 05:00:35 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:00:36 compute-0 podman[284889]: 2025-10-11 05:00:36.273905369 +0000 UTC m=+0.112408909 container health_status 981a12d55b51d26e636fa4bc8b08d383168dc6afe664f12473554b2b0c589967 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 05:00:36 compute-0 podman[284887]: 2025-10-11 05:00:36.297223508 +0000 UTC m=+0.137605615 container health_status 4f5a84ad32cb899a70b226fbd9c8f419e9440a5ac99b188805a8bf8e9253a2d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=iscsid, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid)
Oct 11 05:00:36 compute-0 podman[284886]: 2025-10-11 05:00:36.324754411 +0000 UTC m=+0.167682291 container health_status 086abfbe8dbe9e4742c5d0e8540a7653c1c0a5c00ecda3b6e948040a718938cb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 11 05:00:36 compute-0 ceph-mon[74243]: pgmap v1190: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 05:00:36 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1191: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 05:00:37 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.15031 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 05:00:38 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.15033 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 05:00:38 compute-0 ceph-mon[74243]: pgmap v1191: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 05:00:38 compute-0 ceph-mon[74243]: from='client.15031 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 05:00:38 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1192: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 05:00:38 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Oct 11 05:00:38 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2348098867' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 11 05:00:39 compute-0 ceph-mon[74243]: from='client.15033 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 05:00:39 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/2348098867' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 11 05:00:40 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:00:40 compute-0 ceph-mon[74243]: pgmap v1192: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 05:00:40 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1193: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 05:00:41 compute-0 ovs-vsctl[285202]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Oct 11 05:00:42 compute-0 virtqemud[259122]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Oct 11 05:00:42 compute-0 ceph-mon[74243]: pgmap v1193: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 05:00:42 compute-0 virtqemud[259122]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Oct 11 05:00:42 compute-0 virtqemud[259122]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Oct 11 05:00:42 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1194: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 05:00:43 compute-0 ceph-mds[100196]: mds.cephfs.compute-0.jbpltj asok_command: cache status {prefix=cache status} (starting...)
Oct 11 05:00:43 compute-0 ceph-mds[100196]: mds.cephfs.compute-0.jbpltj asok_command: client ls {prefix=client ls} (starting...)
Oct 11 05:00:43 compute-0 lvm[285536]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 11 05:00:43 compute-0 lvm[285536]: VG ceph_vg0 finished
Oct 11 05:00:43 compute-0 lvm[285558]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Oct 11 05:00:43 compute-0 lvm[285558]: VG ceph_vg1 finished
Oct 11 05:00:43 compute-0 lvm[285569]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Oct 11 05:00:43 compute-0 lvm[285569]: VG ceph_vg2 finished
Oct 11 05:00:44 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.15037 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 05:00:44 compute-0 ceph-mds[100196]: mds.cephfs.compute-0.jbpltj asok_command: damage ls {prefix=damage ls} (starting...)
Oct 11 05:00:44 compute-0 ceph-mds[100196]: mds.cephfs.compute-0.jbpltj asok_command: dump loads {prefix=dump loads} (starting...)
Oct 11 05:00:44 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.15039 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 05:00:44 compute-0 ceph-mds[100196]: mds.cephfs.compute-0.jbpltj asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Oct 11 05:00:44 compute-0 ceph-mds[100196]: mds.cephfs.compute-0.jbpltj asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Oct 11 05:00:44 compute-0 ceph-mon[74243]: pgmap v1194: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 05:00:44 compute-0 ceph-mds[100196]: mds.cephfs.compute-0.jbpltj asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Oct 11 05:00:44 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0) v1
Oct 11 05:00:44 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1784726057' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 11 05:00:44 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1195: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 05:00:45 compute-0 ceph-mds[100196]: mds.cephfs.compute-0.jbpltj asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Oct 11 05:00:45 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.15045 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 05:00:45 compute-0 ceph-mgr[74542]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 11 05:00:45 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mgr-compute-0-phooxi[74538]: 2025-10-11T05:00:45.199+0000 7fe5f0a93640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 11 05:00:45 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:00:45 compute-0 ceph-mds[100196]: mds.cephfs.compute-0.jbpltj asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Oct 11 05:00:45 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:00:45 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1762264259' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:00:45 compute-0 ceph-mds[100196]: mds.cephfs.compute-0.jbpltj asok_command: get subtrees {prefix=get subtrees} (starting...)
Oct 11 05:00:45 compute-0 podman[285783]: 2025-10-11 05:00:45.432467169 +0000 UTC m=+0.081641956 container health_status 7d738271425699243ade5f883e5c9759d51b96fd0ce562173990a66d2fbaffdf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 11 05:00:45 compute-0 ceph-mds[100196]: mds.cephfs.compute-0.jbpltj asok_command: ops {prefix=ops} (starting...)
Oct 11 05:00:45 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0) v1
Oct 11 05:00:45 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2246435858' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 11 05:00:45 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Oct 11 05:00:45 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4105958954' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 11 05:00:45 compute-0 ceph-mon[74243]: from='client.15037 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 05:00:45 compute-0 ceph-mon[74243]: from='client.15039 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 05:00:45 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/1784726057' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 11 05:00:45 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/1762264259' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:00:45 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/2246435858' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 11 05:00:45 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/4105958954' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 11 05:00:46 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Oct 11 05:00:46 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2854534328' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 11 05:00:46 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Oct 11 05:00:46 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/655324899' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 11 05:00:46 compute-0 ceph-mds[100196]: mds.cephfs.compute-0.jbpltj asok_command: session ls {prefix=session ls} (starting...)
Oct 11 05:00:46 compute-0 ceph-mds[100196]: mds.cephfs.compute-0.jbpltj asok_command: status {prefix=status} (starting...)
Oct 11 05:00:46 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.15059 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 05:00:46 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Oct 11 05:00:46 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2710848355' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 11 05:00:46 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.15062 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 05:00:46 compute-0 ceph-mon[74243]: pgmap v1195: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 05:00:46 compute-0 ceph-mon[74243]: from='client.15045 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 05:00:46 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/2854534328' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 11 05:00:46 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/655324899' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 11 05:00:46 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/2710848355' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 11 05:00:46 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Oct 11 05:00:46 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/508968381' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 11 05:00:46 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1196: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 05:00:47 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0) v1
Oct 11 05:00:47 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2757905410' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 11 05:00:47 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct 11 05:00:47 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2880992714' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 11 05:00:47 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Oct 11 05:00:47 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2749273310' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 11 05:00:47 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Oct 11 05:00:47 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4046538637' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 11 05:00:47 compute-0 ceph-mon[74243]: from='client.15059 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 05:00:47 compute-0 ceph-mon[74243]: from='client.15062 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 05:00:47 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/508968381' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 11 05:00:47 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/2757905410' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 11 05:00:47 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/2880992714' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 11 05:00:47 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/2749273310' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 11 05:00:47 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/4046538637' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 11 05:00:48 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.15073 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 05:00:48 compute-0 ceph-mgr[74542]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 11 05:00:48 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mgr-compute-0-phooxi[74538]: 2025-10-11T05:00:48.076+0000 7fe5f0a93640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 11 05:00:48 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Oct 11 05:00:48 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2401853960' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 11 05:00:48 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Oct 11 05:00:48 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2106112301' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 11 05:00:48 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.15079 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 05:00:48 compute-0 ceph-mon[74243]: pgmap v1196: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 05:00:48 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/2401853960' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 11 05:00:48 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/2106112301' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 11 05:00:48 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.15083 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 05:00:48 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Oct 11 05:00:48 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3689593204' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 11 05:00:48 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1197: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 05:00:49 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.15085 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 05:00:49 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Oct 11 05:00:49 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3131099242' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 11 05:00:49 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.15089 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 11 05:00:49 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Oct 11 05:00:49 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3572303202' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 11 05:00:49 compute-0 ceph-mon[74243]: from='client.15073 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 05:00:49 compute-0 ceph-mon[74243]: from='client.15079 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 05:00:49 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/3689593204' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 11 05:00:49 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/3131099242' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 11 05:00:49 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/3572303202' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 754074 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65462272 unmapped: 540672 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 111 handle_osd_map epochs [112,112], i have 111, src has [1,112]
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 111 handle_osd_map epochs [112,112], i have 112, src has [1,112]
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 112 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=111) [1]/[2] async=[1] r=0 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.998705 3 0.000102
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 112 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=111) [1]/[2] async=[1] r=0 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.001848 0 0.000000
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 112 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=111) [1]/[2] async=[1] r=0 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 112 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=111/112 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=111) [1]/[2] async=[1] r=0 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 0'0 activating+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/Activating
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 85) v1
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:29:35.619066+0000 osd.2 (osd.2) 84 : cluster [DBG] 3.1d scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:29:35.633160+0000 osd.2 (osd.2) 85 : cluster [DBG] 3.1d scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 112 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=111/112 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=111) [1]/[2] async=[1] r=0 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 112 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=111/112 n=6 ec=49/34 lis/c=111/67 les/c/f=112/68/0 sis=111) [1]/[2] async=[1] r=0 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/Activating 0.007614 5 0.000424
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 112 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=111/112 n=6 ec=49/34 lis/c=111/67 les/c/f=112/68/0 sis=111) [1]/[2] async=[1] r=0 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 112 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=111/112 n=6 ec=49/34 lis/c=111/67 les/c/f=112/68/0 sis=111) [1]/[2] async=[1] r=0 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000125 1 0.000113
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 112 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=111/112 n=6 ec=49/34 lis/c=111/67 les/c/f=112/68/0 sis=111) [1]/[2] async=[1] r=0 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 112 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=111/112 n=6 ec=49/34 lis/c=111/67 les/c/f=112/68/0 sis=111) [1]/[2] async=[1] r=0 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000583 1 0.000070
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 112 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=111/112 n=6 ec=49/34 lis/c=111/67 les/c/f=112/68/0 sis=111) [1]/[2] async=[1] r=0 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/Recovering
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 112 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=109/110 n=6 ec=49/34 lis/c=109/67 les/c/f=110/68/0 sis=111) [0] r=-1 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.021917 7 0.000380
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 112 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=109/110 n=6 ec=49/34 lis/c=109/67 les/c/f=110/68/0 sis=111) [0] r=-1 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 112 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=109/110 n=6 ec=49/34 lis/c=109/67 les/c/f=110/68/0 sis=111) [0] r=-1 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 112 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=111/112 n=6 ec=49/34 lis/c=111/67 les/c/f=112/68/0 sis=111) [1]/[2] async=[1] r=0 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 41'581 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.087342 2 0.000098
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 112 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=111/112 n=6 ec=49/34 lis/c=111/67 les/c/f=112/68/0 sis=111) [1]/[2] async=[1] r=0 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 41'581 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 112 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=109/110 n=6 ec=49/34 lis/c=109/67 les/c/f=110/68/0 sis=111) [0] r=-1 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.073577 1 0.000071
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 112 pg[9.1e( v 41'581 (0'0,41'581] local-lis/les=109/110 n=6 ec=49/34 lis/c=109/67 les/c/f=110/68/0 sis=111) [0] r=-1 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: not registered w/ OSD
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 112 pg[9.1e( v 41'581 (0'0,41'581] lb MIN local-lis/les=109/110 n=6 ec=49/34 lis/c=109/67 les/c/f=110/68/0 sis=111) [0] r=-1 lpr=111 DELETING pi=[67,111)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.052015 2 0.000338
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 112 pg[9.1e( v 41'581 (0'0,41'581] lb MIN local-lis/les=109/110 n=6 ec=49/34 lis/c=109/67 les/c/f=110/68/0 sis=111) [0] r=-1 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.125733 0 0.000000
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 112 pg[9.1e( v 41'581 (0'0,41'581] lb MIN local-lis/les=109/110 n=6 ec=49/34 lis/c=109/67 les/c/f=110/68/0 sis=111) [0] r=-1 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.147869 0 0.000000
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: not registered w/ OSD
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:07.344040+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 87 sent 85 num 2 unsent 2 sending 2
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:29:36.591088+0000 osd.2 (osd.2) 86 : cluster [DBG] 3.1e scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:29:36.605301+0000 osd.2 (osd.2) 87 : cluster [DBG] 3.1e scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 8.15 deep-scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 8.15 deep-scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65519616 unmapped: 483328 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 112 handle_osd_map epochs [112,113], i have 112, src has [1,113]
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 112 handle_osd_map epochs [113,113], i have 113, src has [1,113]
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 113 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=111/112 n=6 ec=49/34 lis/c=111/67 les/c/f=112/68/0 sis=111) [1]/[2] async=[1] r=0 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 41'581 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.937553 1 0.000114
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 113 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=111/112 n=6 ec=49/34 lis/c=111/67 les/c/f=112/68/0 sis=111) [1]/[2] async=[1] r=0 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 41'581 active+remapped mbc={255={}}] exit Started/Primary/Active 1.033595 0 0.000000
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 113 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=111/112 n=6 ec=49/34 lis/c=111/67 les/c/f=112/68/0 sis=111) [1]/[2] async=[1] r=0 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 41'581 active+remapped mbc={255={}}] exit Started/Primary 2.035501 0 0.000000
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 113 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=111/112 n=6 ec=49/34 lis/c=111/67 les/c/f=112/68/0 sis=111) [1]/[2] async=[1] r=0 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 41'581 active+remapped mbc={255={}}] exit Started 2.035545 0 0.000000
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 113 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=111/112 n=6 ec=49/34 lis/c=111/67 les/c/f=112/68/0 sis=111) [1]/[2] async=[1] r=0 lpr=111 pi=[67,111)/1 crt=41'581 mlcod 41'581 active+remapped mbc={255={}}] enter Reset
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 113 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=111/112 n=6 ec=49/34 lis/c=111/67 les/c/f=112/68/0 sis=113 pruub=14.973883629s) [1] async=[1] r=-1 lpr=113 pi=[67,113)/1 crt=41'581 mlcod 41'581 active pruub 171.660278320s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 113 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=111/112 n=6 ec=49/34 lis/c=111/67 les/c/f=112/68/0 sis=113 pruub=14.973669052s) [1] r=-1 lpr=113 pi=[67,113)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 171.660278320s@ mbc={}] exit Reset 0.000297 1 0.000426
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 113 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=111/112 n=6 ec=49/34 lis/c=111/67 les/c/f=112/68/0 sis=113 pruub=14.973669052s) [1] r=-1 lpr=113 pi=[67,113)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 171.660278320s@ mbc={}] enter Started
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 113 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=111/112 n=6 ec=49/34 lis/c=111/67 les/c/f=112/68/0 sis=113 pruub=14.973669052s) [1] r=-1 lpr=113 pi=[67,113)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 171.660278320s@ mbc={}] enter Start
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 113 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=111/112 n=6 ec=49/34 lis/c=111/67 les/c/f=112/68/0 sis=113 pruub=14.973669052s) [1] r=-1 lpr=113 pi=[67,113)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 171.660278320s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 87) v1
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:29:36.591088+0000 osd.2 (osd.2) 86 : cluster [DBG] 3.1e scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:29:36.605301+0000 osd.2 (osd.2) 87 : cluster [DBG] 3.1e scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 113 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=111/112 n=6 ec=49/34 lis/c=111/67 les/c/f=112/68/0 sis=113 pruub=14.973669052s) [1] r=-1 lpr=113 pi=[67,113)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 171.660278320s@ mbc={}] exit Start 0.000104 0 0.000000
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 113 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=111/112 n=6 ec=49/34 lis/c=111/67 les/c/f=112/68/0 sis=113 pruub=14.973669052s) [1] r=-1 lpr=113 pi=[67,113)/1 crt=41'581 mlcod 0'0 unknown NOTIFY pruub 171.660278320s@ mbc={}] enter Started/Stray
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 113 handle_osd_map epochs [113,113], i have 113, src has [1,113]
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:08.344585+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 89 sent 87 num 2 unsent 2 sending 2
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:29:37.577050+0000 osd.2 (osd.2) 88 : cluster [DBG] 8.15 deep-scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:29:37.591282+0000 osd.2 (osd.2) 89 : cluster [DBG] 8.15 deep-scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65527808 unmapped: 475136 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 89) v1
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:29:37.577050+0000 osd.2 (osd.2) 88 : cluster [DBG] 8.15 deep-scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:29:37.591282+0000 osd.2 (osd.2) 89 : cluster [DBG] 8.15 deep-scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:09.345210+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _renew_subs
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 113 handle_osd_map epochs [114,114], i have 113, src has [1,114]
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 114 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=111/112 n=6 ec=49/34 lis/c=111/67 les/c/f=112/68/0 sis=113) [1] r=-1 lpr=113 pi=[67,113)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.559062 6 0.000318
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 114 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=111/112 n=6 ec=49/34 lis/c=111/67 les/c/f=112/68/0 sis=113) [1] r=-1 lpr=113 pi=[67,113)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 114 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=111/112 n=6 ec=49/34 lis/c=111/67 les/c/f=112/68/0 sis=113) [1] r=-1 lpr=113 pi=[67,113)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 114 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=111/112 n=6 ec=49/34 lis/c=111/67 les/c/f=112/68/0 sis=113) [1] r=-1 lpr=113 pi=[67,113)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000777 1 0.000077
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 114 pg[9.1f( v 41'581 (0'0,41'581] local-lis/les=111/112 n=6 ec=49/34 lis/c=111/67 les/c/f=112/68/0 sis=113) [1] r=-1 lpr=113 pi=[67,113)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 114 pg[9.1f( v 41'581 (0'0,41'581] lb MIN local-lis/les=111/112 n=6 ec=49/34 lis/c=111/67 les/c/f=112/68/0 sis=113) [1] r=-1 lpr=113 DELETING pi=[67,113)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.050146 3 0.000203
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 114 pg[9.1f( v 41'581 (0'0,41'581] lb MIN local-lis/les=111/112 n=6 ec=49/34 lis/c=111/67 les/c/f=112/68/0 sis=113) [1] r=-1 lpr=113 pi=[67,113)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.051003 0 0.000000
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 pg_epoch: 114 pg[9.1f( v 41'581 (0'0,41'581] lb MIN local-lis/les=111/112 n=6 ec=49/34 lis/c=111/67 les/c/f=112/68/0 sis=113) [1] r=-1 lpr=113 pi=[67,113)/1 crt=41'581 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.610282 0 0.000000
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcedb000/0x0/0x4ffc00000, data 0xab3b1/0x151000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65568768 unmapped: 434176 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:10.345698+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65609728 unmapped: 393216 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:11.346122+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 11.15 scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 11.15 scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 742780 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65601536 unmapped: 401408 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:12.346365+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 91 sent 89 num 2 unsent 2 sending 2
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:29:41.474610+0000 osd.2 (osd.2) 90 : cluster [DBG] 11.15 scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:29:41.488706+0000 osd.2 (osd.2) 91 : cluster [DBG] 11.15 scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65609728 unmapped: 393216 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 91) v1
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:29:41.474610+0000 osd.2 (osd.2) 90 : cluster [DBG] 11.15 scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:29:41.488706+0000 osd.2 (osd.2) 91 : cluster [DBG] 11.15 scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcedb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:13.346688+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65544192 unmapped: 458752 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcedb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:14.347100+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 8.11 scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 8.11 scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65552384 unmapped: 450560 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:15.347447+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 93 sent 91 num 2 unsent 2 sending 2
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:29:44.508751+0000 osd.2 (osd.2) 92 : cluster [DBG] 8.11 scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:29:44.526476+0000 osd.2 (osd.2) 93 : cluster [DBG] 8.11 scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65560576 unmapped: 442368 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 93) v1
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:29:44.508751+0000 osd.2 (osd.2) 92 : cluster [DBG] 8.11 scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:29:44.526476+0000 osd.2 (osd.2) 93 : cluster [DBG] 8.11 scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:16.347692+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 95 sent 93 num 2 unsent 2 sending 2
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:29:45.513163+0000 osd.2 (osd.2) 94 : cluster [DBG] 11.11 scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:29:45.527365+0000 osd.2 (osd.2) 95 : cluster [DBG] 11.11 scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 8.12 scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.405490875s of 10.618983269s, submitted: 38
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 8.12 scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746225 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65576960 unmapped: 425984 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 95) v1
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:29:45.513163+0000 osd.2 (osd.2) 94 : cluster [DBG] 11.11 scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:29:45.527365+0000 osd.2 (osd.2) 95 : cluster [DBG] 11.11 scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:17.348004+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 97 sent 95 num 2 unsent 2 sending 2
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:29:46.552664+0000 osd.2 (osd.2) 96 : cluster [DBG] 8.12 scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:29:46.566831+0000 osd.2 (osd.2) 97 : cluster [DBG] 8.12 scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65576960 unmapped: 425984 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 97) v1
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:29:46.552664+0000 osd.2 (osd.2) 96 : cluster [DBG] 8.12 scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:29:46.566831+0000 osd.2 (osd.2) 97 : cluster [DBG] 8.12 scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:18.348266+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcedb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65585152 unmapped: 417792 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:19.348522+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 99 sent 97 num 2 unsent 2 sending 2
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:29:48.505433+0000 osd.2 (osd.2) 98 : cluster [DBG] 3.18 scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:29:48.518766+0000 osd.2 (osd.2) 99 : cluster [DBG] 3.18 scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcedb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65593344 unmapped: 409600 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 99) v1
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:29:48.505433+0000 osd.2 (osd.2) 98 : cluster [DBG] 3.18 scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:29:48.518766+0000 osd.2 (osd.2) 99 : cluster [DBG] 3.18 scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:20.348774+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 101 sent 99 num 2 unsent 2 sending 2
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:29:49.465899+0000 osd.2 (osd.2) 100 : cluster [DBG] 7.1c scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:29:49.480226+0000 osd.2 (osd.2) 101 : cluster [DBG] 7.1c scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65593344 unmapped: 409600 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 101) v1
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:29:49.465899+0000 osd.2 (osd.2) 100 : cluster [DBG] 7.1c scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:29:49.480226+0000 osd.2 (osd.2) 101 : cluster [DBG] 7.1c scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:21.349029+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 748521 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65593344 unmapped: 409600 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:22.349197+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65601536 unmapped: 401408 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:23.349534+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 103 sent 101 num 2 unsent 2 sending 2
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:29:52.476720+0000 osd.2 (osd.2) 102 : cluster [DBG] 7.2 scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:29:52.490850+0000 osd.2 (osd.2) 103 : cluster [DBG] 7.2 scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65601536 unmapped: 401408 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 103) v1
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:29:52.476720+0000 osd.2 (osd.2) 102 : cluster [DBG] 7.2 scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:29:52.490850+0000 osd.2 (osd.2) 103 : cluster [DBG] 7.2 scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:24.349730+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65609728 unmapped: 393216 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcedb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:25.349904+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65609728 unmapped: 393216 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:26.350066+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 105 sent 103 num 2 unsent 2 sending 2
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:29:55.498397+0000 osd.2 (osd.2) 104 : cluster [DBG] 7.1 scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:29:55.512534+0000 osd.2 (osd.2) 105 : cluster [DBG] 7.1 scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 750815 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65617920 unmapped: 385024 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 105) v1
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:29:55.498397+0000 osd.2 (osd.2) 104 : cluster [DBG] 7.1 scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:29:55.512534+0000 osd.2 (osd.2) 105 : cluster [DBG] 7.1 scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:27.350245+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 11.b scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.958892822s of 10.998286247s, submitted: 10
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 11.b scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65650688 unmapped: 352256 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:28.350399+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 107 sent 105 num 2 unsent 2 sending 2
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:29:57.551158+0000 osd.2 (osd.2) 106 : cluster [DBG] 11.b scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:29:57.565258+0000 osd.2 (osd.2) 107 : cluster [DBG] 11.b scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65650688 unmapped: 352256 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:29.350663+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  log_queue is 4 last_log 109 sent 107 num 4 unsent 2 sending 2
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:29:58.543885+0000 osd.2 (osd.2) 108 : cluster [DBG] 7.5 scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:29:58.557977+0000 osd.2 (osd.2) 109 : cluster [DBG] 7.5 scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 107) v1
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:29:57.551158+0000 osd.2 (osd.2) 106 : cluster [DBG] 11.b scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:29:57.565258+0000 osd.2 (osd.2) 107 : cluster [DBG] 11.b scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 109) v1
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:29:58.543885+0000 osd.2 (osd.2) 108 : cluster [DBG] 7.5 scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:29:58.557977+0000 osd.2 (osd.2) 109 : cluster [DBG] 7.5 scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65658880 unmapped: 344064 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:30.350909+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65658880 unmapped: 344064 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:31.351069+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 753110 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65667072 unmapped: 335872 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:32.351317+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65675264 unmapped: 327680 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:33.351562+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65675264 unmapped: 327680 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:34.351706+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 11.d scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 11.d scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65683456 unmapped: 319488 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:35.351962+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 111 sent 109 num 2 unsent 2 sending 2
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:30:04.481501+0000 osd.2 (osd.2) 110 : cluster [DBG] 11.d scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:30:04.495622+0000 osd.2 (osd.2) 111 : cluster [DBG] 11.d scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 111) v1
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:30:04.481501+0000 osd.2 (osd.2) 110 : cluster [DBG] 11.d scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:30:04.495622+0000 osd.2 (osd.2) 111 : cluster [DBG] 11.d scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65683456 unmapped: 319488 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:36.352160+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 754258 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65691648 unmapped: 311296 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:37.352378+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65699840 unmapped: 303104 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:38.352605+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65708032 unmapped: 294912 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:39.352764+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65708032 unmapped: 294912 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:40.352853+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.896081924s of 12.917864799s, submitted: 6
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65716224 unmapped: 286720 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:41.352971+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 113 sent 111 num 2 unsent 2 sending 2
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:30:10.469120+0000 osd.2 (osd.2) 112 : cluster [DBG] 11.9 scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:30:10.486821+0000 osd.2 (osd.2) 113 : cluster [DBG] 11.9 scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 7.e scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 7.e scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 113) v1
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:30:10.469120+0000 osd.2 (osd.2) 112 : cluster [DBG] 11.9 scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:30:10.486821+0000 osd.2 (osd.2) 113 : cluster [DBG] 11.9 scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 756553 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65716224 unmapped: 286720 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:42.353171+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 115 sent 113 num 2 unsent 2 sending 2
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:30:11.426990+0000 osd.2 (osd.2) 114 : cluster [DBG] 7.e scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:30:11.441118+0000 osd.2 (osd.2) 115 : cluster [DBG] 7.e scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 7.c scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 115) v1
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:30:11.426990+0000 osd.2 (osd.2) 114 : cluster [DBG] 7.e scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:30:11.441118+0000 osd.2 (osd.2) 115 : cluster [DBG] 7.e scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 7.c scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65724416 unmapped: 278528 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:43.353383+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 117 sent 115 num 2 unsent 2 sending 2
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:30:12.456565+0000 osd.2 (osd.2) 116 : cluster [DBG] 7.c scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:30:12.470558+0000 osd.2 (osd.2) 117 : cluster [DBG] 7.c scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 117) v1
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:30:12.456565+0000 osd.2 (osd.2) 116 : cluster [DBG] 7.c scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:30:12.470558+0000 osd.2 (osd.2) 117 : cluster [DBG] 7.c scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65732608 unmapped: 270336 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:44.353535+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65732608 unmapped: 270336 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:45.353825+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65732608 unmapped: 270336 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:46.353947+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 757700 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65732608 unmapped: 270336 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:47.354076+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65757184 unmapped: 245760 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:48.354213+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65757184 unmapped: 245760 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:49.354386+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 11.3 scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 11.3 scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65773568 unmapped: 229376 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:50.354529+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 119 sent 117 num 2 unsent 2 sending 2
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:30:19.469498+0000 osd.2 (osd.2) 118 : cluster [DBG] 11.3 scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:30:19.483592+0000 osd.2 (osd.2) 119 : cluster [DBG] 11.3 scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.956985474s of 10.002155304s, submitted: 10
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 119) v1
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:30:19.469498+0000 osd.2 (osd.2) 118 : cluster [DBG] 11.3 scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:30:19.483592+0000 osd.2 (osd.2) 119 : cluster [DBG] 11.3 scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65773568 unmapped: 229376 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:51.355015+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 121 sent 119 num 2 unsent 2 sending 2
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:30:20.456889+0000 osd.2 (osd.2) 120 : cluster [DBG] 11.2 scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:30:20.470978+0000 osd.2 (osd.2) 121 : cluster [DBG] 11.2 scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 11.8 scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 11.8 scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 761144 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 121) v1
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:30:20.456889+0000 osd.2 (osd.2) 120 : cluster [DBG] 11.2 scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:30:20.470978+0000 osd.2 (osd.2) 121 : cluster [DBG] 11.2 scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65781760 unmapped: 221184 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:52.355218+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 123 sent 121 num 2 unsent 2 sending 2
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:30:21.504797+0000 osd.2 (osd.2) 122 : cluster [DBG] 11.8 scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:30:21.518897+0000 osd.2 (osd.2) 123 : cluster [DBG] 11.8 scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 8.2 scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 8.2 scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 123) v1
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:30:21.504797+0000 osd.2 (osd.2) 122 : cluster [DBG] 11.8 scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:30:21.518897+0000 osd.2 (osd.2) 123 : cluster [DBG] 11.8 scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65781760 unmapped: 221184 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:53.355442+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 125 sent 123 num 2 unsent 2 sending 2
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:30:22.533667+0000 osd.2 (osd.2) 124 : cluster [DBG] 8.2 scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:30:22.547784+0000 osd.2 (osd.2) 125 : cluster [DBG] 8.2 scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 125) v1
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:30:22.533667+0000 osd.2 (osd.2) 124 : cluster [DBG] 8.2 scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:30:22.547784+0000 osd.2 (osd.2) 125 : cluster [DBG] 8.2 scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65789952 unmapped: 212992 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:54.356141+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65789952 unmapped: 212992 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:55.356280+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65789952 unmapped: 212992 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:56.356432+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 763438 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66838528 unmapped: 212992 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:57.356559+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 127 sent 125 num 2 unsent 2 sending 2
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:30:26.608859+0000 osd.2 (osd.2) 126 : cluster [DBG] 7.8 scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:30:26.622950+0000 osd.2 (osd.2) 127 : cluster [DBG] 7.8 scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 127) v1
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:30:26.608859+0000 osd.2 (osd.2) 126 : cluster [DBG] 7.8 scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:30:26.622950+0000 osd.2 (osd.2) 127 : cluster [DBG] 7.8 scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66846720 unmapped: 204800 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:58.356745+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 8.d scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 8.d scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65798144 unmapped: 1253376 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:59.356864+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 129 sent 127 num 2 unsent 2 sending 2
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:30:28.669354+0000 osd.2 (osd.2) 128 : cluster [DBG] 8.d scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:30:28.683345+0000 osd.2 (osd.2) 129 : cluster [DBG] 8.d scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 129) v1
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:30:28.669354+0000 osd.2 (osd.2) 128 : cluster [DBG] 8.d scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:30:28.683345+0000 osd.2 (osd.2) 129 : cluster [DBG] 8.d scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65806336 unmapped: 1245184 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:00.357046+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65806336 unmapped: 1245184 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:01.357177+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 3.e scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.152532578s of 11.190190315s, submitted: 8
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 3.e scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765732 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65814528 unmapped: 1236992 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:02.357317+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 131 sent 129 num 2 unsent 2 sending 2
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:30:31.661526+0000 osd.2 (osd.2) 130 : cluster [DBG] 3.e scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:30:31.675674+0000 osd.2 (osd.2) 131 : cluster [DBG] 3.e scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 131) v1
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:30:31.661526+0000 osd.2 (osd.2) 130 : cluster [DBG] 3.e scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:30:31.675674+0000 osd.2 (osd.2) 131 : cluster [DBG] 3.e scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65814528 unmapped: 1236992 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:03.357569+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65814528 unmapped: 1236992 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:04.357690+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:05.357859+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65822720 unmapped: 1228800 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:06.357982+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65822720 unmapped: 1228800 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 7.a deep-scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 7.a deep-scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766879 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:07.358151+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 133 sent 131 num 2 unsent 2 sending 2
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:30:36.669792+0000 osd.2 (osd.2) 132 : cluster [DBG] 7.a deep-scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:30:36.683896+0000 osd.2 (osd.2) 133 : cluster [DBG] 7.a deep-scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65847296 unmapped: 1204224 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 133) v1
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:30:36.669792+0000 osd.2 (osd.2) 132 : cluster [DBG] 7.a deep-scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:30:36.683896+0000 osd.2 (osd.2) 133 : cluster [DBG] 7.a deep-scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:08.358379+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65863680 unmapped: 1187840 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:09.358575+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65880064 unmapped: 1171456 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:10.358737+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65880064 unmapped: 1171456 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 8.4 scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 8.4 scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:11.358965+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 135 sent 133 num 2 unsent 2 sending 2
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:30:40.749254+0000 osd.2 (osd.2) 134 : cluster [DBG] 8.4 scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:30:40.763599+0000 osd.2 (osd.2) 135 : cluster [DBG] 8.4 scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65880064 unmapped: 1171456 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768026 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:12.359231+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 135) v1
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:30:40.749254+0000 osd.2 (osd.2) 134 : cluster [DBG] 8.4 scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:30:40.763599+0000 osd.2 (osd.2) 135 : cluster [DBG] 8.4 scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65888256 unmapped: 1163264 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:13.359406+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65896448 unmapped: 1155072 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:14.359560+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65896448 unmapped: 1155072 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:15.359750+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65904640 unmapped: 1146880 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.002936363s of 14.024413109s, submitted: 6
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:16.360118+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 137 sent 135 num 2 unsent 2 sending 2
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:30:45.685970+0000 osd.2 (osd.2) 136 : cluster [DBG] 8.1b scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:30:45.700065+0000 osd.2 (osd.2) 137 : cluster [DBG] 8.1b scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65904640 unmapped: 1146880 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 137) v1
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:30:45.685970+0000 osd.2 (osd.2) 136 : cluster [DBG] 8.1b scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:30:45.700065+0000 osd.2 (osd.2) 137 : cluster [DBG] 8.1b scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 769174 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:17.360375+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65912832 unmapped: 1138688 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:18.360550+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65912832 unmapped: 1138688 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:19.360683+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65912832 unmapped: 1138688 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:20.360782+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65921024 unmapped: 1130496 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:21.360943+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65921024 unmapped: 1130496 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 11.1a scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 11.1a scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 770323 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:22.361056+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 139 sent 137 num 2 unsent 2 sending 2
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:30:51.627977+0000 osd.2 (osd.2) 138 : cluster [DBG] 11.1a scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:30:51.642068+0000 osd.2 (osd.2) 139 : cluster [DBG] 11.1a scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65937408 unmapped: 1114112 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 139) v1
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:30:51.627977+0000 osd.2 (osd.2) 138 : cluster [DBG] 11.1a scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:30:51.642068+0000 osd.2 (osd.2) 139 : cluster [DBG] 11.1a scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:23.361229+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65937408 unmapped: 1114112 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:24.361438+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65937408 unmapped: 1114112 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:25.361613+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 141 sent 139 num 2 unsent 2 sending 2
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:30:54.568704+0000 osd.2 (osd.2) 140 : cluster [DBG] 3.11 scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:30:54.582800+0000 osd.2 (osd.2) 141 : cluster [DBG] 3.11 scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65945600 unmapped: 1105920 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 141) v1
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:30:54.568704+0000 osd.2 (osd.2) 140 : cluster [DBG] 3.11 scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:30:54.582800+0000 osd.2 (osd.2) 141 : cluster [DBG] 3.11 scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 11.1c scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 11.1c scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:26.361811+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 143 sent 141 num 2 unsent 2 sending 2
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:30:55.596724+0000 osd.2 (osd.2) 142 : cluster [DBG] 11.1c scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:30:55.610844+0000 osd.2 (osd.2) 143 : cluster [DBG] 11.1c scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65961984 unmapped: 1089536 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 143) v1
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:30:55.596724+0000 osd.2 (osd.2) 142 : cluster [DBG] 11.1c scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:30:55.610844+0000 osd.2 (osd.2) 143 : cluster [DBG] 11.1c scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 772620 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:27.362090+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65970176 unmapped: 1081344 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:28.362234+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65986560 unmapped: 1064960 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 11.1b scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.886725426s of 12.919322014s, submitted: 8
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 11.1b scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:29.362398+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 145 sent 143 num 2 unsent 2 sending 2
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:30:58.605284+0000 osd.2 (osd.2) 144 : cluster [DBG] 11.1b scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:30:58.619378+0000 osd.2 (osd.2) 145 : cluster [DBG] 11.1b scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65986560 unmapped: 1064960 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 145) v1
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:30:58.605284+0000 osd.2 (osd.2) 144 : cluster [DBG] 11.1b scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:30:58.619378+0000 osd.2 (osd.2) 145 : cluster [DBG] 11.1b scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:30.362561+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65994752 unmapped: 1056768 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 11.18 deep-scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 11.18 deep-scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:31.362766+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 147 sent 145 num 2 unsent 2 sending 2
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:31:00.531863+0000 osd.2 (osd.2) 146 : cluster [DBG] 11.18 deep-scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:31:00.546030+0000 osd.2 (osd.2) 147 : cluster [DBG] 11.18 deep-scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 65994752 unmapped: 1056768 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 147) v1
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:31:00.531863+0000 osd.2 (osd.2) 146 : cluster [DBG] 11.18 deep-scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:31:00.546030+0000 osd.2 (osd.2) 147 : cluster [DBG] 11.18 deep-scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 776067 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:32.362914+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 149 sent 147 num 2 unsent 2 sending 2
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:31:01.482562+0000 osd.2 (osd.2) 148 : cluster [DBG] 11.1e scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:31:01.496678+0000 osd.2 (osd.2) 149 : cluster [DBG] 11.1e scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66019328 unmapped: 1032192 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 149) v1
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:31:01.482562+0000 osd.2 (osd.2) 148 : cluster [DBG] 11.1e scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:31:01.496678+0000 osd.2 (osd.2) 149 : cluster [DBG] 11.1e scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:33.363072+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66019328 unmapped: 1032192 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:34.363194+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 1024000 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:35.363411+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66035712 unmapped: 1015808 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:36.363571+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66035712 unmapped: 1015808 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 776067 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:37.363746+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66043904 unmapped: 1007616 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:38.363912+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66043904 unmapped: 1007616 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:39.364072+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66052096 unmapped: 999424 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:40.364225+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66052096 unmapped: 999424 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.050096512s of 12.069946289s, submitted: 6
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:41.364387+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 151 sent 149 num 2 unsent 2 sending 2
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:31:10.675349+0000 osd.2 (osd.2) 150 : cluster [DBG] 7.11 scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:31:10.689393+0000 osd.2 (osd.2) 151 : cluster [DBG] 7.11 scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66052096 unmapped: 999424 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 151) v1
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:31:10.675349+0000 osd.2 (osd.2) 150 : cluster [DBG] 7.11 scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:31:10.689393+0000 osd.2 (osd.2) 151 : cluster [DBG] 7.11 scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 777215 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:42.364595+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66060288 unmapped: 991232 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:43.364731+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66060288 unmapped: 991232 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 3.16 deep-scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 3.16 deep-scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:44.365261+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 153 sent 151 num 2 unsent 2 sending 2
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:31:13.702429+0000 osd.2 (osd.2) 152 : cluster [DBG] 3.16 deep-scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:31:13.716500+0000 osd.2 (osd.2) 153 : cluster [DBG] 3.16 deep-scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66068480 unmapped: 983040 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 153) v1
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:31:13.702429+0000 osd.2 (osd.2) 152 : cluster [DBG] 3.16 deep-scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:31:13.716500+0000 osd.2 (osd.2) 153 : cluster [DBG] 3.16 deep-scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:45.366053+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66076672 unmapped: 974848 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 8.1c scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 8.1c scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:46.366236+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 155 sent 153 num 2 unsent 2 sending 2
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:31:15.766211+0000 osd.2 (osd.2) 154 : cluster [DBG] 8.1c scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:31:15.780355+0000 osd.2 (osd.2) 155 : cluster [DBG] 8.1c scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66076672 unmapped: 974848 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 155) v1
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:31:15.766211+0000 osd.2 (osd.2) 154 : cluster [DBG] 8.1c scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:31:15.780355+0000 osd.2 (osd.2) 155 : cluster [DBG] 8.1c scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 779511 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:47.366497+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66084864 unmapped: 966656 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:48.366778+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 157 sent 155 num 2 unsent 2 sending 2
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:31:17.716397+0000 osd.2 (osd.2) 156 : cluster [DBG] 11.12 scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:31:17.730507+0000 osd.2 (osd.2) 157 : cluster [DBG] 11.12 scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66109440 unmapped: 942080 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 157) v1
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:31:17.716397+0000 osd.2 (osd.2) 156 : cluster [DBG] 11.12 scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:31:17.730507+0000 osd.2 (osd.2) 157 : cluster [DBG] 11.12 scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:49.367077+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66117632 unmapped: 933888 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:50.367410+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66125824 unmapped: 925696 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:51.367595+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66125824 unmapped: 925696 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 780660 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:52.367755+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66134016 unmapped: 917504 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:53.367897+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66134016 unmapped: 917504 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.865617752s of 12.977084160s, submitted: 8
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:54.368058+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 159 sent 157 num 2 unsent 2 sending 2
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:31:23.652423+0000 osd.2 (osd.2) 158 : cluster [DBG] 7.15 scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:31:23.666539+0000 osd.2 (osd.2) 159 : cluster [DBG] 7.15 scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66134016 unmapped: 917504 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 159) v1
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:31:23.652423+0000 osd.2 (osd.2) 158 : cluster [DBG] 7.15 scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:31:23.666539+0000 osd.2 (osd.2) 159 : cluster [DBG] 7.15 scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:55.368251+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66142208 unmapped: 909312 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 11.1f scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 11.1f scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:56.368422+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 161 sent 159 num 2 unsent 2 sending 2
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:31:25.696892+0000 osd.2 (osd.2) 160 : cluster [DBG] 11.1f scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:31:25.710973+0000 osd.2 (osd.2) 161 : cluster [DBG] 11.1f scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66142208 unmapped: 909312 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 782957 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 161) v1
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:31:25.696892+0000 osd.2 (osd.2) 160 : cluster [DBG] 11.1f scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:31:25.710973+0000 osd.2 (osd.2) 161 : cluster [DBG] 11.1f scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:57.368805+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66150400 unmapped: 901120 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 6.8 deep-scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 6.8 deep-scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:58.369050+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 163 sent 161 num 2 unsent 2 sending 2
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:31:27.668736+0000 osd.2 (osd.2) 162 : cluster [DBG] 6.8 deep-scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:31:27.682807+0000 osd.2 (osd.2) 163 : cluster [DBG] 6.8 deep-scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66158592 unmapped: 892928 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 163) v1
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:31:27.668736+0000 osd.2 (osd.2) 162 : cluster [DBG] 6.8 deep-scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:31:27.682807+0000 osd.2 (osd.2) 163 : cluster [DBG] 6.8 deep-scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:59.369240+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66158592 unmapped: 892928 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:00.369444+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66166784 unmapped: 884736 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:01.369588+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66166784 unmapped: 884736 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 9.e deep-scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 9.e deep-scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 785251 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:02.369740+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 165 sent 163 num 2 unsent 2 sending 2
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:31:31.693506+0000 osd.2 (osd.2) 164 : cluster [DBG] 9.e deep-scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:31:31.728965+0000 osd.2 (osd.2) 165 : cluster [DBG] 9.e deep-scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66191360 unmapped: 860160 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 165) v1
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:31:31.693506+0000 osd.2 (osd.2) 164 : cluster [DBG] 9.e deep-scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:31:31.728965+0000 osd.2 (osd.2) 165 : cluster [DBG] 9.e deep-scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:03.369926+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66191360 unmapped: 860160 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.987515450s of 10.023759842s, submitted: 8
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:04.370036+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 167 sent 165 num 2 unsent 2 sending 2
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:31:33.676091+0000 osd.2 (osd.2) 166 : cluster [DBG] 9.6 scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:31:33.711451+0000 osd.2 (osd.2) 167 : cluster [DBG] 9.6 scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 167) v1
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:31:33.676091+0000 osd.2 (osd.2) 166 : cluster [DBG] 9.6 scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:31:33.711451+0000 osd.2 (osd.2) 167 : cluster [DBG] 9.6 scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66199552 unmapped: 851968 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:05.370262+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66199552 unmapped: 851968 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:06.370404+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66199552 unmapped: 851968 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 786398 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:07.370563+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66207744 unmapped: 843776 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 9.17 scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 9.17 scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:08.370708+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 169 sent 167 num 2 unsent 2 sending 2
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:31:37.689925+0000 osd.2 (osd.2) 168 : cluster [DBG] 9.17 scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:31:37.718182+0000 osd.2 (osd.2) 169 : cluster [DBG] 9.17 scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 169) v1
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:31:37.689925+0000 osd.2 (osd.2) 168 : cluster [DBG] 9.17 scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:31:37.718182+0000 osd.2 (osd.2) 169 : cluster [DBG] 9.17 scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66224128 unmapped: 827392 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:09.370922+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66224128 unmapped: 827392 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 9.f scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 9.f scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:10.371256+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 171 sent 169 num 2 unsent 2 sending 2
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:31:39.739060+0000 osd.2 (osd.2) 170 : cluster [DBG] 9.f scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:31:39.777826+0000 osd.2 (osd.2) 171 : cluster [DBG] 9.f scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 171) v1
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:31:39.739060+0000 osd.2 (osd.2) 170 : cluster [DBG] 9.f scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:31:39.777826+0000 osd.2 (osd.2) 171 : cluster [DBG] 9.f scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66232320 unmapped: 819200 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:11.371445+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66232320 unmapped: 819200 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 9.7 deep-scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 9.7 deep-scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 789840 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:12.371566+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 173 sent 171 num 2 unsent 2 sending 2
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:31:41.773942+0000 osd.2 (osd.2) 172 : cluster [DBG] 9.7 deep-scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:31:41.809549+0000 osd.2 (osd.2) 173 : cluster [DBG] 9.7 deep-scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 173) v1
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:31:41.773942+0000 osd.2 (osd.2) 172 : cluster [DBG] 9.7 deep-scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:31:41.809549+0000 osd.2 (osd.2) 173 : cluster [DBG] 9.7 deep-scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66240512 unmapped: 811008 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:13.371824+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66240512 unmapped: 811008 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:14.372039+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66248704 unmapped: 802816 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 9.18 scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.110040665s of 11.147427559s, submitted: 8
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 9.18 scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:15.372214+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 175 sent 173 num 2 unsent 2 sending 2
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:31:44.823587+0000 osd.2 (osd.2) 174 : cluster [DBG] 9.18 scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:31:44.855387+0000 osd.2 (osd.2) 175 : cluster [DBG] 9.18 scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66265088 unmapped: 786432 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 9.8 scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 9.8 scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 175) v1
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:31:44.823587+0000 osd.2 (osd.2) 174 : cluster [DBG] 9.18 scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:31:44.855387+0000 osd.2 (osd.2) 175 : cluster [DBG] 9.18 scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:16.372373+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 177 sent 175 num 2 unsent 2 sending 2
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:31:45.865396+0000 osd.2 (osd.2) 176 : cluster [DBG] 9.8 scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:31:45.911367+0000 osd.2 (osd.2) 177 : cluster [DBG] 9.8 scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66273280 unmapped: 778240 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 177) v1
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:31:45.865396+0000 osd.2 (osd.2) 176 : cluster [DBG] 9.8 scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:31:45.911367+0000 osd.2 (osd.2) 177 : cluster [DBG] 9.8 scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 792135 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:17.372517+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66273280 unmapped: 778240 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 9.c scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 9.c scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:18.372673+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 179 sent 177 num 2 unsent 2 sending 2
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:31:47.918770+0000 osd.2 (osd.2) 178 : cluster [DBG] 9.c scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:31:47.950555+0000 osd.2 (osd.2) 179 : cluster [DBG] 9.c scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66281472 unmapped: 770048 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 179) v1
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:31:47.918770+0000 osd.2 (osd.2) 178 : cluster [DBG] 9.c scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:31:47.950555+0000 osd.2 (osd.2) 179 : cluster [DBG] 9.c scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:19.373005+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66289664 unmapped: 761856 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 6.f scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 6.f scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:20.373376+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 181 sent 179 num 2 unsent 2 sending 2
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:31:49.946942+0000 osd.2 (osd.2) 180 : cluster [DBG] 6.f scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:31:49.971567+0000 osd.2 (osd.2) 181 : cluster [DBG] 6.f scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66297856 unmapped: 753664 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 181) v1
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:31:49.946942+0000 osd.2 (osd.2) 180 : cluster [DBG] 6.f scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:31:49.971567+0000 osd.2 (osd.2) 181 : cluster [DBG] 6.f scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:21.373512+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66306048 unmapped: 745472 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 794429 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:22.373695+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66306048 unmapped: 745472 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:23.373911+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66314240 unmapped: 737280 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:24.374024+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66314240 unmapped: 737280 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:25.374202+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66314240 unmapped: 737280 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 9.13 scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.128871918s of 11.158221245s, submitted: 8
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 9.13 scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:26.374371+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 183 sent 181 num 2 unsent 2 sending 2
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:31:55.981931+0000 osd.2 (osd.2) 182 : cluster [DBG] 9.13 scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:31:56.013747+0000 osd.2 (osd.2) 183 : cluster [DBG] 9.13 scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66322432 unmapped: 729088 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 183) v1
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:31:55.981931+0000 osd.2 (osd.2) 182 : cluster [DBG] 9.13 scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:31:56.013747+0000 osd.2 (osd.2) 183 : cluster [DBG] 9.13 scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:27.374561+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 795577 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66330624 unmapped: 720896 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:28.374799+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66330624 unmapped: 720896 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 9.19 scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_channel(cluster) log [DBG] : 9.19 scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:29.374913+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  log_queue is 2 last_log 185 sent 183 num 2 unsent 2 sending 2
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:31:59.004520+0000 osd.2 (osd.2) 184 : cluster [DBG] 9.19 scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  will send 2025-10-11T04:31:59.057445+0000 osd.2 (osd.2) 185 : cluster [DBG] 9.19 scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66338816 unmapped: 712704 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client handle_log_ack log(last 185) v1
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:31:59.004520+0000 osd.2 (osd.2) 184 : cluster [DBG] 9.19 scrub starts
Oct 11 05:00:49 compute-0 ceph-osd[89565]: log_client  logged 2025-10-11T04:31:59.057445+0000 osd.2 (osd.2) 185 : cluster [DBG] 9.19 scrub ok
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:30.375058+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66338816 unmapped: 712704 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:31.375178+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66347008 unmapped: 704512 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:32.375311+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66347008 unmapped: 704512 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:33.375424+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66347008 unmapped: 704512 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:34.375600+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66355200 unmapped: 696320 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:35.375861+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66355200 unmapped: 696320 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:36.376004+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66355200 unmapped: 696320 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:37.376186+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66363392 unmapped: 688128 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:38.376406+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66363392 unmapped: 688128 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:39.376615+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 671744 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:40.376781+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 671744 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:41.376924+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 663552 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:42.377075+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66396160 unmapped: 655360 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:43.377315+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66396160 unmapped: 655360 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:44.377538+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66404352 unmapped: 647168 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:45.377779+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66404352 unmapped: 647168 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:46.377934+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66404352 unmapped: 647168 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:47.378115+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66404352 unmapped: 647168 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:48.378298+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66412544 unmapped: 638976 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:49.378601+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66412544 unmapped: 638976 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:50.378892+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66420736 unmapped: 630784 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:51.379124+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66420736 unmapped: 630784 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:52.379302+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66428928 unmapped: 622592 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:53.379469+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66428928 unmapped: 622592 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:54.379622+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66428928 unmapped: 622592 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:55.379821+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66437120 unmapped: 614400 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:56.379946+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66437120 unmapped: 614400 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:57.380165+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66445312 unmapped: 606208 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:58.380424+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66445312 unmapped: 606208 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:59.380583+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66445312 unmapped: 606208 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:00.380796+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66453504 unmapped: 598016 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:01.380983+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66453504 unmapped: 598016 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:02.381157+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66461696 unmapped: 589824 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:03.381310+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66461696 unmapped: 589824 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:04.381460+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66461696 unmapped: 589824 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:05.381755+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66469888 unmapped: 581632 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:06.381882+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66469888 unmapped: 581632 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:07.382034+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66469888 unmapped: 581632 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:08.382182+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66486272 unmapped: 565248 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:09.382309+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66486272 unmapped: 565248 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:10.382433+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66494464 unmapped: 557056 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:11.382578+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66494464 unmapped: 557056 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:12.382765+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66502656 unmapped: 548864 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:13.383028+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66502656 unmapped: 548864 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:14.383129+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66502656 unmapped: 548864 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:15.383311+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66502656 unmapped: 548864 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:16.383460+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66510848 unmapped: 540672 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:17.383628+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66510848 unmapped: 540672 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:18.383747+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66510848 unmapped: 540672 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:19.383895+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66519040 unmapped: 532480 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:20.384033+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66519040 unmapped: 532480 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:21.384216+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66527232 unmapped: 524288 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:22.384447+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66527232 unmapped: 524288 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:23.384621+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66527232 unmapped: 524288 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:24.384756+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66535424 unmapped: 516096 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:25.384955+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66535424 unmapped: 516096 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:26.385127+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66535424 unmapped: 516096 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:27.385305+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66535424 unmapped: 516096 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:28.385488+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66535424 unmapped: 516096 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:29.385651+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66543616 unmapped: 507904 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:30.385804+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66543616 unmapped: 507904 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:31.386011+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66551808 unmapped: 499712 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:32.386203+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66551808 unmapped: 499712 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:33.386431+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66551808 unmapped: 499712 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:34.386824+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66560000 unmapped: 491520 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:35.387053+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66560000 unmapped: 491520 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:36.387231+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66568192 unmapped: 483328 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:37.387434+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66568192 unmapped: 483328 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:38.387677+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66568192 unmapped: 483328 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:39.387834+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66576384 unmapped: 475136 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:40.388092+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66576384 unmapped: 475136 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:41.388284+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66576384 unmapped: 475136 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:42.388419+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66584576 unmapped: 466944 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:43.388598+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66584576 unmapped: 466944 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:44.388771+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66592768 unmapped: 458752 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:45.388962+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66592768 unmapped: 458752 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:46.389085+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66592768 unmapped: 458752 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:47.389267+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66600960 unmapped: 450560 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:48.389459+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66600960 unmapped: 450560 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:49.389604+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66600960 unmapped: 450560 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:50.389783+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66609152 unmapped: 442368 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:51.389945+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66609152 unmapped: 442368 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:52.390123+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66609152 unmapped: 442368 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:53.390266+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66617344 unmapped: 434176 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:54.390393+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66617344 unmapped: 434176 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:55.390609+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66625536 unmapped: 425984 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:56.390809+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66625536 unmapped: 425984 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:57.390967+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66633728 unmapped: 417792 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:58.391087+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66633728 unmapped: 417792 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:59.391221+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66633728 unmapped: 417792 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:00.391390+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66641920 unmapped: 409600 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:01.391540+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66641920 unmapped: 409600 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:02.391830+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66641920 unmapped: 409600 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:03.392018+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66650112 unmapped: 401408 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:04.392207+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66650112 unmapped: 401408 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:05.392398+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66658304 unmapped: 393216 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:06.392626+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66658304 unmapped: 393216 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:07.392807+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66658304 unmapped: 393216 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:08.392965+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66666496 unmapped: 385024 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:09.393164+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66666496 unmapped: 385024 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:10.393378+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66674688 unmapped: 376832 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:11.393519+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66674688 unmapped: 376832 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:12.393640+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66674688 unmapped: 376832 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:13.393807+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 368640 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:14.394039+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 368640 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:15.394249+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66691072 unmapped: 360448 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:16.394422+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66691072 unmapped: 360448 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:17.394560+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66691072 unmapped: 360448 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:18.394677+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66699264 unmapped: 352256 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:19.394817+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66699264 unmapped: 352256 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:20.394945+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66699264 unmapped: 352256 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:21.395078+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66707456 unmapped: 344064 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:22.395240+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66707456 unmapped: 344064 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:23.395428+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66715648 unmapped: 335872 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:24.395592+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66715648 unmapped: 335872 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:25.395790+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66715648 unmapped: 335872 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:26.395955+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66723840 unmapped: 327680 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:27.396111+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66723840 unmapped: 327680 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:28.396234+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66723840 unmapped: 327680 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:29.396404+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66732032 unmapped: 319488 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:30.396525+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66732032 unmapped: 319488 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:31.396656+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66732032 unmapped: 319488 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:32.396843+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66732032 unmapped: 319488 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:33.397023+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66740224 unmapped: 311296 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:34.397279+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66740224 unmapped: 311296 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:35.397517+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66748416 unmapped: 303104 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:36.397658+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66748416 unmapped: 303104 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:37.397799+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66756608 unmapped: 294912 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:38.397983+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66756608 unmapped: 294912 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:39.398116+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66756608 unmapped: 294912 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:40.398245+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66764800 unmapped: 286720 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:41.398399+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66764800 unmapped: 286720 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:42.398582+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66764800 unmapped: 286720 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:43.398781+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66772992 unmapped: 278528 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:44.398906+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66772992 unmapped: 278528 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:45.399118+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66781184 unmapped: 270336 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:46.399274+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66781184 unmapped: 270336 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:47.399409+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66789376 unmapped: 262144 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:48.399551+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66797568 unmapped: 253952 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:49.399674+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66797568 unmapped: 253952 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:50.399859+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66797568 unmapped: 253952 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:51.400008+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66805760 unmapped: 245760 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:52.400151+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66805760 unmapped: 245760 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:53.400388+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66813952 unmapped: 237568 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:54.400530+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66813952 unmapped: 237568 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:55.400758+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66813952 unmapped: 237568 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:56.401681+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66822144 unmapped: 229376 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:57.401880+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66838528 unmapped: 212992 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:58.402117+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66838528 unmapped: 212992 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:59.402308+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66846720 unmapped: 204800 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:00.402543+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66846720 unmapped: 204800 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:01.402745+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66854912 unmapped: 196608 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:02.402898+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66854912 unmapped: 196608 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:03.403066+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66854912 unmapped: 196608 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:04.403272+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66863104 unmapped: 188416 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:05.403517+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66863104 unmapped: 188416 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:06.403691+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66863104 unmapped: 188416 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:07.403950+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66871296 unmapped: 180224 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:08.404135+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66871296 unmapped: 180224 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:09.404273+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66879488 unmapped: 172032 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:10.405465+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66879488 unmapped: 172032 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:11.405625+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66887680 unmapped: 163840 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:12.405789+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66887680 unmapped: 163840 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:13.405991+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66887680 unmapped: 163840 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:14.406131+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66895872 unmapped: 155648 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:15.406324+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66895872 unmapped: 155648 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:16.406529+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66895872 unmapped: 155648 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:17.406706+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66904064 unmapped: 147456 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:18.406856+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66904064 unmapped: 147456 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:19.407055+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66912256 unmapped: 139264 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:20.407320+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66912256 unmapped: 139264 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:21.407521+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66887680 unmapped: 163840 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:22.407633+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66887680 unmapped: 163840 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:23.407785+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66895872 unmapped: 155648 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:24.407931+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66895872 unmapped: 155648 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:25.408185+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66895872 unmapped: 155648 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:26.408391+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66904064 unmapped: 147456 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:27.408513+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66904064 unmapped: 147456 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:28.408635+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66912256 unmapped: 139264 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:29.408787+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66912256 unmapped: 139264 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:30.408963+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66912256 unmapped: 139264 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:31.409105+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66920448 unmapped: 131072 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:32.409262+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66920448 unmapped: 131072 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:33.409390+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66920448 unmapped: 131072 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:34.409561+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66928640 unmapped: 122880 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:35.409708+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66928640 unmapped: 122880 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:36.409958+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66936832 unmapped: 114688 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:37.410156+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66936832 unmapped: 114688 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:38.410301+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66936832 unmapped: 114688 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:39.410396+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66945024 unmapped: 106496 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:40.410542+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66945024 unmapped: 106496 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:41.410665+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66945024 unmapped: 106496 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:42.410807+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66953216 unmapped: 98304 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:43.411021+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66953216 unmapped: 98304 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:44.411164+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66961408 unmapped: 90112 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:45.411371+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66961408 unmapped: 90112 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:46.411580+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66961408 unmapped: 90112 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:47.411733+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66961408 unmapped: 90112 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:48.411873+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66961408 unmapped: 90112 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:49.411985+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66969600 unmapped: 81920 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:50.412134+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66969600 unmapped: 81920 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:51.412297+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66969600 unmapped: 81920 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:52.412481+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66977792 unmapped: 73728 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:53.412627+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66977792 unmapped: 73728 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:54.412735+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66977792 unmapped: 73728 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:55.412918+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66985984 unmapped: 65536 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:56.413070+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66985984 unmapped: 65536 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:57.413322+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66994176 unmapped: 57344 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:58.413511+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66994176 unmapped: 57344 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:59.413667+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 66994176 unmapped: 57344 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:00.413808+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67002368 unmapped: 49152 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:01.414052+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67002368 unmapped: 49152 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:02.414162+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67010560 unmapped: 40960 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:03.414284+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67018752 unmapped: 32768 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:04.414421+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67018752 unmapped: 32768 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:05.414592+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67026944 unmapped: 24576 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:06.414732+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67026944 unmapped: 24576 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:07.414921+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67026944 unmapped: 24576 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:08.415106+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:09.415253+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67035136 unmapped: 16384 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:10.415455+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67035136 unmapped: 16384 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:11.415597+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67035136 unmapped: 16384 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:12.415741+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67043328 unmapped: 8192 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:13.415886+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67035136 unmapped: 16384 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:14.416028+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67035136 unmapped: 16384 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:15.416296+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67043328 unmapped: 8192 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:16.416508+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67043328 unmapped: 8192 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:17.416624+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67051520 unmapped: 0 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:18.416740+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67051520 unmapped: 0 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:19.416901+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67051520 unmapped: 0 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:20.417012+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67059712 unmapped: 1040384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:21.417162+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67059712 unmapped: 1040384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:22.417313+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67067904 unmapped: 1032192 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:23.417518+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67067904 unmapped: 1032192 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:24.417675+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67067904 unmapped: 1032192 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:25.417891+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67076096 unmapped: 1024000 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:26.418054+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67084288 unmapped: 1015808 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:27.418190+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67084288 unmapped: 1015808 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:28.418418+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67084288 unmapped: 1015808 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:29.418582+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67084288 unmapped: 1015808 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:30.418714+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67092480 unmapped: 1007616 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:31.418834+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67092480 unmapped: 1007616 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:32.418982+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67100672 unmapped: 999424 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:33.419189+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67100672 unmapped: 999424 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:34.419372+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67100672 unmapped: 999424 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:35.419514+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67108864 unmapped: 991232 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:36.419631+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67108864 unmapped: 991232 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:37.419759+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67108864 unmapped: 991232 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:38.419897+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67117056 unmapped: 983040 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:39.420064+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67117056 unmapped: 983040 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:40.420229+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67125248 unmapped: 974848 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:41.420394+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67125248 unmapped: 974848 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:42.420567+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67125248 unmapped: 974848 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:43.420726+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 966656 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:44.420872+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 966656 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:45.421083+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 958464 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:46.422699+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 958464 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:47.422830+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 958464 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:48.422938+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 950272 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:49.423086+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 950272 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:50.423211+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 950272 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:51.423369+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67158016 unmapped: 942080 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:52.423488+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67166208 unmapped: 933888 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:53.423637+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67182592 unmapped: 917504 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:54.423862+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67182592 unmapped: 917504 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:55.423998+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67182592 unmapped: 917504 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:56.424148+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 909312 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:57.424313+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 901120 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:58.424577+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 892928 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:59.424745+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 892928 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:00.424894+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 892928 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:01.425031+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 884736 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:02.425198+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 884736 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:03.425368+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 884736 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:04.425527+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67223552 unmapped: 876544 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:05.425731+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67223552 unmapped: 876544 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:06.425885+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67223552 unmapped: 876544 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:07.426078+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67239936 unmapped: 860160 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:08.426223+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67239936 unmapped: 860160 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:09.426388+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67239936 unmapped: 860160 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:10.426535+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67248128 unmapped: 851968 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:11.426687+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67248128 unmapped: 851968 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:12.426800+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67248128 unmapped: 851968 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:13.426942+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67264512 unmapped: 835584 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:14.427121+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67264512 unmapped: 835584 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:15.427380+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67272704 unmapped: 827392 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:16.427502+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67272704 unmapped: 827392 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:17.427685+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 819200 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:18.427844+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 819200 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:19.427968+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 819200 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:20.428142+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 811008 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:21.428385+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 811008 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:22.428540+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 811008 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:23.428709+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 802816 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:24.428859+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 802816 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:25.429053+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67305472 unmapped: 794624 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:26.429201+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67313664 unmapped: 786432 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:27.429347+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67313664 unmapped: 786432 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:28.429468+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67313664 unmapped: 786432 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:29.429582+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67313664 unmapped: 786432 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:30.429692+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67321856 unmapped: 778240 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 5376 writes, 23K keys, 5376 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 5376 writes, 765 syncs, 7.03 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5376 writes, 23K keys, 5376 commit groups, 1.0 writes per commit group, ingest: 18.24 MB, 0.03 MB/s
                                           Interval WAL: 5376 writes, 765 syncs, 7.03 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56328d19f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56328d19f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56328d19f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56328d19f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56328d19f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56328d19f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56328d19f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56328d19f090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56328d19f090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56328d19f090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.009       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.009       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.009       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56328d19f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56328d19f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:31.429848+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67379200 unmapped: 720896 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:32.430024+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67387392 unmapped: 712704 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:33.430201+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67387392 unmapped: 712704 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:34.430456+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67387392 unmapped: 712704 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:35.430724+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67387392 unmapped: 712704 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:36.430916+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67403776 unmapped: 696320 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:37.431046+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67403776 unmapped: 696320 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:38.431185+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67403776 unmapped: 696320 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:39.431382+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67411968 unmapped: 688128 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:40.431534+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67411968 unmapped: 688128 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:41.431815+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67420160 unmapped: 679936 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:42.432053+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67420160 unmapped: 679936 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:43.432288+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67420160 unmapped: 679936 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:44.432570+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67428352 unmapped: 671744 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:45.432926+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67428352 unmapped: 671744 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:46.433150+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67436544 unmapped: 663552 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:47.433426+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67436544 unmapped: 663552 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:48.433622+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67436544 unmapped: 663552 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:49.433848+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67444736 unmapped: 655360 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:50.434071+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67444736 unmapped: 655360 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:51.434453+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67452928 unmapped: 647168 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:52.434709+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67452928 unmapped: 647168 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:53.434961+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67461120 unmapped: 638976 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:54.435200+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67461120 unmapped: 638976 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:55.435416+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67461120 unmapped: 638976 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:56.435663+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67461120 unmapped: 638976 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:57.435923+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67469312 unmapped: 630784 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:58.436219+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67469312 unmapped: 630784 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:59.436501+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67469312 unmapped: 630784 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:00.436791+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67477504 unmapped: 622592 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:01.437074+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67469312 unmapped: 630784 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:02.437260+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67477504 unmapped: 622592 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:03.438713+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67477504 unmapped: 622592 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:04.438952+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67477504 unmapped: 622592 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:05.439267+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67485696 unmapped: 614400 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:06.439524+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 606208 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:07.439712+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 606208 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:08.439961+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 598016 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:09.440138+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 598016 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:10.440311+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67510272 unmapped: 589824 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:11.440493+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67510272 unmapped: 589824 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:12.440656+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67510272 unmapped: 589824 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:13.440793+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67518464 unmapped: 581632 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:14.441055+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67518464 unmapped: 581632 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:15.441222+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 573440 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:16.441412+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 573440 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:17.441727+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 573440 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:18.441875+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 565248 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:19.442107+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 565248 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:20.442255+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 565248 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:21.442407+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 557056 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:22.442609+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 557056 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:23.442784+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67559424 unmapped: 540672 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:24.442905+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67559424 unmapped: 540672 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:25.443121+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67559424 unmapped: 540672 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 360.115478516s of 360.131134033s, submitted: 4
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:26.443250+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67747840 unmapped: 352256 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:27.443406+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67846144 unmapped: 253952 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:28.443574+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67846144 unmapped: 253952 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:29.443724+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67846144 unmapped: 253952 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:30.443891+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67846144 unmapped: 253952 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:31.444045+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67846144 unmapped: 253952 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:32.444222+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67846144 unmapped: 253952 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:33.444381+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67846144 unmapped: 253952 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:34.444529+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67846144 unmapped: 253952 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:35.444736+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67846144 unmapped: 253952 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:36.444926+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67846144 unmapped: 253952 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:37.445161+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67854336 unmapped: 245760 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:38.445383+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67854336 unmapped: 245760 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:39.445514+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67854336 unmapped: 245760 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:40.445649+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67862528 unmapped: 237568 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:41.445834+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67862528 unmapped: 237568 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:42.445990+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67870720 unmapped: 229376 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:43.446159+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67870720 unmapped: 229376 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:44.446318+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67878912 unmapped: 221184 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:45.446528+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67878912 unmapped: 221184 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:46.446661+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67878912 unmapped: 221184 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:47.446784+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67887104 unmapped: 212992 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:48.446885+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67878912 unmapped: 221184 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:49.446991+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67878912 unmapped: 221184 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:50.447104+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67887104 unmapped: 212992 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:51.447212+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67887104 unmapped: 212992 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:52.447349+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67895296 unmapped: 204800 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:53.447489+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67895296 unmapped: 204800 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:54.447670+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67895296 unmapped: 204800 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:55.447903+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67903488 unmapped: 196608 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:56.448118+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67903488 unmapped: 196608 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:57.448301+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67903488 unmapped: 196608 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:58.448421+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67919872 unmapped: 180224 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:59.448548+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67919872 unmapped: 180224 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:00.448704+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67928064 unmapped: 172032 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:01.448873+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67928064 unmapped: 172032 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:02.449036+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 163840 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:03.449223+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 163840 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:04.449371+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67944448 unmapped: 155648 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:05.449521+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67944448 unmapped: 155648 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:06.449650+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67944448 unmapped: 155648 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:07.449816+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 147456 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:08.449981+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 147456 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:09.450186+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67960832 unmapped: 139264 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:10.450380+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67960832 unmapped: 139264 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:11.450556+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67960832 unmapped: 139264 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:12.450730+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67969024 unmapped: 131072 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:13.450898+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67969024 unmapped: 131072 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:14.451057+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67977216 unmapped: 122880 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:15.451260+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67977216 unmapped: 122880 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:16.451411+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67985408 unmapped: 114688 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:17.451543+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67985408 unmapped: 114688 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:18.451686+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67985408 unmapped: 114688 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:19.451816+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67993600 unmapped: 106496 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:20.451953+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 67993600 unmapped: 106496 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:21.452106+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68009984 unmapped: 90112 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:22.452272+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68009984 unmapped: 90112 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:23.452419+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:24.452654+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:25.452885+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68026368 unmapped: 73728 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:26.453052+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 65536 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:27.453168+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 65536 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:28.453282+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:29.453424+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:30.453574+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:31.453709+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:32.453820+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:33.453939+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:34.454054+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:35.454213+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:36.454367+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:37.454558+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:38.454685+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:39.454843+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:40.455013+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:41.455196+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:42.455411+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:43.455563+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:44.455728+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:45.455928+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:46.456097+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:47.456292+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:48.456495+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:49.456669+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:50.456872+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:51.457099+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:52.457268+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:53.457412+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:54.457632+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:55.457848+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:56.457988+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:57.458157+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:58.458410+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:59.458621+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:00.458774+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:01.458933+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:02.459096+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:03.459239+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:04.459400+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:05.459561+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:06.460152+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:07.460289+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:08.460461+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:09.460667+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68059136 unmapped: 40960 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:10.460853+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68059136 unmapped: 40960 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:11.461016+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 32768 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:12.461145+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 32768 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:13.461354+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 32768 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:14.461597+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 32768 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:15.461745+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 32768 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:16.461881+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 32768 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:17.462399+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 32768 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:18.462613+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 32768 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:19.462734+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 32768 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:20.462899+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 32768 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:21.463035+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 32768 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:22.463170+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 32768 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:23.463351+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 32768 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:24.463461+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 32768 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:25.463635+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 16384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:26.463769+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 16384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:27.464056+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 16384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:28.464494+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 16384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:29.464795+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 16384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:30.464987+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 16384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:31.465162+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 16384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:32.465374+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 16384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:33.465526+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 16384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:34.465852+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:35.466045+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 16384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:36.466199+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 16384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:37.466466+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 16384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:38.466616+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 16384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:39.466788+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 16384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:40.466919+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 16384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:41.467072+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 16384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:42.467256+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 16384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:43.467409+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 16384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:44.467584+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 16384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:45.467759+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 16384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:46.467949+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 16384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:47.468107+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 16384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:48.468233+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 16384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:49.468539+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 16384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:50.468985+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 16384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:51.469166+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 16384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:52.469372+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 16384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:53.469520+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 16384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:54.469816+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 16384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:55.469988+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 16384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:56.470205+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68091904 unmapped: 8192 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:57.470406+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68091904 unmapped: 8192 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:58.470641+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68091904 unmapped: 8192 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:59.470854+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68091904 unmapped: 8192 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:00.471091+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68091904 unmapped: 8192 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:01.471244+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68091904 unmapped: 8192 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:02.471411+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68091904 unmapped: 8192 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:03.471560+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68091904 unmapped: 8192 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:04.471666+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68091904 unmapped: 8192 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:05.471823+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68091904 unmapped: 8192 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:06.471986+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68091904 unmapped: 8192 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:07.473399+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68091904 unmapped: 8192 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:08.473521+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68091904 unmapped: 8192 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:09.473659+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68091904 unmapped: 8192 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:10.473792+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68091904 unmapped: 8192 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:11.473952+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68091904 unmapped: 8192 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:12.474125+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 0 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:13.474303+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 0 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:14.474486+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 0 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:15.474631+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 0 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:16.474763+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 0 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:17.474969+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 0 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:18.475089+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 0 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:19.475311+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 0 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:20.475564+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 0 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:21.475692+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 0 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:22.475888+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 0 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:23.476039+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 0 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:24.476194+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 0 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:25.476369+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 0 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:26.476509+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 0 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:27.476637+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 0 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:28.476727+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 0 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:29.476850+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 0 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:30.476965+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 0 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:31.477080+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 1040384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:32.477204+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:33.477398+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:34.477521+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:35.477668+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:36.477805+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:37.477997+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:38.478122+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:39.478272+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:40.478392+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:41.478515+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:42.478685+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:43.478858+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 1040384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:44.479076+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 1040384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:45.479312+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 1040384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:46.479449+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:47.479605+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:48.479751+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:49.479916+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:50.480087+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:51.480279+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:52.480408+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:53.480570+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:54.480833+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:55.481401+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:56.485024+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:57.485267+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:58.485415+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:59.485546+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:00.485678+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:01.485838+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:02.486089+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:03.486270+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:04.486418+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:05.486608+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:06.486730+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:07.486834+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:08.486948+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:09.487182+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:10.487308+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:11.487399+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:12.487548+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:13.487689+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:14.487801+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:15.487957+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:16.488104+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:17.488392+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:18.488584+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:19.488745+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:20.488911+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:21.489096+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:22.489235+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:23.489371+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:24.489477+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:25.489630+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:26.489776+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:27.489930+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:28.490086+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:29.490242+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:30.490422+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:31.490565+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:32.490707+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:33.490837+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:34.490990+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:35.491205+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:36.491386+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:37.491517+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:38.491601+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:39.491781+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:40.491921+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:41.492035+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:42.492173+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:43.492309+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:44.492524+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:45.492715+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:46.492856+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:47.493180+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:48.493503+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:49.493651+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:50.493806+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:51.493940+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:52.494120+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:53.494286+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:54.494441+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:55.494626+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:56.494739+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:57.494862+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:58.494990+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:59.495135+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:00.495408+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:01.495783+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:02.496004+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:03.496191+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:04.496372+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:05.496579+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:06.496828+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:07.497042+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:08.497248+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:09.497472+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:10.497722+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:11.497844+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:12.497983+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:13.498164+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:14.498399+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:15.498621+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:16.498757+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:17.498864+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:18.498963+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:19.499096+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:20.499231+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:21.499357+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:22.499515+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:23.499654+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:24.499821+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:25.500030+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:26.500172+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:27.500426+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:28.500586+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:29.500775+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:30.500936+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:31.501098+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:32.501252+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:33.501413+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:34.501605+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:35.501808+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:36.501998+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:37.502144+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:38.502320+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:39.502483+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:40.502613+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:41.502695+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:42.502826+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:43.502915+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:44.503134+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:45.503304+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:46.503385+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:47.503534+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:48.503679+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:49.503806+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:50.503950+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:51.504112+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:52.504251+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:53.504425+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:54.504563+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:55.504759+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:56.504953+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:57.505089+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:58.505235+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:49 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:59.505387+0000)
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:49 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:49 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:49 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:49 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.15093 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:00.505540+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:01.505702+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:02.505817+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:03.505969+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:04.506151+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:05.506380+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:06.506527+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:07.506676+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:08.506809+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:09.506985+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:10.507126+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:11.507262+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:12.507414+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:13.507545+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:14.507659+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:15.507825+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:16.507967+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68182016 unmapped: 966656 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:17.508115+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68182016 unmapped: 966656 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:18.508263+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68182016 unmapped: 966656 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:19.508439+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68182016 unmapped: 966656 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:20.508579+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68182016 unmapped: 966656 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:21.508733+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68182016 unmapped: 966656 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:22.508874+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68182016 unmapped: 966656 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:23.509050+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68182016 unmapped: 966656 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:24.509201+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68182016 unmapped: 966656 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:25.509356+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68182016 unmapped: 966656 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:26.509470+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68182016 unmapped: 966656 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:27.509610+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68182016 unmapped: 966656 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:28.509724+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68182016 unmapped: 966656 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:29.509871+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68182016 unmapped: 966656 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:30.510038+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68182016 unmapped: 966656 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:31.510162+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68182016 unmapped: 966656 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:32.510280+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68182016 unmapped: 966656 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:33.510404+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68182016 unmapped: 966656 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:34.510553+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68182016 unmapped: 966656 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:35.510708+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68182016 unmapped: 966656 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:36.510844+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68182016 unmapped: 966656 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:37.510975+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68182016 unmapped: 966656 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:38.511188+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68182016 unmapped: 966656 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:39.511342+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68182016 unmapped: 966656 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:40.511473+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68182016 unmapped: 966656 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:41.511611+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68182016 unmapped: 966656 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:42.511774+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:43.511930+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:44.512079+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:45.512172+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:46.512253+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:47.512383+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:48.512564+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:49.512684+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:50.512843+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:51.513225+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:52.513400+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 950272 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:53.513609+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 950272 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:54.513764+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 950272 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:55.513948+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 950272 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:56.514465+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 950272 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:57.514604+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:58.514764+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:59.514922+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:00.515080+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:01.515228+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:02.515377+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:03.515511+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:04.515653+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:05.515848+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:06.516023+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:07.516168+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:08.516280+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:09.516417+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:10.516548+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:11.516670+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:12.516825+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:13.516935+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:14.517141+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:15.517439+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:16.517557+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:17.517676+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:18.517798+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:19.517915+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:20.518100+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:21.518252+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:22.518385+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:23.518506+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:24.518694+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:25.518864+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:26.519042+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:27.519168+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:28.519300+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:29.519444+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:30.519578+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:31.519728+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:32.519899+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:33.520031+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:34.520146+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:35.520296+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:36.520482+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:37.521144+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:38.521366+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:39.521493+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:40.521652+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:41.521792+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:42.521913+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:43.522034+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:44.522181+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:45.522419+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:46.522568+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:47.522702+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:48.522820+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:49.522933+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:50.523093+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:51.523248+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:52.523435+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:53.523778+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:54.523943+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:55.524177+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:56.524371+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:57.524498+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:58.524671+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:59.524794+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:00.524971+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:01.525140+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:02.525291+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:03.525386+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:04.525542+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:05.525723+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:06.525883+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:07.526038+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:08.526175+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:09.526325+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:10.526495+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:11.526657+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:12.530474+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:13.530634+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:14.530755+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:15.530926+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:16.531090+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:17.531213+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:18.531349+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:19.531458+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:20.531571+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:21.531705+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:22.531853+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:23.532410+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:24.532675+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:25.532882+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:26.533056+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:27.533777+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:28.534008+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:29.534429+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:30.534622+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:31.534837+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:32.535020+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:33.535231+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:34.535413+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:35.535697+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:36.536086+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:37.536284+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:38.536477+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:39.536600+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:40.536785+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:41.536969+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:42.537178+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:43.537468+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:44.537972+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:45.538593+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:46.538724+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:47.538906+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:48.539594+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:49.539874+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:50.540230+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:51.540745+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:52.541099+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:53.541324+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:54.541599+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:55.541912+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:56.542579+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:57.543292+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:58.543727+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:59.544179+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:00.544662+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:01.545081+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:02.545282+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:03.545675+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:04.546539+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:05.547455+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:06.547684+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:07.548028+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:08.548179+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:09.548369+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:10.548525+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:11.548646+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:12.549118+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:13.549399+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68231168 unmapped: 917504 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:14.549584+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68231168 unmapped: 917504 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:15.549843+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68231168 unmapped: 917504 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:16.550030+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68231168 unmapped: 917504 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:17.551414+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68231168 unmapped: 917504 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:18.551594+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68231168 unmapped: 917504 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:19.552395+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68231168 unmapped: 917504 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:20.552533+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68231168 unmapped: 917504 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:21.552937+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68231168 unmapped: 917504 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:22.554280+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68231168 unmapped: 917504 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:23.554482+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68231168 unmapped: 917504 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:24.555466+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68231168 unmapped: 917504 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:25.555703+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68231168 unmapped: 917504 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:26.555859+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68239360 unmapped: 909312 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:27.556625+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68239360 unmapped: 909312 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:28.556820+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68239360 unmapped: 909312 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:29.557079+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68239360 unmapped: 909312 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:30.557422+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68239360 unmapped: 909312 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 5556 writes, 23K keys, 5556 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5556 writes, 855 syncs, 6.50 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
                                           Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56328d19f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56328d19f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56328d19f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56328d19f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56328d19f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56328d19f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56328d19f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56328d19f090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56328d19f090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56328d19f090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.009       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.009       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.009       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56328d19f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56328d19f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:31.557688+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 876544 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:32.557885+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 876544 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:33.558027+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 876544 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:34.558204+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 876544 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:35.558567+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 876544 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:36.558876+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:37.559096+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:38.559414+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:39.559690+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:40.560046+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:41.560237+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:42.560416+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:43.560592+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:44.560750+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:45.560948+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:46.561098+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:47.561281+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:48.561549+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:49.561846+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:50.562033+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:51.562169+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:52.562360+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:53.562492+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:54.562622+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:55.562803+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:56.562925+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:57.563110+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:58.563296+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:59.563432+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:00.563649+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:01.563813+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:02.563986+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:03.564147+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:04.564425+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:05.564659+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:06.564888+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:07.565121+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:08.565486+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:09.565863+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:10.566141+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:11.566459+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:12.566801+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:13.566978+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:14.567226+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:15.567542+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:16.567912+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:17.568220+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:18.568544+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:19.568889+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:20.569145+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:21.569544+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:22.569682+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:23.569871+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:24.570068+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:25.570434+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 599.770324707s of 600.150451660s, submitted: 90
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:26.570624+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 679936 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:27.570786+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 1654784 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:28.570939+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 1654784 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:29.571097+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 1654784 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:30.571535+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 1654784 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:31.571823+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 1654784 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:32.572009+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 1654784 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:33.572200+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 1654784 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:34.572388+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 1654784 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:35.572621+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 1654784 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:36.572772+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 1654784 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:37.573041+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 1654784 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:38.573440+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 1654784 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:39.573708+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 1654784 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:40.573983+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 1654784 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:41.574118+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 1654784 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:42.574420+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 1654784 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:43.574561+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 1654784 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:44.574708+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 1654784 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:45.574905+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 1654784 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:46.575123+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 1646592 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:47.575249+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 1646592 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:48.575866+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 1646592 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:49.576023+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 1646592 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:50.576224+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 1646592 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:51.576596+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 1646592 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:52.577025+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 1646592 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:53.578675+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 1646592 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:54.579793+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 1646592 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:55.580996+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 1646592 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:56.581145+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 1646592 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:57.583042+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 1646592 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:58.583218+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 1646592 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:59.583443+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 1646592 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:00.584018+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 1646592 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:01.584177+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 1646592 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:02.584357+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 1646592 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:03.584508+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 1646592 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:04.584641+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 1646592 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:05.585088+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 1646592 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:06.585426+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 1646592 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:07.585743+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 1646592 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:08.585983+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 1646592 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:09.586234+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 1646592 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:10.586425+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 1646592 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:11.586604+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 1646592 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:12.586789+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 1646592 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:13.586978+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 1646592 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:14.587308+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 1646592 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:15.587834+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 1646592 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:16.588112+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 1646592 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:17.588415+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68558848 unmapped: 1638400 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:18.588668+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68558848 unmapped: 1638400 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:19.588888+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68558848 unmapped: 1638400 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:20.632587+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68558848 unmapped: 1638400 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:21.632809+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68558848 unmapped: 1638400 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:22.632993+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68558848 unmapped: 1638400 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:23.633225+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68558848 unmapped: 1638400 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:24.633424+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68558848 unmapped: 1638400 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:25.633565+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 1630208 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:26.633714+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 1630208 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:27.633848+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 1630208 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:28.634002+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:29.634179+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 1630208 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:30.634370+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 1630208 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:31.634507+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 1630208 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:32.634686+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 1630208 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:33.634883+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 1630208 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:34.635053+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 1630208 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:35.635214+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 1630208 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:36.635452+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 1630208 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:37.635656+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 1630208 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:38.635798+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 1630208 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:39.635902+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 1630208 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:40.636064+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 1630208 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:41.636205+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 1630208 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:42.636373+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 1630208 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:43.636514+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 1630208 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:44.636676+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 1630208 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:45.636928+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 1630208 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:46.637137+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 1630208 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:47.637308+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 1630208 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:48.637477+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:49.637659+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:50.637841+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:51.637989+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:52.638156+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:53.638321+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:54.638547+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:55.638710+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:56.638884+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:57.642158+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:58.643420+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:59.643723+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:00.644616+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:01.645202+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:02.645887+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:03.646386+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:04.646572+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:05.646810+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:06.647027+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:07.647161+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:08.647285+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:09.647396+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:10.647651+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:11.647813+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:12.648034+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:13.648166+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:14.648304+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:15.648594+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:16.648766+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:17.648947+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:18.649150+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:19.649450+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:20.649662+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:21.649847+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:22.650014+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:23.650226+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:24.650427+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:25.650802+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:26.650975+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:27.651223+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:28.651418+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:29.651631+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:30.651816+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:31.651965+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:32.652148+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:33.652368+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:34.652536+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:35.652719+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:36.652988+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:37.653150+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:38.653374+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:39.653549+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:40.653718+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:41.653898+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:42.654081+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:43.654308+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:44.654474+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:45.654654+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:46.654810+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:47.654967+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:48.655110+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:49.655288+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:50.655473+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:51.655635+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:52.655790+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:53.656029+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:54.656214+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:55.656418+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:56.656586+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:57.656835+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:58.657007+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:59.657253+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:00.657422+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:01.657601+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:02.657780+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:03.659459+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:04.659661+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:05.660543+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:06.660692+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:07.661659+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:08.661846+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:09.662630+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:10.662833+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:11.663090+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:12.663259+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:13.663511+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:14.663660+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:15.663911+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:16.664604+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:17.664725+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:18.664846+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:19.664994+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:20.665183+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:21.665313+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:22.665610+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:23.665744+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:24.666058+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:25.666368+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:26.666610+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:27.666776+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:28.667013+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:29.667256+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:30.667533+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:31.667723+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:32.667966+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:33.668145+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:34.668369+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:35.668622+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:36.668804+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:37.668987+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:38.669165+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:39.669388+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:40.669581+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:41.669754+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:42.669923+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:43.670047+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:44.670178+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:45.670444+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:46.670584+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:47.670756+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:48.670948+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:49.671122+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:50.671299+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:51.671440+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:52.671601+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:53.671784+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:54.671954+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:55.672106+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:56.672250+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:57.672446+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:58.672620+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:59.672841+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:00.673024+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:01.673238+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:02.673447+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:03.673666+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:04.673903+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:05.674176+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:06.674403+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:07.674597+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:08.674769+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:09.674930+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:10.675077+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:11.675248+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:12.675409+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:13.675552+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:14.675746+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:15.675975+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:16.676165+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:17.676391+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:18.676607+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:19.676823+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:20.677038+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:21.677195+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796725 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:22.677389+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68632576 unmapped: 1564672 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:23.677563+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68632576 unmapped: 1564672 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:24.677682+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xacd76/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68632576 unmapped: 1564672 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:25.678133+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 114 handle_osd_map epochs [114,115], i have 114, src has [1,115]
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 239.392028809s of 239.705917358s, submitted: 90
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:26.678281+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 800899 data_alloc: 218103808 data_used: 167936
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 115 handle_osd_map epochs [115,116], i have 115, src has [1,116]
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x56328f51e400
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 69746688 unmapped: 17235968 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:27.678384+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 116 heartbeat osd_stat(store_statfs(0x4fcac3000/0x0/0x4ffc00000, data 0xb04c4/0x159000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 116 ms_handle_reset con 0x56328f51e400 session 0x56328eb29860
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 69754880 unmapped: 17227776 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x56328e23cc00
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _renew_subs
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 116 handle_osd_map epochs [117,117], i have 116, src has [1,117]
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:28.678502+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 69910528 unmapped: 17072128 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:29.678671+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _renew_subs
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 117 handle_osd_map epochs [118,118], i have 117, src has [1,118]
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 118 ms_handle_reset con 0x56328e23cc00 session 0x56328eb29a40
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 69918720 unmapped: 17063936 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:30.678880+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fb64d000/0x0/0x4ffc00000, data 0x1523bf6/0x15cf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fb64d000/0x0/0x4ffc00000, data 0x1523bf6/0x15cf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 69935104 unmapped: 17047552 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:31.679052+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 954436 data_alloc: 218103808 data_used: 176128
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 69935104 unmapped: 17047552 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:32.679244+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 69935104 unmapped: 17047552 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:33.679433+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fb64d000/0x0/0x4ffc00000, data 0x1523bf6/0x15cf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 69935104 unmapped: 17047552 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:34.679624+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 69935104 unmapped: 17047552 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:35.680078+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 69935104 unmapped: 17047552 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:36.680250+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 954436 data_alloc: 218103808 data_used: 176128
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 118 handle_osd_map epochs [119,119], i have 118, src has [1,119]
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.992850304s of 11.152420044s, submitted: 32
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 69951488 unmapped: 17031168 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:37.680445+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 119 heartbeat osd_stat(store_statfs(0x4fb64b000/0x0/0x4ffc00000, data 0x1525659/0x15d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 69951488 unmapped: 17031168 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:38.680624+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 119 heartbeat osd_stat(store_statfs(0x4fb64b000/0x0/0x4ffc00000, data 0x1525659/0x15d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 69951488 unmapped: 17031168 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:39.680758+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 69951488 unmapped: 17031168 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:40.680948+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 69951488 unmapped: 17031168 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 119 heartbeat osd_stat(store_statfs(0x4fb64b000/0x0/0x4ffc00000, data 0x1525659/0x15d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:41.681165+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956738 data_alloc: 218103808 data_used: 176128
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 69951488 unmapped: 17031168 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:42.681275+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 69951488 unmapped: 17031168 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:43.681431+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 69951488 unmapped: 17031168 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:44.681590+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 69951488 unmapped: 17031168 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:45.681818+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 119 heartbeat osd_stat(store_statfs(0x4fb64b000/0x0/0x4ffc00000, data 0x1525659/0x15d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 69951488 unmapped: 17031168 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:46.682015+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956738 data_alloc: 218103808 data_used: 176128
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 69951488 unmapped: 17031168 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:47.682137+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 69951488 unmapped: 17031168 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:48.682305+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 69951488 unmapped: 17031168 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:49.682534+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 69951488 unmapped: 17031168 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:50.682666+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 119 heartbeat osd_stat(store_statfs(0x4fb64b000/0x0/0x4ffc00000, data 0x1525659/0x15d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 69951488 unmapped: 17031168 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 119 heartbeat osd_stat(store_statfs(0x4fb64b000/0x0/0x4ffc00000, data 0x1525659/0x15d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:51.682827+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956738 data_alloc: 218103808 data_used: 176128
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 69951488 unmapped: 17031168 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:52.682943+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 69951488 unmapped: 17031168 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:53.683061+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 69951488 unmapped: 17031168 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:54.683213+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 119 heartbeat osd_stat(store_statfs(0x4fb64b000/0x0/0x4ffc00000, data 0x1525659/0x15d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 69951488 unmapped: 17031168 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:55.683391+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 69951488 unmapped: 17031168 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:56.683538+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 119 heartbeat osd_stat(store_statfs(0x4fb64b000/0x0/0x4ffc00000, data 0x1525659/0x15d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956738 data_alloc: 218103808 data_used: 176128
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 69951488 unmapped: 17031168 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:57.683707+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 69951488 unmapped: 17031168 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:58.683866+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 69951488 unmapped: 17031168 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 119 heartbeat osd_stat(store_statfs(0x4fb64b000/0x0/0x4ffc00000, data 0x1525659/0x15d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:59.684034+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 119 heartbeat osd_stat(store_statfs(0x4fb64b000/0x0/0x4ffc00000, data 0x1525659/0x15d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 69951488 unmapped: 17031168 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:00.684169+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 119 heartbeat osd_stat(store_statfs(0x4fb64b000/0x0/0x4ffc00000, data 0x1525659/0x15d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 69951488 unmapped: 17031168 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:01.684366+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 119 heartbeat osd_stat(store_statfs(0x4fb64b000/0x0/0x4ffc00000, data 0x1525659/0x15d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956738 data_alloc: 218103808 data_used: 176128
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 119 heartbeat osd_stat(store_statfs(0x4fb64b000/0x0/0x4ffc00000, data 0x1525659/0x15d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 69959680 unmapped: 17022976 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:02.684527+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 119 heartbeat osd_stat(store_statfs(0x4fb64b000/0x0/0x4ffc00000, data 0x1525659/0x15d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 119 heartbeat osd_stat(store_statfs(0x4fb64b000/0x0/0x4ffc00000, data 0x1525659/0x15d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 69959680 unmapped: 17022976 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:03.684707+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x56328ea89400
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 27.131505966s of 27.142993927s, submitted: 13
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 119 handle_osd_map epochs [119,120], i have 119, src has [1,120]
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 120 ms_handle_reset con 0x56328ea89400 session 0x56328ec57680
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 69984256 unmapped: 16998400 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:04.684862+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x56328e23cc00
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _renew_subs
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 120 handle_osd_map epochs [121,121], i have 120, src has [1,121]
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 70098944 unmapped: 16883712 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 121 ms_handle_reset con 0x56328e23cc00 session 0x5632912e41e0
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:05.685033+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x56328f51e400
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 121 handle_osd_map epochs [121,122], i have 121, src has [1,122]
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 71155712 unmapped: 15826944 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 122 ms_handle_reset con 0x56328f51e400 session 0x5632912e5a40
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:06.685204+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977948 data_alloc: 218103808 data_used: 184320
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x56328f51f000
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 122 handle_osd_map epochs [122,123], i have 122, src has [1,123]
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 71204864 unmapped: 15777792 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 123 ms_handle_reset con 0x56328f51f000 session 0x5632912e5e00
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:07.685414+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 123 heartbeat osd_stat(store_statfs(0x4fb639000/0x0/0x4ffc00000, data 0x152ccd4/0x15e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x56328f807800
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _renew_subs
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 123 handle_osd_map epochs [124,124], i have 123, src has [1,124]
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 71213056 unmapped: 15769600 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 124 ms_handle_reset con 0x56328f807800 session 0x56328f2a14a0
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:08.685585+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x56328ea89000
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 71213056 unmapped: 15769600 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _renew_subs
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 124 handle_osd_map epochs [125,125], i have 124, src has [1,125]
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 125 ms_handle_reset con 0x56328ea89000 session 0x56329129f4a0
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:09.685804+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 71049216 unmapped: 15933440 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:10.685957+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 71049216 unmapped: 15933440 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:11.686285+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993088 data_alloc: 218103808 data_used: 188416
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fb62f000/0x0/0x4ffc00000, data 0x153098d/0x15ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 71049216 unmapped: 15933440 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:12.686487+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x56328e23cc00
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x56328f51e400
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 71057408 unmapped: 15925248 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:13.686732+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 125 handle_osd_map epochs [125,126], i have 125, src has [1,126]
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 126 ms_handle_reset con 0x56328e23cc00 session 0x56328ec5c960
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 126 ms_handle_reset con 0x56328f51e400 session 0x56328eb285a0
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 126 heartbeat osd_stat(store_statfs(0x4fb62d000/0x0/0x4ffc00000, data 0x153253b/0x15f0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 71090176 unmapped: 15892480 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x56328f51f000
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:14.686872+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.163709641s of 10.455937386s, submitted: 71
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _renew_subs
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 126 handle_osd_map epochs [127,127], i have 126, src has [1,127]
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fb630000/0x0/0x4ffc00000, data 0x1531f9f/0x15ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 127 ms_handle_reset con 0x56328f51f000 session 0x56328f8c83c0
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 71114752 unmapped: 15867904 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x56328f807800
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:15.687057+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x563290946000
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fb630000/0x0/0x4ffc00000, data 0x1531f9f/0x15ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 127 handle_osd_map epochs [127,128], i have 127, src has [1,128]
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 128 ms_handle_reset con 0x56328f807800 session 0x56328e6cd4a0
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 128 ms_handle_reset con 0x563290946000 session 0x5632912e5a40
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 71163904 unmapped: 15818752 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:16.687284+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x56328e23cc00
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x56328f51f000
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006227 data_alloc: 218103808 data_used: 212992
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 128 handle_osd_map epochs [129,129], i have 128, src has [1,129]
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x56328f807800
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 129 ms_handle_reset con 0x56328f807800 session 0x56328fad45a0
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 129 ms_handle_reset con 0x56328e23cc00 session 0x56328ea812c0
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 71163904 unmapped: 24215552 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:17.687448+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x56328f327000
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _renew_subs
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 129 handle_osd_map epochs [130,130], i have 129, src has [1,130]
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x5632912d1400
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 21643264 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 130 ms_handle_reset con 0x56328f327000 session 0x56328eb29a40
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:18.687619+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x5632912d0000
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _renew_subs
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 130 handle_osd_map epochs [131,131], i have 130, src has [1,131]
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 131 ms_handle_reset con 0x5632912d1400 session 0x56328f9b0780
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 131 ms_handle_reset con 0x56328f51f000 session 0x56328f2a0d20
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x563291393c00
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 73932800 unmapped: 21446656 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 131 ms_handle_reset con 0x563291393c00 session 0x56328f579a40
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 131 ms_handle_reset con 0x5632912d0000 session 0x56328ea7b4a0
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x563291393800
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 131 ms_handle_reset con 0x563291393800 session 0x5632912e5a40
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:19.687786+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 131 heartbeat osd_stat(store_statfs(0x4f8e23000/0x0/0x4ffc00000, data 0x3d3a673/0x3dfa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x56328e23cc00
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _renew_subs
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 131 handle_osd_map epochs [132,132], i have 131, src has [1,132]
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x56328f327000
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 73973760 unmapped: 21405696 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:20.687954+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _renew_subs
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 132 handle_osd_map epochs [133,133], i have 132, src has [1,133]
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 133 ms_handle_reset con 0x56328f327000 session 0x5632908f01e0
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 20455424 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 133 ms_handle_reset con 0x56328e23cc00 session 0x5632913a8000
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:21.688115+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x56328f327000
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1023189 data_alloc: 218103808 data_used: 233472
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x5632912d0000
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 133 handle_osd_map epochs [133,134], i have 133, src has [1,134]
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _renew_subs
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 134 handle_osd_map epochs [134,134], i have 134, src has [1,134]
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 134 ms_handle_reset con 0x56328f327000 session 0x56328ec650e0
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 74989568 unmapped: 20389888 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:22.698765+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _renew_subs
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 20381696 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 135 ms_handle_reset con 0x5632912d0000 session 0x5632908f01e0
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:23.698932+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 20381696 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:24.699088+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 135 handle_osd_map epochs [135,136], i have 135, src has [1,136]
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.757702827s of 10.382859230s, submitted: 305
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 75005952 unmapped: 20373504 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:25.699293+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb618000/0x0/0x4ffc00000, data 0x1542b8e/0x1605000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x563291393800
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _renew_subs
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 75079680 unmapped: 20299776 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 137 ms_handle_reset con 0x563291393800 session 0x56328e72ba40
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:26.699568+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1043277 data_alloc: 218103808 data_used: 262144
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 75087872 unmapped: 20291584 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:27.699788+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 75087872 unmapped: 20291584 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:28.700030+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fb611000/0x0/0x4ffc00000, data 0x1544bad/0x160b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x563291393c00
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 75120640 unmapped: 20258816 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:29.700227+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _renew_subs
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 137 handle_osd_map epochs [138,138], i have 137, src has [1,138]
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 138 ms_handle_reset con 0x563291393c00 session 0x563290a54d20
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb60e000/0x0/0x4ffc00000, data 0x1546796/0x160f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 75137024 unmapped: 20242432 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x56328f807800
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:30.700368+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _renew_subs
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 140 ms_handle_reset con 0x56328f807800 session 0x56328f5785a0
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 75128832 unmapped: 20250624 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x56328f327000
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 140 ms_handle_reset con 0x56328f327000 session 0x56328f9b0b40
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:31.700497+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x5632912d0000
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x563291393800
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053869 data_alloc: 218103808 data_used: 290816
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 140 handle_osd_map epochs [140,141], i have 140, src has [1,141]
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 75145216 unmapped: 20234240 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:32.700646+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fb609000/0x0/0x4ffc00000, data 0x1549e3a/0x1614000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _renew_subs
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 141 handle_osd_map epochs [142,142], i have 141, src has [1,142]
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 142 ms_handle_reset con 0x563291393800 session 0x5632912e5a40
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 142 ms_handle_reset con 0x5632912d0000 session 0x56328f8c9c20
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x563291393c00
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 142 ms_handle_reset con 0x563291393c00 session 0x5632908e8d20
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 19136512 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:33.700796+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fb601000/0x0/0x4ffc00000, data 0x154d5c0/0x161a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 19095552 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:34.700987+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 19095552 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:35.701170+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 19095552 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:36.701368+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1062658 data_alloc: 218103808 data_used: 290816
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 19095552 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:37.701542+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fb601000/0x0/0x4ffc00000, data 0x154d5c0/0x161a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 19095552 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:38.701662+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 19095552 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:39.701790+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fb601000/0x0/0x4ffc00000, data 0x154d5c0/0x161a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 142 handle_osd_map epochs [143,143], i have 142, src has [1,143]
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.699498177s of 15.214076996s, submitted: 135
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 19079168 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:40.701951+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fb600000/0x0/0x4ffc00000, data 0x154f05b/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 19079168 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:41.702125+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1064288 data_alloc: 218103808 data_used: 290816
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 19079168 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:42.702271+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x5632909ea400
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 19079168 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:43.702401+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 143 handle_osd_map epochs [143,144], i have 143, src has [1,144]
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _renew_subs
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 143 handle_osd_map epochs [144,144], i have 144, src has [1,144]
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fb600000/0x0/0x4ffc00000, data 0x154f05b/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76308480 unmapped: 19070976 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:44.702564+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _renew_subs
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 144 handle_osd_map epochs [145,145], i have 144, src has [1,145]
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 145 ms_handle_reset con 0x5632909ea400 session 0x56328f8a5a40
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x56328f327000
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76316672 unmapped: 19062784 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x5632912d0000
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:45.702721+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76316672 unmapped: 19062784 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:46.702852+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074589 data_alloc: 218103808 data_used: 294912
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fb5f8000/0x0/0x4ffc00000, data 0x15527bc/0x1624000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76316672 unmapped: 19062784 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:47.702968+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fb5f8000/0x0/0x4ffc00000, data 0x15527bc/0x1624000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76316672 unmapped: 19062784 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:48.703156+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x563291393800
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 19152896 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:49.703317+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 145 handle_osd_map epochs [145,146], i have 145, src has [1,146]
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x563291393c00
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 146 ms_handle_reset con 0x563291393800 session 0x5632908a2f00
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 19144704 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:50.703490+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 146 handle_osd_map epochs [146,147], i have 146, src has [1,147]
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.668182373s of 10.810811996s, submitted: 66
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 147 ms_handle_reset con 0x563291393c00 session 0x5632908c8f00
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 19111936 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:51.703604+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1086092 data_alloc: 218103808 data_used: 303104
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x5632912ae000
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 147 ms_handle_reset con 0x5632912ae000 session 0x56328ec57e00
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77373440 unmapped: 18006016 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:52.703752+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x5632912ae400
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 147 heartbeat osd_stat(store_statfs(0x4fb5f1000/0x0/0x4ffc00000, data 0x1555f3a/0x162d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 147 ms_handle_reset con 0x5632912ae400 session 0x5632913a8d20
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x5632912ae800
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76316672 unmapped: 19062784 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:53.703897+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 147 handle_osd_map epochs [147,148], i have 147, src has [1,148]
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 148 ms_handle_reset con 0x5632912ae800 session 0x5632913a83c0
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76333056 unmapped: 19046400 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:54.704033+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x5632912ae000
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76333056 unmapped: 19046400 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:55.704216+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fb5ef000/0x0/0x4ffc00000, data 0x1557aed/0x162f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _renew_subs
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 148 handle_osd_map epochs [149,149], i have 148, src has [1,149]
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 149 ms_handle_reset con 0x5632912ae000 session 0x563290a545a0
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 19038208 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:56.704419+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x5632912ae400
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1091315 data_alloc: 218103808 data_used: 323584
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 19038208 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:57.704694+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 149 ms_handle_reset con 0x5632912ae400 session 0x56328f2a0d20
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 19038208 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:58.704861+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 149 handle_osd_map epochs [150,150], i have 149, src has [1,150]
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 19038208 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:59.705008+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 150 handle_osd_map epochs [150,151], i have 150, src has [1,151]
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fb5e8000/0x0/0x4ffc00000, data 0x155b270/0x1634000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76349440 unmapped: 19030016 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:00.705476+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 19005440 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:01.705589+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x563291393800
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 151 ms_handle_reset con 0x563291393800 session 0x56329124b4a0
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x563291393c00
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 151 ms_handle_reset con 0x563291393c00 session 0x56329124ab40
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x5632912afc00
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 151 ms_handle_reset con 0x5632912afc00 session 0x56328f9b10e0
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097759 data_alloc: 218103808 data_used: 323584
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x5632912afc00
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 151 ms_handle_reset con 0x5632912afc00 session 0x56328ea7b4a0
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x5632912ae000
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x5632912ae400
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.902799606s of 11.212671280s, submitted: 101
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 151 ms_handle_reset con 0x5632912ae400 session 0x56328eb294a0
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 151 ms_handle_reset con 0x5632912ae000 session 0x56328ec5cf00
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x5632912d0400
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 151 ms_handle_reset con 0x5632912d0400 session 0x56328ea7da40
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x563291393400
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 151 ms_handle_reset con 0x563291393400 session 0x5632913ac960
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x5632912d0400
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 151 ms_handle_reset con 0x5632912d0400 session 0x5632913ad4a0
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x5632912d1400
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 151 ms_handle_reset con 0x5632912d1400 session 0x5632913ac3c0
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76390400 unmapped: 18989056 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:02.705773+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:03.705918+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76390400 unmapped: 18989056 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x5632912d0c00
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 151 ms_handle_reset con 0x5632912d0c00 session 0x5632913ac1e0
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:04.706056+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76423168 unmapped: 18956288 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x563291393400
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x563291393800
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fb5e5000/0x0/0x4ffc00000, data 0x155cd5c/0x1639000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:05.706313+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 18948096 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:06.706548+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 18948096 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1101583 data_alloc: 218103808 data_used: 335872
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:07.706656+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 18948096 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x563291393c00
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 151 ms_handle_reset con 0x563291393c00 session 0x5632912e5a40
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x56329054ec00
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 151 handle_osd_map epochs [151,152], i have 151, src has [1,152]
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 152 ms_handle_reset con 0x56329054ec00 session 0x56328f5785a0
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x5632912d0400
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x5632912d0c00
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 152 ms_handle_reset con 0x5632912d0c00 session 0x56328f8c83c0
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 152 ms_handle_reset con 0x5632912d0400 session 0x5632908f1680
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:08.706757+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x5632912d1400
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 18948096 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 152 ms_handle_reset con 0x5632912d1400 session 0x56328ec650e0
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x563291393c00
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _renew_subs
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 152 handle_osd_map epochs [153,153], i have 152, src has [1,153]
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 153 ms_handle_reset con 0x563291393c00 session 0x5632908f01e0
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:09.706892+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76439552 unmapped: 18939904 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 153 handle_osd_map epochs [153,154], i have 153, src has [1,154]
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 154 heartbeat osd_stat(store_statfs(0x4fb5dc000/0x0/0x4ffc00000, data 0x15608aa/0x1640000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:10.707029+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76472320 unmapped: 18907136 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:11.707160+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76472320 unmapped: 18907136 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115013 data_alloc: 218103808 data_used: 348160
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 154 heartbeat osd_stat(store_statfs(0x4fb5d8000/0x0/0x4ffc00000, data 0x1562443/0x1643000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x56329054e000
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:12.707320+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 18890752 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.489326477s of 10.598018646s, submitted: 33
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 154 ms_handle_reset con 0x56329054e000 session 0x5632913ad4a0
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x56329054e000
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:13.707560+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 18866176 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _renew_subs
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 154 handle_osd_map epochs [155,155], i have 154, src has [1,155]
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 155 ms_handle_reset con 0x56329054e000 session 0x56328ec5cf00
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:14.707685+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 18857984 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 155 ms_handle_reset con 0x563291393400 session 0x56328f8c9e00
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 155 ms_handle_reset con 0x563291393800 session 0x5632906bfc20
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x5632912d0400
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _renew_subs
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 155 handle_osd_map epochs [156,156], i have 155, src has [1,156]
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:15.707924+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 18857984 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _renew_subs
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 156 handle_osd_map epochs [157,157], i have 156, src has [1,157]
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 157 heartbeat osd_stat(store_statfs(0x4fb5d6000/0x0/0x4ffc00000, data 0x1565691/0x1646000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [1])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 157 ms_handle_reset con 0x5632912d0400 session 0x56328f2a0d20
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:16.708454+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 18825216 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1118681 data_alloc: 218103808 data_used: 344064
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:17.708610+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 18825216 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:18.709321+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 18825216 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:19.709863+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 18825216 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 157 handle_osd_map epochs [157,158], i have 157, src has [1,158]
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 158 ms_handle_reset con 0x56328f327000 session 0x56328ec5cb40
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 158 ms_handle_reset con 0x5632912d0000 session 0x56328f9b0780
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 158 heartbeat osd_stat(store_statfs(0x4fb5d5000/0x0/0x4ffc00000, data 0x1567220/0x1648000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x56328f327000
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:20.710008+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 158 ms_handle_reset con 0x56328f327000 session 0x56328f8c7a40
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76587008 unmapped: 18792448 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:21.710520+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76587008 unmapped: 18792448 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1120799 data_alloc: 218103808 data_used: 344064
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x56329054e000
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 158 ms_handle_reset con 0x56329054e000 session 0x56329129e1e0
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: handle_auth_request added challenge on 0x5632912d0400
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:22.710687+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 18767872 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:23.711298+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _renew_subs
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 158 handle_osd_map epochs [159,159], i have 158, src has [1,159]
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 18767872 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.580225945s of 10.899756432s, submitted: 155
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 159 ms_handle_reset con 0x5632912d0400 session 0x56329129e3c0
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:24.711611+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 18759680 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:25.711865+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 159 heartbeat osd_stat(store_statfs(0x4fb5d0000/0x0/0x4ffc00000, data 0x156a831/0x164d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 18759680 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:26.712066+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 18759680 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1122643 data_alloc: 218103808 data_used: 344064
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:27.712257+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 18759680 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:28.712613+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 18759680 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:29.712905+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 18759680 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:30.713060+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 18759680 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:31.713302+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 159 heartbeat osd_stat(store_statfs(0x4fb5d0000/0x0/0x4ffc00000, data 0x156a831/0x164d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 159 handle_osd_map epochs [160,160], i have 159, src has [1,160]
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125441 data_alloc: 218103808 data_used: 344064
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:32.713512+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:33.713703+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:34.713982+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:35.714262+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:36.714500+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125441 data_alloc: 218103808 data_used: 344064
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:37.714729+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:38.714923+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:39.715116+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:40.715261+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:41.715377+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125441 data_alloc: 218103808 data_used: 344064
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:42.715517+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:43.715651+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:44.715797+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:45.715996+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:46.716215+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125441 data_alloc: 218103808 data_used: 344064
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:47.716386+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:48.716534+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:49.716649+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:50.716804+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:51.716965+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125441 data_alloc: 218103808 data_used: 344064
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:52.717118+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:53.717263+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:54.717412+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:55.717605+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:56.717787+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125441 data_alloc: 218103808 data_used: 344064
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:57.717901+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:58.718064+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:59.718204+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:00.718403+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:01.718563+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125441 data_alloc: 218103808 data_used: 344064
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:02.718702+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:03.718844+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:04.719022+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:05.719217+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:06.719392+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125441 data_alloc: 218103808 data_used: 344064
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:07.719546+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:08.719768+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:09.719949+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:10.720123+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:11.720287+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125441 data_alloc: 218103808 data_used: 344064
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:12.720465+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:13.720612+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:14.720800+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:15.720997+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:16.721164+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125441 data_alloc: 218103808 data_used: 344064
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:17.721381+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:18.721559+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:19.721733+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:20.721960+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:21.722436+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125441 data_alloc: 218103808 data_used: 344064
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:22.722682+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:23.722996+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:24.723189+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:25.723462+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 17702912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:26.723674+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77684736 unmapped: 17694720 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125441 data_alloc: 218103808 data_used: 344064
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:27.723851+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77684736 unmapped: 17694720 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:28.724028+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77684736 unmapped: 17694720 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:29.724249+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77684736 unmapped: 17694720 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:30.724422+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77684736 unmapped: 17694720 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:31.724563+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77684736 unmapped: 17694720 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125441 data_alloc: 218103808 data_used: 344064
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:32.724735+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77684736 unmapped: 17694720 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:33.724876+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77684736 unmapped: 17694720 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:34.725015+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: do_command 'config diff' '{prefix=config diff}'
Oct 11 05:00:50 compute-0 ceph-osd[89565]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 11 05:00:50 compute-0 ceph-osd[89565]: do_command 'config show' '{prefix=config show}'
Oct 11 05:00:50 compute-0 ceph-osd[89565]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 77930496 unmapped: 17448960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: do_command 'counter dump' '{prefix=counter dump}'
Oct 11 05:00:50 compute-0 ceph-osd[89565]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 11 05:00:50 compute-0 ceph-osd[89565]: do_command 'counter schema' '{prefix=counter schema}'
Oct 11 05:00:50 compute-0 ceph-osd[89565]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:35.725160+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78438400 unmapped: 16941056 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:36.725358+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78479360 unmapped: 16900096 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: do_command 'log dump' '{prefix=log dump}'
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125441 data_alloc: 218103808 data_used: 344064
Oct 11 05:00:50 compute-0 ceph-osd[89565]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:37.725473+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: do_command 'perf dump' '{prefix=perf dump}'
Oct 11 05:00:50 compute-0 ceph-osd[89565]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 16826368 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Oct 11 05:00:50 compute-0 ceph-osd[89565]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Oct 11 05:00:50 compute-0 ceph-osd[89565]: do_command 'perf schema' '{prefix=perf schema}'
Oct 11 05:00:50 compute-0 ceph-osd[89565]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:38.725591+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78520320 unmapped: 16859136 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:39.725704+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78520320 unmapped: 16859136 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:40.725815+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78520320 unmapped: 16859136 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:41.725995+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78520320 unmapped: 16859136 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125441 data_alloc: 218103808 data_used: 344064
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:42.726135+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78520320 unmapped: 16859136 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:43.726260+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78520320 unmapped: 16859136 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:44.726384+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78520320 unmapped: 16859136 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:45.726514+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78520320 unmapped: 16859136 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:46.727385+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78520320 unmapped: 16859136 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125441 data_alloc: 218103808 data_used: 344064
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:47.727577+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78520320 unmapped: 16859136 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:48.727755+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78520320 unmapped: 16859136 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:49.727877+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78520320 unmapped: 16859136 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:50.728003+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78520320 unmapped: 16859136 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:51.728123+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78520320 unmapped: 16859136 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125441 data_alloc: 218103808 data_used: 344064
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:52.728248+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78520320 unmapped: 16859136 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:53.728408+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78520320 unmapped: 16859136 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:54.728526+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78520320 unmapped: 16859136 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:55.728644+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78520320 unmapped: 16859136 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:56.728772+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78520320 unmapped: 16859136 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125441 data_alloc: 218103808 data_used: 344064
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:57.728917+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78520320 unmapped: 16859136 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:58.729030+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78520320 unmapped: 16859136 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:59.729172+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78520320 unmapped: 16859136 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:00.729286+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78520320 unmapped: 16859136 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:01.729499+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78520320 unmapped: 16859136 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125441 data_alloc: 218103808 data_used: 344064
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:02.729635+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78520320 unmapped: 16859136 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:03.729756+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78520320 unmapped: 16859136 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:04.729879+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78520320 unmapped: 16859136 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:05.730056+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78520320 unmapped: 16859136 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:06.730164+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78520320 unmapped: 16859136 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125441 data_alloc: 218103808 data_used: 344064
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:07.731017+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78520320 unmapped: 16859136 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:08.731157+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78520320 unmapped: 16859136 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:09.731412+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78520320 unmapped: 16859136 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:10.731562+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78520320 unmapped: 16859136 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:11.731751+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78520320 unmapped: 16859136 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125441 data_alloc: 218103808 data_used: 344064
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:12.731909+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78520320 unmapped: 16859136 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:13.732027+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78520320 unmapped: 16859136 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:14.732181+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78520320 unmapped: 16859136 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:15.732432+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78520320 unmapped: 16859136 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:16.732558+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78520320 unmapped: 16859136 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125441 data_alloc: 218103808 data_used: 344064
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:17.732696+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78520320 unmapped: 16859136 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:18.732828+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78520320 unmapped: 16859136 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:19.732942+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78520320 unmapped: 16859136 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:20.733070+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78520320 unmapped: 16859136 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:21.733210+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78520320 unmapped: 16859136 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125441 data_alloc: 218103808 data_used: 344064
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:22.733358+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78520320 unmapped: 16859136 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:23.733485+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 16867328 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:24.733618+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 16867328 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:25.733777+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 16867328 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:26.733956+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 16867328 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125441 data_alloc: 218103808 data_used: 344064
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:27.734153+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 16867328 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:28.734365+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 16867328 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:29.734540+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 16867328 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:30.734819+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 16867328 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:31.735136+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 16867328 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125441 data_alloc: 218103808 data_used: 344064
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:32.735352+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 16867328 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:33.735547+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 16867328 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:34.735793+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 16867328 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:35.736108+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 16867328 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:36.736471+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 16867328 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125441 data_alloc: 218103808 data_used: 344064
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:37.736728+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 16867328 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:38.736982+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 16867328 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:39.737286+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 16867328 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:40.737589+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 16867328 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:41.737905+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 16867328 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:42.738227+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125441 data_alloc: 218103808 data_used: 344064
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 16867328 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:43.738596+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 16867328 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:44.738838+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 16867328 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:45.739150+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 16867328 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:46.739437+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 16867328 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:47.739679+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125441 data_alloc: 218103808 data_used: 344064
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 16867328 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:48.740069+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 16867328 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:49.740280+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 16867328 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:50.740513+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 16867328 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:51.740751+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 16867328 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:52.740988+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125441 data_alloc: 218103808 data_used: 344064
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 16867328 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:53.741456+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 16867328 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:54.741693+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 16867328 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:55.741971+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 16867328 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:56.742161+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 16867328 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:57.742437+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125441 data_alloc: 218103808 data_used: 344064
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 16867328 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:58.742685+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 16867328 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:59.742949+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 16867328 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:00.743174+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 16867328 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:01.743430+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 16867328 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:02.743663+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125441 data_alloc: 218103808 data_used: 344064
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 16867328 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:03.743907+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 16867328 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:04.746460+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 16867328 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:05.746737+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 16867328 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:06.747013+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 16867328 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:07.747237+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125441 data_alloc: 218103808 data_used: 344064
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 16867328 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:08.747505+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 16867328 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:09.747806+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 16867328 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:10.747996+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 16867328 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:11.748186+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 16867328 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:12.748462+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125441 data_alloc: 218103808 data_used: 344064
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 16867328 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:13.748729+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 16867328 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:14.748946+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 16867328 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:15.749275+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 16867328 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:16.749520+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 16867328 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:17.749820+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125441 data_alloc: 218103808 data_used: 344064
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 16867328 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:18.750131+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 16867328 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:19.750399+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 16867328 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:20.750631+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 16867328 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:21.750893+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 16867328 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:22.751184+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125441 data_alloc: 218103808 data_used: 344064
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78528512 unmapped: 16850944 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:23.751465+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78536704 unmapped: 16842752 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:24.751745+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78536704 unmapped: 16842752 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:25.752034+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78536704 unmapped: 16842752 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:26.752288+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78536704 unmapped: 16842752 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:27.752604+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125441 data_alloc: 218103808 data_used: 344064
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78536704 unmapped: 16842752 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:28.752900+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78536704 unmapped: 16842752 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:29.753133+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78536704 unmapped: 16842752 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:30.753326+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 7393 writes, 28K keys, 7393 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 7393 writes, 1657 syncs, 4.46 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1837 writes, 4854 keys, 1837 commit groups, 1.0 writes per commit group, ingest: 2.52 MB, 0.00 MB/s
                                           Interval WAL: 1837 writes, 802 syncs, 2.29 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78536704 unmapped: 16842752 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:31.753549+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78536704 unmapped: 16842752 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:32.753736+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125441 data_alloc: 218103808 data_used: 344064
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78536704 unmapped: 16842752 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:33.753915+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78536704 unmapped: 16842752 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:34.754087+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78536704 unmapped: 16842752 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:35.754292+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78536704 unmapped: 16842752 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:36.754482+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78536704 unmapped: 16842752 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:37.754643+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125441 data_alloc: 218103808 data_used: 344064
Oct 11 05:00:50 compute-0 ceph-osd[89565]: mgrc ms_handle_reset ms_handle_reset con 0x56328dff7c00
Oct 11 05:00:50 compute-0 ceph-osd[89565]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1439243141
Oct 11 05:00:50 compute-0 ceph-osd[89565]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1439243141,v1:192.168.122.100:6801/1439243141]
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: get_auth_request con 0x56329054ec00 auth_method 0
Oct 11 05:00:50 compute-0 ceph-osd[89565]: mgrc handle_mgr_configure stats_period=5
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 16687104 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:38.754845+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 16687104 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:39.754958+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 16687104 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:40.755091+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 16687104 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:41.755268+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 16687104 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:42.755430+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125441 data_alloc: 218103808 data_used: 344064
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 16687104 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:43.755577+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 16687104 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:44.755898+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 16687104 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:45.756150+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 16687104 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:46.756280+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 16687104 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:47.756478+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125441 data_alloc: 218103808 data_used: 344064
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 16687104 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:48.756651+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 16687104 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:49.756799+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 16687104 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:50.756946+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 16687104 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:51.757118+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 16687104 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:52.757314+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125441 data_alloc: 218103808 data_used: 344064
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 16687104 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:53.757552+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 16687104 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:54.757767+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 16687104 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:55.757963+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 16687104 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:56.758114+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 16687104 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:57.758281+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125441 data_alloc: 218103808 data_used: 344064
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 16687104 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:58.758451+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 16687104 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:59.758630+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 16687104 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:00.758801+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 16687104 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:01.758959+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 16687104 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:02.759076+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125441 data_alloc: 218103808 data_used: 344064
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 16687104 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:03.759223+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 16687104 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:04.759449+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 16687104 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:05.759652+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 16687104 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:06.759795+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 16687104 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:07.759950+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125441 data_alloc: 218103808 data_used: 344064
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 16687104 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:08.760124+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 16687104 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:09.760293+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 16687104 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:10.760500+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 16687104 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:11.760735+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 16687104 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:12.760954+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125441 data_alloc: 218103808 data_used: 344064
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 16687104 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:13.761149+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 16687104 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:14.761439+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 16687104 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:15.761711+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 16687104 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:16.761990+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 16687104 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:17.762188+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125441 data_alloc: 218103808 data_used: 344064
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 16687104 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:18.762492+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 16687104 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:19.762756+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 16687104 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:20.762990+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:21.763215+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 16687104 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:22.763403+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 16678912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125441 data_alloc: 218103808 data_used: 344064
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:23.763617+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 16678912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb5cd000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:24.763837+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 16678912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:25.764182+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 16678912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 242.459930420s of 242.552703857s, submitted: 34
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:26.764439+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78708736 unmapped: 16670720 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:27.764701+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78749696 unmapped: 16629760 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1124561 data_alloc: 218103808 data_used: 344064
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:28.764898+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 16596992 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb1be000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:29.765101+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 16596992 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb1be000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:30.765380+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 16596992 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:31.765696+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 16596992 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb1be000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:32.765960+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 16596992 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1124561 data_alloc: 218103808 data_used: 344064
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:33.766196+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 16596992 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:34.766442+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 16596992 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:35.766748+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 16596992 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:36.766935+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 16596992 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb1be000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:37.767135+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 16596992 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb1be000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1124561 data_alloc: 218103808 data_used: 344064
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:38.767438+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 16596992 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:39.767711+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 16596992 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:40.768035+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 16596992 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb1be000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:41.768294+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 16596992 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:42.768592+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 16596992 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1124561 data_alloc: 218103808 data_used: 344064
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:43.768876+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 16596992 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:44.769277+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 16596992 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb1be000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:45.769557+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 16596992 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:46.769843+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 16596992 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:47.770084+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 16596992 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1124561 data_alloc: 218103808 data_used: 344064
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb1be000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:48.770424+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 16596992 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:49.770698+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 16596992 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:50.770952+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 16596992 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:51.771210+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 16596992 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb1be000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:52.771491+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 16596992 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1124561 data_alloc: 218103808 data_used: 344064
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb1be000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:53.771871+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 16596992 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:54.772093+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 16596992 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb1be000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:55.772489+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 16596992 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb1be000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:56.772748+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 16596992 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb1be000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:57.773016+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 16596992 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1124561 data_alloc: 218103808 data_used: 344064
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:58.773305+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 16596992 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb1be000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:59.773671+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb1be000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 16596992 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:00.773940+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 16596992 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:01.774187+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 16596992 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:02.774509+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 16596992 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1124561 data_alloc: 218103808 data_used: 344064
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:03.774691+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 16596992 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb1be000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:04.774867+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 16596992 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:05.775853+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 16596992 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:06.777912+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb1be000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 16596992 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:07.778610+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb1be000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 16596992 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1124561 data_alloc: 218103808 data_used: 344064
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:08.779190+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 16596992 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb1be000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:09.779907+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 16596992 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:10.781181+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 16596992 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:11.781716+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 16596992 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb1be000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:12.782595+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 16596992 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1124561 data_alloc: 218103808 data_used: 344064
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:13.783167+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 16596992 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:14.783444+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 16596992 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:15.784053+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 16596992 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:16.784174+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 16596992 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:17.784406+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 16596992 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1124561 data_alloc: 218103808 data_used: 344064
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb1be000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:18.784547+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 16596992 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:19.784947+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 16596992 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:20.785093+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 16596992 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:21.785283+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 16596992 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:22.785419+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 16596992 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1124561 data_alloc: 218103808 data_used: 344064
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:23.785565+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 16596992 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb1be000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:24.785686+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 16596992 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:25.786146+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 16596992 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:26.786392+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 16596992 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:27.786821+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 16596992 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1124561 data_alloc: 218103808 data_used: 344064
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:28.786983+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 16596992 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:29.787269+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 16596992 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb1be000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb1be000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:30.787426+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 16596992 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:31.787589+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 16596992 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:32.787774+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb1be000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 16596992 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1124561 data_alloc: 218103808 data_used: 344064
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:33.788083+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 16596992 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:34.788256+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 16596992 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:35.788432+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 16596992 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:36.788597+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 16596992 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:37.788735+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 16596992 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb1be000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1124561 data_alloc: 218103808 data_used: 344064
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:38.788862+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 16596992 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:39.789036+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 16596992 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:40.789141+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 16596992 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:41.789300+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 16596992 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:42.789504+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb1be000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 16596992 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1124561 data_alloc: 218103808 data_used: 344064
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:43.789624+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 16596992 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:44.789763+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 16596992 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:45.789947+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 16596992 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb1be000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:46.790076+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 16596992 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:47.790234+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 16596992 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1124561 data_alloc: 218103808 data_used: 344064
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:48.790422+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 16596992 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:49.790529+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 16596992 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:50.790702+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 16596992 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:51.790894+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 16596992 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb1be000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:52.791066+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 16596992 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1124561 data_alloc: 218103808 data_used: 344064
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:53.791222+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 16596992 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:54.791403+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 16596992 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:55.791578+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 16596992 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb1be000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:56.791753+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 16596992 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:57.791920+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 16596992 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1124561 data_alloc: 218103808 data_used: 344064
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:58.792097+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 16596992 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:59.792227+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 16596992 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:00.792396+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 16596992 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:01.792536+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 16596992 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb1be000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:02.792714+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 16596992 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1124561 data_alloc: 218103808 data_used: 344064
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:03.792859+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 16596992 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:04.793093+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 16596992 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:05.793537+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 16596992 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:06.793708+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb1be000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 16678912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:07.793890+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 16678912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1124561 data_alloc: 218103808 data_used: 344064
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:08.794094+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 16678912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:09.795454+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 16678912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb1be000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:10.795939+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 16678912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:11.796888+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 16678912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb1be000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:12.797206+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb1be000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 16678912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1124561 data_alloc: 218103808 data_used: 344064
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:13.797923+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 16678912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb1be000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:14.798442+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 16678912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:15.798814+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb1be000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 16678912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:16.798998+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78708736 unmapped: 16670720 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:17.799230+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78708736 unmapped: 16670720 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1124561 data_alloc: 218103808 data_used: 344064
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:18.799559+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb1be000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78708736 unmapped: 16670720 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:19.799994+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78708736 unmapped: 16670720 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:20.800267+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78708736 unmapped: 16670720 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:21.800463+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb1be000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78708736 unmapped: 16670720 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:22.800771+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78708736 unmapped: 16670720 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1124561 data_alloc: 218103808 data_used: 344064
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:23.800982+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78708736 unmapped: 16670720 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:24.801220+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78708736 unmapped: 16670720 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb1be000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:25.801551+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 16662528 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:26.801714+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 16662528 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:27.801897+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 16662528 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1124561 data_alloc: 218103808 data_used: 344064
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:28.802111+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb1be000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 16662528 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:29.802244+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 16662528 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:30.802443+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 16662528 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:31.802574+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb1be000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 16662528 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:32.802748+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 16662528 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1124561 data_alloc: 218103808 data_used: 344064
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:33.802913+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 16662528 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:34.803028+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 16662528 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:35.803315+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 16662528 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb1be000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:36.803520+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 16662528 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb1be000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:37.803691+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 16662528 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1124561 data_alloc: 218103808 data_used: 344064
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:38.803808+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 16662528 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:39.804002+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb1be000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 16662528 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:40.804252+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 16662528 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:41.804485+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 16662528 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:42.804829+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb1be000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 16662528 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1124561 data_alloc: 218103808 data_used: 344064
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:43.805652+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 16662528 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb1be000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:44.806006+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 16662528 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:45.806430+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 16662528 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:46.806616+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 16662528 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:47.806914+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 16662528 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1124561 data_alloc: 218103808 data_used: 344064
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb1be000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:48.807242+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 16662528 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:49.807549+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 16662528 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:50.807813+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 16662528 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:51.808031+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 16662528 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:52.808371+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb1be000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 16662528 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1124561 data_alloc: 218103808 data_used: 344064
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:53.808731+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 16662528 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:54.808988+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 16662528 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:55.809246+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb1be000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 16662528 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:56.809512+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 16662528 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:57.809787+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 16678912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1124561 data_alloc: 218103808 data_used: 344064
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:58.810043+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 16678912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:59.810385+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb1be000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 16678912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T05:00:00.810604+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 16678912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T05:00:01.810781+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb1be000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 16678912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T05:00:02.810902+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 16678912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1124561 data_alloc: 218103808 data_used: 344064
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T05:00:03.811024+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb1be000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 16678912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T05:00:04.811206+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 16678912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T05:00:05.811438+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb1be000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 16678912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T05:00:06.811587+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 16678912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T05:00:07.811747+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 16678912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1124561 data_alloc: 218103808 data_used: 344064
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T05:00:08.811900+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 16678912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T05:00:09.812055+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 16678912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T05:00:10.812180+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb1be000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 16678912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T05:00:11.812322+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 16678912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T05:00:12.812495+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 16678912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1124561 data_alloc: 218103808 data_used: 344064
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T05:00:13.812610+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 16678912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T05:00:14.812728+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 16678912 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T05:00:15.812873+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78708736 unmapped: 16670720 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb1be000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T05:00:16.812972+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: do_command 'config diff' '{prefix=config diff}'
Oct 11 05:00:50 compute-0 ceph-osd[89565]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 11 05:00:50 compute-0 ceph-osd[89565]: do_command 'config show' '{prefix=config show}'
Oct 11 05:00:50 compute-0 ceph-osd[89565]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 11 05:00:50 compute-0 ceph-osd[89565]: do_command 'counter dump' '{prefix=counter dump}'
Oct 11 05:00:50 compute-0 ceph-osd[89565]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 16687104 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: do_command 'counter schema' '{prefix=counter schema}'
Oct 11 05:00:50 compute-0 ceph-osd[89565]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T05:00:17.813091+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 79118336 unmapped: 16261120 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:50 compute-0 ceph-osd[89565]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:50 compute-0 ceph-osd[89565]: bluestore.MempoolThread(0x56328d27db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1124561 data_alloc: 218103808 data_used: 344064
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T05:00:18.813199+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb1be000/0x0/0x4ffc00000, data 0x156c294/0x1650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Oct 11 05:00:50 compute-0 ceph-osd[89565]: prioritycache tune_memory target: 4294967296 mapped: 79110144 unmapped: 16269312 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: tick
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_tickets
Oct 11 05:00:50 compute-0 ceph-osd[89565]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T05:00:19.813362+0000)
Oct 11 05:00:50 compute-0 ceph-osd[89565]: do_command 'log dump' '{prefix=log dump}'
Oct 11 05:00:50 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:00:50 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Oct 11 05:00:50 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2573320132' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 11 05:00:50 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.15097 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 11 05:00:50 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 11 05:00:50 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct 11 05:00:50 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4245633749' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 11 05:00:50 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.15101 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 05:00:50 compute-0 ceph-mon[74243]: from='client.15083 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 05:00:50 compute-0 ceph-mon[74243]: pgmap v1197: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 05:00:50 compute-0 ceph-mon[74243]: from='client.15085 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 05:00:50 compute-0 ceph-mon[74243]: from='client.15089 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 11 05:00:50 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/2573320132' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 11 05:00:50 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/4245633749' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 11 05:00:50 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1198: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 05:00:51 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Oct 11 05:00:51 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/607618997' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 11 05:00:51 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.15105 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 05:00:51 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.15109 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 05:00:51 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Oct 11 05:00:51 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2969252335' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 11 05:00:51 compute-0 ceph-mon[74243]: from='client.15093 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 05:00:51 compute-0 ceph-mon[74243]: from='client.15097 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 11 05:00:51 compute-0 ceph-mon[74243]: from='client.15101 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 05:00:51 compute-0 ceph-mon[74243]: pgmap v1198: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 05:00:51 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/607618997' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 11 05:00:51 compute-0 ceph-mon[74243]: from='client.15105 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 05:00:51 compute-0 ceph-mon[74243]: from='client.15109 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 05:00:51 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/2969252335' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 11 05:00:52 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.15115 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 05:00:52 compute-0 ceph-mgr[74542]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 11 05:00:52 compute-0 ceph-166d0489-2ae7-59eb-961c-c1b5cda4b45a-mgr-compute-0-phooxi[74538]: 2025-10-11T05:00:52.250+0000 7fe5f0a93640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 11 05:00:52 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0) v1
Oct 11 05:00:52 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/835989519' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 11 05:00:52 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Oct 11 05:00:52 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/244563851' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 11 05:00:52 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Oct 11 05:00:52 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2253585315' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 11 05:00:52 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1199: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 05:00:52 compute-0 ceph-mon[74243]: from='client.15115 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 05:00:52 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/835989519' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 11 05:00:52 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/244563851' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 11 05:00:52 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/2253585315' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 11 05:00:53 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Oct 11 05:00:53 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3176688928' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 11 05:00:53 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Oct 11 05:00:53 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1351232100' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 11 05:00:53 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Oct 11 05:00:53 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/49008574' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 11 05:00:53 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Oct 11 05:00:53 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3221167981' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 11 05:00:53 compute-0 crontab[287166]: (root) LIST (root)
Oct 11 05:00:53 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Oct 11 05:00:53 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3911585946' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 11 05:00:53 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Oct 11 05:00:53 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/734166504' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 11 05:00:54 compute-0 ceph-mon[74243]: pgmap v1199: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 05:00:54 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/3176688928' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 11 05:00:54 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/1351232100' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 11 05:00:54 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/49008574' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 11 05:00:54 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/3221167981' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 11 05:00:54 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/3911585946' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 11 05:00:54 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/734166504' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 11 05:00:54 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Oct 11 05:00:54 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3394193173' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 11 05:00:54 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Oct 11 05:00:54 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1856652676' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 11 05:00:54 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0) v1
Oct 11 05:00:54 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3805951971' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:29:50.284651+0000 osd.1 (osd.1) 95 : cluster [DBG] 11.5 scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 95) v1
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:29:50.270570+0000 osd.1 (osd.1) 94 : cluster [DBG] 11.5 scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:29:50.284651+0000 osd.1 (osd.1) 95 : cluster [DBG] 11.5 scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 777061 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75333632 unmapped: 106496 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:21.953000+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 98304 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:22.953416+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 98304 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:23.953838+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 98304 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:24.954205+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75350016 unmapped: 90112 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:25.954503+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 777061 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 81920 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:26.954768+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.888096809s of 10.003040314s, submitted: 8
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75366400 unmapped: 73728 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:27.954935+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 97 sent 95 num 2 unsent 2 sending 2
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:29:57.229918+0000 osd.1 (osd.1) 96 : cluster [DBG] 11.7 scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:29:57.243967+0000 osd.1 (osd.1) 97 : cluster [DBG] 11.7 scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 97) v1
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:29:57.229918+0000 osd.1 (osd.1) 96 : cluster [DBG] 11.7 scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:29:57.243967+0000 osd.1 (osd.1) 97 : cluster [DBG] 11.7 scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75366400 unmapped: 73728 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:28.955216+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 65536 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:29.955399+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 65536 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:30.955593+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 778209 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 65536 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:31.955735+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 57344 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:32.955889+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 57344 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:33.956031+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 11.a scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 11.a scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 49152 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:34.956227+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 99 sent 97 num 2 unsent 2 sending 2
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:30:04.141533+0000 osd.1 (osd.1) 98 : cluster [DBG] 11.a scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:30:04.155744+0000 osd.1 (osd.1) 99 : cluster [DBG] 11.a scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 99) v1
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:30:04.141533+0000 osd.1 (osd.1) 98 : cluster [DBG] 11.a scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:30:04.155744+0000 osd.1 (osd.1) 99 : cluster [DBG] 11.a scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 49152 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:35.956412+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 779357 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 40960 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:36.956554+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 40960 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:37.956711+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 11.c scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.917269707s of 10.924452782s, submitted: 2
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 11.c scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 40960 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:38.956865+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 101 sent 99 num 2 unsent 2 sending 2
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:30:08.168312+0000 osd.1 (osd.1) 100 : cluster [DBG] 11.c scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:30:08.182444+0000 osd.1 (osd.1) 101 : cluster [DBG] 11.c scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 101) v1
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:30:08.168312+0000 osd.1 (osd.1) 100 : cluster [DBG] 11.c scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:30:08.182444+0000 osd.1 (osd.1) 101 : cluster [DBG] 11.c scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 32768 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:39.957082+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 32768 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:40.957217+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 11.13 deep-scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 11.13 deep-scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 781654 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 24576 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:41.957411+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 103 sent 101 num 2 unsent 2 sending 2
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:30:11.225658+0000 osd.1 (osd.1) 102 : cluster [DBG] 11.13 deep-scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:30:11.239831+0000 osd.1 (osd.1) 103 : cluster [DBG] 11.13 deep-scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 103) v1
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:30:11.225658+0000 osd.1 (osd.1) 102 : cluster [DBG] 11.13 deep-scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:30:11.239831+0000 osd.1 (osd.1) 103 : cluster [DBG] 11.13 deep-scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 24576 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:42.957709+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 16384 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:43.957858+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 16384 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:44.958009+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 16384 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:45.958103+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 11.16 scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 11.16 scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 782803 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 0 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:46.958249+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 105 sent 103 num 2 unsent 2 sending 2
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:30:16.239201+0000 osd.1 (osd.1) 104 : cluster [DBG] 11.16 scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:30:16.253303+0000 osd.1 (osd.1) 105 : cluster [DBG] 11.16 scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 105) v1
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:30:16.239201+0000 osd.1 (osd.1) 104 : cluster [DBG] 11.16 scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:30:16.253303+0000 osd.1 (osd.1) 105 : cluster [DBG] 11.16 scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 0 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:47.958392+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.075263977s of 10.095630646s, submitted: 6
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 1040384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:48.958549+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 107 sent 105 num 2 unsent 2 sending 2
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:30:18.263776+0000 osd.1 (osd.1) 106 : cluster [DBG] 11.1d scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:30:18.277845+0000 osd.1 (osd.1) 107 : cluster [DBG] 11.1d scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 107) v1
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:30:18.263776+0000 osd.1 (osd.1) 106 : cluster [DBG] 11.1d scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:30:18.277845+0000 osd.1 (osd.1) 107 : cluster [DBG] 11.1d scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 1040384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:49.958822+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 1032192 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:50.958950+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 785100 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1024000 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:51.959055+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 109 sent 107 num 2 unsent 2 sending 2
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:30:21.285041+0000 osd.1 (osd.1) 108 : cluster [DBG] 5.11 scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:30:21.299161+0000 osd.1 (osd.1) 109 : cluster [DBG] 5.11 scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 109) v1
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:30:21.285041+0000 osd.1 (osd.1) 108 : cluster [DBG] 5.11 scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:30:21.299161+0000 osd.1 (osd.1) 109 : cluster [DBG] 5.11 scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 1015808 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:52.959298+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 1007616 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:53.959378+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 1007616 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:54.959518+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75489280 unmapped: 999424 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:55.959635+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 785100 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75497472 unmapped: 991232 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:56.959790+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75505664 unmapped: 983040 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:57.959915+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75505664 unmapped: 983040 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:58.960064+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75505664 unmapped: 983040 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:59.960252+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 974848 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:00.960424+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.090151787s of 13.102847099s, submitted: 4
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 786248 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 974848 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:01.960544+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 111 sent 109 num 2 unsent 2 sending 2
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:30:31.366679+0000 osd.1 (osd.1) 110 : cluster [DBG] 2.17 scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:30:31.380696+0000 osd.1 (osd.1) 111 : cluster [DBG] 2.17 scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 111) v1
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:30:31.366679+0000 osd.1 (osd.1) 110 : cluster [DBG] 2.17 scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:30:31.380696+0000 osd.1 (osd.1) 111 : cluster [DBG] 2.17 scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 974848 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:02.960705+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 5.13 deep-scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 5.13 deep-scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 966656 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:03.960834+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 113 sent 111 num 2 unsent 2 sending 2
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:30:33.400863+0000 osd.1 (osd.1) 112 : cluster [DBG] 5.13 deep-scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:30:33.414957+0000 osd.1 (osd.1) 113 : cluster [DBG] 5.13 deep-scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 113) v1
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:30:33.400863+0000 osd.1 (osd.1) 112 : cluster [DBG] 5.13 deep-scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:30:33.414957+0000 osd.1 (osd.1) 113 : cluster [DBG] 5.13 deep-scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 2.15 deep-scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 2.15 deep-scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 958464 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:04.961048+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 115 sent 113 num 2 unsent 2 sending 2
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:30:34.427300+0000 osd.1 (osd.1) 114 : cluster [DBG] 2.15 deep-scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:30:34.441389+0000 osd.1 (osd.1) 115 : cluster [DBG] 2.15 deep-scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 115) v1
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:30:34.427300+0000 osd.1 (osd.1) 114 : cluster [DBG] 2.15 deep-scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:30:34.441389+0000 osd.1 (osd.1) 115 : cluster [DBG] 2.15 deep-scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 5.12 deep-scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 5.12 deep-scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75538432 unmapped: 950272 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:05.961259+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 117 sent 115 num 2 unsent 2 sending 2
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:30:35.455580+0000 osd.1 (osd.1) 116 : cluster [DBG] 5.12 deep-scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:30:35.469699+0000 osd.1 (osd.1) 117 : cluster [DBG] 5.12 deep-scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 117) v1
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:30:35.455580+0000 osd.1 (osd.1) 116 : cluster [DBG] 5.12 deep-scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:30:35.469699+0000 osd.1 (osd.1) 117 : cluster [DBG] 5.12 deep-scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 789692 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75538432 unmapped: 950272 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:06.961514+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 942080 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:07.961653+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 942080 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:08.961772+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 942080 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:09.961965+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 933888 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:10.962108+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.115386009s of 10.143128395s, submitted: 8
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 790841 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 933888 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:11.962258+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 119 sent 117 num 2 unsent 2 sending 2
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:30:41.509792+0000 osd.1 (osd.1) 118 : cluster [DBG] 10.1a scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:30:41.523906+0000 osd.1 (osd.1) 119 : cluster [DBG] 10.1a scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 119) v1
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:30:41.509792+0000 osd.1 (osd.1) 118 : cluster [DBG] 10.1a scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:30:41.523906+0000 osd.1 (osd.1) 119 : cluster [DBG] 10.1a scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 925696 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:12.962446+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 10.19 scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 10.19 scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75571200 unmapped: 917504 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:13.962584+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 121 sent 119 num 2 unsent 2 sending 2
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:30:43.505893+0000 osd.1 (osd.1) 120 : cluster [DBG] 10.19 scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:30:43.519802+0000 osd.1 (osd.1) 121 : cluster [DBG] 10.19 scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 121) v1
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:30:43.505893+0000 osd.1 (osd.1) 120 : cluster [DBG] 10.19 scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:30:43.519802+0000 osd.1 (osd.1) 121 : cluster [DBG] 10.19 scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75571200 unmapped: 917504 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:14.962847+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75587584 unmapped: 901120 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:15.962988+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 123 sent 121 num 2 unsent 2 sending 2
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:30:45.449498+0000 osd.1 (osd.1) 122 : cluster [DBG] 5.16 scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:30:45.463546+0000 osd.1 (osd.1) 123 : cluster [DBG] 5.16 scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 123) v1
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:30:45.449498+0000 osd.1 (osd.1) 122 : cluster [DBG] 5.16 scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:30:45.463546+0000 osd.1 (osd.1) 123 : cluster [DBG] 5.16 scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 794285 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75587584 unmapped: 901120 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:16.963176+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 125 sent 123 num 2 unsent 2 sending 2
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:30:46.409154+0000 osd.1 (osd.1) 124 : cluster [DBG] 5.9 scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:30:46.423234+0000 osd.1 (osd.1) 125 : cluster [DBG] 5.9 scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 125) v1
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:30:46.409154+0000 osd.1 (osd.1) 124 : cluster [DBG] 5.9 scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:30:46.423234+0000 osd.1 (osd.1) 125 : cluster [DBG] 5.9 scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75595776 unmapped: 892928 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:17.963403+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75595776 unmapped: 892928 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:18.963565+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75603968 unmapped: 884736 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:19.963740+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75603968 unmapped: 884736 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:20.963871+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 794285 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75603968 unmapped: 884736 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:21.964043+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 5.f scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.827279091s of 10.856827736s, submitted: 8
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 5.f scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75612160 unmapped: 876544 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:22.964181+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 127 sent 125 num 2 unsent 2 sending 2
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:30:52.366819+0000 osd.1 (osd.1) 126 : cluster [DBG] 5.f scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:30:52.380757+0000 osd.1 (osd.1) 127 : cluster [DBG] 5.f scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 127) v1
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:30:52.366819+0000 osd.1 (osd.1) 126 : cluster [DBG] 5.f scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:30:52.380757+0000 osd.1 (osd.1) 127 : cluster [DBG] 5.f scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75612160 unmapped: 876544 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:23.964394+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75620352 unmapped: 868352 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:24.964507+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 10.6 scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 10.6 scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:25.964599+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 129 sent 127 num 2 unsent 2 sending 2
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:30:55.333878+0000 osd.1 (osd.1) 128 : cluster [DBG] 10.6 scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:30:55.348066+0000 osd.1 (osd.1) 129 : cluster [DBG] 10.6 scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75628544 unmapped: 860160 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 129) v1
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:30:55.333878+0000 osd.1 (osd.1) 128 : cluster [DBG] 10.6 scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:30:55.348066+0000 osd.1 (osd.1) 129 : cluster [DBG] 10.6 scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796580 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:26.964790+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75636736 unmapped: 851968 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:27.964914+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75636736 unmapped: 851968 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:28.965061+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75636736 unmapped: 851968 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 2.d deep-scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 2.d deep-scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:29.965236+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 131 sent 129 num 2 unsent 2 sending 2
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:30:59.439670+0000 osd.1 (osd.1) 130 : cluster [DBG] 2.d deep-scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:30:59.453928+0000 osd.1 (osd.1) 131 : cluster [DBG] 2.d deep-scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75644928 unmapped: 843776 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 10.b scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 10.b scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 131) v1
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:30:59.439670+0000 osd.1 (osd.1) 130 : cluster [DBG] 2.d deep-scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:30:59.453928+0000 osd.1 (osd.1) 131 : cluster [DBG] 2.d deep-scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:30.965420+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 133 sent 131 num 2 unsent 2 sending 2
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:00.487304+0000 osd.1 (osd.1) 132 : cluster [DBG] 10.b scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:00.500763+0000 osd.1 (osd.1) 133 : cluster [DBG] 10.b scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75661312 unmapped: 827392 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 133) v1
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:00.487304+0000 osd.1 (osd.1) 132 : cluster [DBG] 10.b scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:00.500763+0000 osd.1 (osd.1) 133 : cluster [DBG] 10.b scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 798875 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:31.965796+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75661312 unmapped: 827392 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:32.965923+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75669504 unmapped: 819200 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:33.966057+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75669504 unmapped: 819200 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:34.966220+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75677696 unmapped: 811008 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:35.966379+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75677696 unmapped: 811008 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 798875 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:36.966536+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75685888 unmapped: 802816 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.976668358s of 15.009184837s, submitted: 8
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:37.966679+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 135 sent 133 num 2 unsent 2 sending 2
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:07.375810+0000 osd.1 (osd.1) 134 : cluster [DBG] 10.2 scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:07.389929+0000 osd.1 (osd.1) 135 : cluster [DBG] 10.2 scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 794624 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 135) v1
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:07.375810+0000 osd.1 (osd.1) 134 : cluster [DBG] 10.2 scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:07.389929+0000 osd.1 (osd.1) 135 : cluster [DBG] 10.2 scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:38.966893+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 794624 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:39.967098+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 137 sent 135 num 2 unsent 2 sending 2
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:09.358230+0000 osd.1 (osd.1) 136 : cluster [DBG] 2.5 scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:09.371651+0000 osd.1 (osd.1) 137 : cluster [DBG] 2.5 scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75702272 unmapped: 786432 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 137) v1
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:09.358230+0000 osd.1 (osd.1) 136 : cluster [DBG] 2.5 scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:09.371651+0000 osd.1 (osd.1) 137 : cluster [DBG] 2.5 scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 2.a scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 2.a scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:40.967413+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 139 sent 137 num 2 unsent 2 sending 2
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:10.371094+0000 osd.1 (osd.1) 138 : cluster [DBG] 2.a scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:10.385167+0000 osd.1 (osd.1) 139 : cluster [DBG] 2.a scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75702272 unmapped: 786432 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 139) v1
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:10.371094+0000 osd.1 (osd.1) 138 : cluster [DBG] 2.a scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:10.385167+0000 osd.1 (osd.1) 139 : cluster [DBG] 2.a scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 5.c scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 5.c scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 803464 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:41.967646+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 141 sent 139 num 2 unsent 2 sending 2
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:11.350620+0000 osd.1 (osd.1) 140 : cluster [DBG] 5.c scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:11.364688+0000 osd.1 (osd.1) 141 : cluster [DBG] 5.c scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75710464 unmapped: 778240 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 141) v1
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:11.350620+0000 osd.1 (osd.1) 140 : cluster [DBG] 5.c scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:11.364688+0000 osd.1 (osd.1) 141 : cluster [DBG] 5.c scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:42.967908+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75710464 unmapped: 778240 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:43.968232+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:44.968528+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:45.968828+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 143 sent 141 num 2 unsent 2 sending 2
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:15.408884+0000 osd.1 (osd.1) 142 : cluster [DBG] 2.9 scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:15.423000+0000 osd.1 (osd.1) 143 : cluster [DBG] 2.9 scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75726848 unmapped: 761856 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 143) v1
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:15.408884+0000 osd.1 (osd.1) 142 : cluster [DBG] 2.9 scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:15.423000+0000 osd.1 (osd.1) 143 : cluster [DBG] 2.9 scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 805759 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:46.969306+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 145 sent 143 num 2 unsent 2 sending 2
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:16.419688+0000 osd.1 (osd.1) 144 : cluster [DBG] 4.14 scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:16.433816+0000 osd.1 (osd.1) 145 : cluster [DBG] 4.14 scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76775424 unmapped: 761856 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 145) v1
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:16.419688+0000 osd.1 (osd.1) 144 : cluster [DBG] 4.14 scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:16.433816+0000 osd.1 (osd.1) 145 : cluster [DBG] 4.14 scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:47.969610+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76783616 unmapped: 753664 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.012090683s of 11.058633804s, submitted: 12
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:48.969826+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 147 sent 145 num 2 unsent 2 sending 2
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:18.434750+0000 osd.1 (osd.1) 146 : cluster [DBG] 4.12 scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:18.448688+0000 osd.1 (osd.1) 147 : cluster [DBG] 4.12 scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 1802240 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 147) v1
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:18.434750+0000 osd.1 (osd.1) 146 : cluster [DBG] 4.12 scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:18.448688+0000 osd.1 (osd.1) 147 : cluster [DBG] 4.12 scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 4.f scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 4.f scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:49.970114+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 149 sent 147 num 2 unsent 2 sending 2
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:19.454433+0000 osd.1 (osd.1) 148 : cluster [DBG] 4.f scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:19.468650+0000 osd.1 (osd.1) 149 : cluster [DBG] 4.f scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 1794048 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 149) v1
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:19.454433+0000 osd.1 (osd.1) 148 : cluster [DBG] 4.f scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:19.468650+0000 osd.1 (osd.1) 149 : cluster [DBG] 4.f scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:50.970382+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75751424 unmapped: 1785856 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 808054 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:51.970575+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 1777664 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:52.970700+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 1777664 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:53.970866+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75767808 unmapped: 1769472 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:54.971057+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75767808 unmapped: 1769472 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:55.971216+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 151 sent 149 num 2 unsent 2 sending 2
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:25.598089+0000 osd.1 (osd.1) 150 : cluster [DBG] 4.10 scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:25.612133+0000 osd.1 (osd.1) 151 : cluster [DBG] 4.10 scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75767808 unmapped: 1769472 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 151) v1
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:25.598089+0000 osd.1 (osd.1) 150 : cluster [DBG] 4.10 scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:25.612133+0000 osd.1 (osd.1) 151 : cluster [DBG] 4.10 scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 809202 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:56.971423+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75767808 unmapped: 1769472 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:57.971562+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75767808 unmapped: 1769472 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:58.971723+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75776000 unmapped: 1761280 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 4.d scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.112494469s of 11.135998726s, submitted: 6
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 4.d scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:59.972247+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 153 sent 151 num 2 unsent 2 sending 2
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:29.570895+0000 osd.1 (osd.1) 152 : cluster [DBG] 4.d scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:29.584774+0000 osd.1 (osd.1) 153 : cluster [DBG] 4.d scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75776000 unmapped: 1761280 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 153) v1
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:29.570895+0000 osd.1 (osd.1) 152 : cluster [DBG] 4.d scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:29.584774+0000 osd.1 (osd.1) 153 : cluster [DBG] 4.d scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:00.972405+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 155 sent 153 num 2 unsent 2 sending 2
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:30.576218+0000 osd.1 (osd.1) 154 : cluster [DBG] 6.1 scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:30.590362+0000 osd.1 (osd.1) 155 : cluster [DBG] 6.1 scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75776000 unmapped: 1761280 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 155) v1
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:30.576218+0000 osd.1 (osd.1) 154 : cluster [DBG] 6.1 scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:30.590362+0000 osd.1 (osd.1) 155 : cluster [DBG] 6.1 scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812643 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:01.972945+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 157 sent 155 num 2 unsent 2 sending 2
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:31.622934+0000 osd.1 (osd.1) 156 : cluster [DBG] 4.9 scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:31.637036+0000 osd.1 (osd.1) 157 : cluster [DBG] 4.9 scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75784192 unmapped: 1753088 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 157) v1
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:31.622934+0000 osd.1 (osd.1) 156 : cluster [DBG] 4.9 scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:31.637036+0000 osd.1 (osd.1) 157 : cluster [DBG] 4.9 scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:02.973560+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 159 sent 157 num 2 unsent 2 sending 2
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:32.631401+0000 osd.1 (osd.1) 158 : cluster [DBG] 4.5 scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:32.645470+0000 osd.1 (osd.1) 159 : cluster [DBG] 4.5 scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75784192 unmapped: 1753088 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 159) v1
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:32.631401+0000 osd.1 (osd.1) 158 : cluster [DBG] 4.5 scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:32.645470+0000 osd.1 (osd.1) 159 : cluster [DBG] 4.5 scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:03.973801+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 161 sent 159 num 2 unsent 2 sending 2
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:33.589326+0000 osd.1 (osd.1) 160 : cluster [DBG] 4.7 scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:33.603410+0000 osd.1 (osd.1) 161 : cluster [DBG] 4.7 scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 1728512 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 161) v1
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:33.589326+0000 osd.1 (osd.1) 160 : cluster [DBG] 4.7 scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:33.603410+0000 osd.1 (osd.1) 161 : cluster [DBG] 4.7 scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:04.974045+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 1728512 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:05.974198+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 1720320 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 814937 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:06.974345+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 1720320 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:07.974489+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 163 sent 161 num 2 unsent 2 sending 2
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:37.527604+0000 osd.1 (osd.1) 162 : cluster [DBG] 4.8 scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:37.541727+0000 osd.1 (osd.1) 163 : cluster [DBG] 4.8 scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 1712128 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 163) v1
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:37.527604+0000 osd.1 (osd.1) 162 : cluster [DBG] 4.8 scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:37.541727+0000 osd.1 (osd.1) 163 : cluster [DBG] 4.8 scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 10.f scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 10.f scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:08.974853+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 165 sent 163 num 2 unsent 2 sending 2
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:38.542673+0000 osd.1 (osd.1) 164 : cluster [DBG] 10.f scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:38.556753+0000 osd.1 (osd.1) 165 : cluster [DBG] 10.f scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 1712128 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 165) v1
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:38.542673+0000 osd.1 (osd.1) 164 : cluster [DBG] 10.f scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:38.556753+0000 osd.1 (osd.1) 165 : cluster [DBG] 10.f scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:09.975025+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 1703936 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.929329872s of 10.991934776s, submitted: 14
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:10.975152+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 167 sent 165 num 2 unsent 2 sending 2
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:40.562574+0000 osd.1 (osd.1) 166 : cluster [DBG] 2.7 scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:40.576673+0000 osd.1 (osd.1) 167 : cluster [DBG] 2.7 scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 1703936 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 167) v1
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:40.562574+0000 osd.1 (osd.1) 166 : cluster [DBG] 2.7 scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:40.576673+0000 osd.1 (osd.1) 167 : cluster [DBG] 2.7 scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:11.975418+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 169 sent 167 num 2 unsent 2 sending 2
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:41.521253+0000 osd.1 (osd.1) 168 : cluster [DBG] 2.6 scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:41.535436+0000 osd.1 (osd.1) 169 : cluster [DBG] 2.6 scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 819526 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 1695744 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 169) v1
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:41.521253+0000 osd.1 (osd.1) 168 : cluster [DBG] 2.6 scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:41.535436+0000 osd.1 (osd.1) 169 : cluster [DBG] 2.6 scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:12.975690+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 1695744 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:13.975815+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 1695744 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:14.975947+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 1695744 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:15.976123+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 1679360 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:16.976274+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 819526 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 1679360 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:17.976383+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 171 sent 169 num 2 unsent 2 sending 2
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:47.518531+0000 osd.1 (osd.1) 170 : cluster [DBG] 5.1 scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:47.532598+0000 osd.1 (osd.1) 171 : cluster [DBG] 5.1 scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 171) v1
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:47.518531+0000 osd.1 (osd.1) 170 : cluster [DBG] 5.1 scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:47.532598+0000 osd.1 (osd.1) 171 : cluster [DBG] 5.1 scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:18.976592+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 10.11 scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 10.11 scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:19.976755+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 173 sent 171 num 2 unsent 2 sending 2
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:49.516313+0000 osd.1 (osd.1) 172 : cluster [DBG] 10.11 scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:49.530387+0000 osd.1 (osd.1) 173 : cluster [DBG] 10.11 scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 1662976 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 173) v1
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:49.516313+0000 osd.1 (osd.1) 172 : cluster [DBG] 10.11 scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:49.530387+0000 osd.1 (osd.1) 173 : cluster [DBG] 10.11 scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 10.10 scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 10.10 scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:20.976925+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 175 sent 173 num 2 unsent 2 sending 2
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:50.501402+0000 osd.1 (osd.1) 174 : cluster [DBG] 10.10 scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:50.515412+0000 osd.1 (osd.1) 175 : cluster [DBG] 10.10 scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1654784 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 175) v1
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:50.501402+0000 osd.1 (osd.1) 174 : cluster [DBG] 10.10 scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:50.515412+0000 osd.1 (osd.1) 175 : cluster [DBG] 10.10 scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.907430649s of 10.942457199s, submitted: 10
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:21.977152+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 177 sent 175 num 2 unsent 2 sending 2
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:51.505033+0000 osd.1 (osd.1) 176 : cluster [DBG] 2.1b scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:51.519096+0000 osd.1 (osd.1) 177 : cluster [DBG] 2.1b scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 824119 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 1646592 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 177) v1
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:51.505033+0000 osd.1 (osd.1) 176 : cluster [DBG] 2.1b scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:51.519096+0000 osd.1 (osd.1) 177 : cluster [DBG] 2.1b scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:22.977422+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 1646592 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:23.977617+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 1646592 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:24.977765+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1638400 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:25.977923+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1638400 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:26.978070+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 824119 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 1630208 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 10.12 deep-scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 10.12 deep-scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:27.978218+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 179 sent 177 num 2 unsent 2 sending 2
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:57.544059+0000 osd.1 (osd.1) 178 : cluster [DBG] 10.12 deep-scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:57.558158+0000 osd.1 (osd.1) 179 : cluster [DBG] 10.12 deep-scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75915264 unmapped: 1622016 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 179) v1
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:57.544059+0000 osd.1 (osd.1) 178 : cluster [DBG] 10.12 deep-scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:57.558158+0000 osd.1 (osd.1) 179 : cluster [DBG] 10.12 deep-scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:28.978395+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 181 sent 179 num 2 unsent 2 sending 2
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:58.550164+0000 osd.1 (osd.1) 180 : cluster [DBG] 5.1d scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:31:58.564183+0000 osd.1 (osd.1) 181 : cluster [DBG] 5.1d scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75923456 unmapped: 1613824 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 181) v1
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:58.550164+0000 osd.1 (osd.1) 180 : cluster [DBG] 5.1d scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:31:58.564183+0000 osd.1 (osd.1) 181 : cluster [DBG] 5.1d scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:29.978570+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75923456 unmapped: 1613824 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:30.978719+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 1605632 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:31.978849+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826416 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 1605632 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.966858864s of 10.997442245s, submitted: 6
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:32.978970+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 183 sent 181 num 2 unsent 2 sending 2
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:32:02.502532+0000 osd.1 (osd.1) 182 : cluster [DBG] 5.1a scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:32:02.516728+0000 osd.1 (osd.1) 183 : cluster [DBG] 5.1a scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 1605632 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 183) v1
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:32:02.502532+0000 osd.1 (osd.1) 182 : cluster [DBG] 5.1a scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:32:02.516728+0000 osd.1 (osd.1) 183 : cluster [DBG] 5.1a scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 10.13 scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 10.13 scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:33.979194+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 185 sent 183 num 2 unsent 2 sending 2
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:32:03.541654+0000 osd.1 (osd.1) 184 : cluster [DBG] 10.13 scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:32:03.555791+0000 osd.1 (osd.1) 185 : cluster [DBG] 10.13 scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 1597440 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 185) v1
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:32:03.541654+0000 osd.1 (osd.1) 184 : cluster [DBG] 10.13 scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:32:03.555791+0000 osd.1 (osd.1) 185 : cluster [DBG] 10.13 scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 10.14 scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 10.14 scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:34.979431+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 187 sent 185 num 2 unsent 2 sending 2
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:32:04.543710+0000 osd.1 (osd.1) 186 : cluster [DBG] 10.14 scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:32:04.561321+0000 osd.1 (osd.1) 187 : cluster [DBG] 10.14 scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 1597440 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 187) v1
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:32:04.543710+0000 osd.1 (osd.1) 186 : cluster [DBG] 10.14 scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:32:04.561321+0000 osd.1 (osd.1) 187 : cluster [DBG] 10.14 scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 5.18 deep-scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 5.18 deep-scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:35.979690+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 189 sent 187 num 2 unsent 2 sending 2
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:32:05.512407+0000 osd.1 (osd.1) 188 : cluster [DBG] 5.18 deep-scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:32:05.526487+0000 osd.1 (osd.1) 189 : cluster [DBG] 5.18 deep-scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75948032 unmapped: 1589248 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 189) v1
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:32:05.512407+0000 osd.1 (osd.1) 188 : cluster [DBG] 5.18 deep-scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:32:05.526487+0000 osd.1 (osd.1) 189 : cluster [DBG] 5.18 deep-scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:36.979972+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831010 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75948032 unmapped: 1589248 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:37.980153+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 191 sent 189 num 2 unsent 2 sending 2
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:32:07.531605+0000 osd.1 (osd.1) 190 : cluster [DBG] 5.19 scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:32:07.545940+0000 osd.1 (osd.1) 191 : cluster [DBG] 5.19 scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1581056 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 191) v1
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:32:07.531605+0000 osd.1 (osd.1) 190 : cluster [DBG] 5.19 scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:32:07.545940+0000 osd.1 (osd.1) 191 : cluster [DBG] 5.19 scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:38.980380+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 1572864 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:39.980560+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 1572864 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:40.980688+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 1564672 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:41.980833+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 832158 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 1564672 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.024751663s of 10.058527946s, submitted: 10
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:42.981002+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 193 sent 191 num 2 unsent 2 sending 2
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:32:12.561299+0000 osd.1 (osd.1) 192 : cluster [DBG] 6.2 scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:32:12.575221+0000 osd.1 (osd.1) 193 : cluster [DBG] 6.2 scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 1556480 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 193) v1
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:32:12.561299+0000 osd.1 (osd.1) 192 : cluster [DBG] 6.2 scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:32:12.575221+0000 osd.1 (osd.1) 193 : cluster [DBG] 6.2 scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:43.981217+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75915264 unmapped: 1622016 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:44.981447+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75915264 unmapped: 1622016 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:45.981613+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75915264 unmapped: 1622016 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:46.981755+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 833305 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75923456 unmapped: 1613824 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:47.981881+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 195 sent 193 num 2 unsent 2 sending 2
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:32:17.550111+0000 osd.1 (osd.1) 194 : cluster [DBG] 6.6 scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:32:17.567715+0000 osd.1 (osd.1) 195 : cluster [DBG] 6.6 scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 195) v1
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:32:17.550111+0000 osd.1 (osd.1) 194 : cluster [DBG] 6.6 scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:32:17.567715+0000 osd.1 (osd.1) 195 : cluster [DBG] 6.6 scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 1605632 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:48.982157+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 1605632 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 6.e scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 6.e scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:49.982410+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 197 sent 195 num 2 unsent 2 sending 2
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:32:19.566702+0000 osd.1 (osd.1) 196 : cluster [DBG] 6.e scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:32:19.584322+0000 osd.1 (osd.1) 197 : cluster [DBG] 6.e scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 197) v1
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:32:19.566702+0000 osd.1 (osd.1) 196 : cluster [DBG] 6.e scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:32:19.584322+0000 osd.1 (osd.1) 197 : cluster [DBG] 6.e scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 1597440 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:50.982685+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 1597440 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 6.c scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 6.c scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:51.982918+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 199 sent 197 num 2 unsent 2 sending 2
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:32:21.494394+0000 osd.1 (osd.1) 198 : cluster [DBG] 6.c scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:32:21.512054+0000 osd.1 (osd.1) 199 : cluster [DBG] 6.c scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 199) v1
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:32:21.494394+0000 osd.1 (osd.1) 198 : cluster [DBG] 6.c scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:32:21.512054+0000 osd.1 (osd.1) 199 : cluster [DBG] 6.c scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836746 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1581056 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:52.983240+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 201 sent 199 num 2 unsent 2 sending 2
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:32:22.506995+0000 osd.1 (osd.1) 200 : cluster [DBG] 6.4 scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:32:22.535023+0000 osd.1 (osd.1) 201 : cluster [DBG] 6.4 scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 201) v1
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:32:22.506995+0000 osd.1 (osd.1) 200 : cluster [DBG] 6.4 scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:32:22.535023+0000 osd.1 (osd.1) 201 : cluster [DBG] 6.4 scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1581056 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:53.983461+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 1572864 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:54.983637+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 1572864 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 6.b scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.874675751s of 12.912480354s, submitted: 10
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 6.b scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:55.983780+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 203 sent 201 num 2 unsent 2 sending 2
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:32:25.473662+0000 osd.1 (osd.1) 202 : cluster [DBG] 6.b scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:32:25.491313+0000 osd.1 (osd.1) 203 : cluster [DBG] 6.b scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 203) v1
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:32:25.473662+0000 osd.1 (osd.1) 202 : cluster [DBG] 6.b scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:32:25.491313+0000 osd.1 (osd.1) 203 : cluster [DBG] 6.b scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 1564672 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 6.d scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 6.d scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:56.983998+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 205 sent 203 num 2 unsent 2 sending 2
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:32:26.438648+0000 osd.1 (osd.1) 204 : cluster [DBG] 6.d scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:32:26.459842+0000 osd.1 (osd.1) 205 : cluster [DBG] 6.d scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 840187 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 205) v1
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:32:26.438648+0000 osd.1 (osd.1) 204 : cluster [DBG] 6.d scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:32:26.459842+0000 osd.1 (osd.1) 205 : cluster [DBG] 6.d scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 1564672 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:57.984249+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 207 sent 205 num 2 unsent 2 sending 2
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:32:27.463674+0000 osd.1 (osd.1) 206 : cluster [DBG] 9.15 scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:32:27.495388+0000 osd.1 (osd.1) 207 : cluster [DBG] 9.15 scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 207) v1
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:32:27.463674+0000 osd.1 (osd.1) 206 : cluster [DBG] 9.15 scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:32:27.495388+0000 osd.1 (osd.1) 207 : cluster [DBG] 9.15 scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 1556480 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 9.1f scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_channel(cluster) log [DBG] : 9.1f scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:58.984442+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  log_queue is 2 last_log 209 sent 207 num 2 unsent 2 sending 2
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:32:28.510218+0000 osd.1 (osd.1) 208 : cluster [DBG] 9.1f scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  will send 2025-10-11T04:32:28.545517+0000 osd.1 (osd.1) 209 : cluster [DBG] 9.1f scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client handle_log_ack log(last 209) v1
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:32:28.510218+0000 osd.1 (osd.1) 208 : cluster [DBG] 9.1f scrub starts
Oct 11 05:00:54 compute-0 ceph-osd[88467]: log_client  logged 2025-10-11T04:32:28.545517+0000 osd.1 (osd.1) 209 : cluster [DBG] 9.1f scrub ok
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 1556480 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:59.984681+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1548288 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:00.984836+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1548288 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:01.984964+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 1540096 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:02.985141+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 1540096 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:03.985294+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1531904 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:04.985488+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1531904 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:05.985676+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 1523712 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:06.986128+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1531904 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:07.986250+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 1523712 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:08.986402+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 1523712 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:09.986537+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1515520 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:10.986657+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1515520 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:11.986799+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1515520 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:12.986938+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76029952 unmapped: 1507328 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:13.987053+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76029952 unmapped: 1507328 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:14.987231+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76038144 unmapped: 1499136 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:15.987403+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76038144 unmapped: 1499136 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:16.987585+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76038144 unmapped: 1499136 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:17.987773+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76046336 unmapped: 1490944 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:18.987897+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76046336 unmapped: 1490944 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:19.988100+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76054528 unmapped: 1482752 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:20.988222+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76054528 unmapped: 1482752 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:21.988417+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76054528 unmapped: 1482752 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:22.988554+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76062720 unmapped: 1474560 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:23.988671+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76062720 unmapped: 1474560 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:24.988825+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76070912 unmapped: 1466368 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:25.989000+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76070912 unmapped: 1466368 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:26.989120+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1458176 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:27.989279+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1458176 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:28.989446+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1458176 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:29.989628+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76087296 unmapped: 1449984 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:30.989790+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76087296 unmapped: 1449984 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:31.989948+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76095488 unmapped: 1441792 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:32.990179+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76095488 unmapped: 1441792 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:33.990398+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76095488 unmapped: 1441792 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:34.990689+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76103680 unmapped: 1433600 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:35.990885+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76103680 unmapped: 1433600 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:36.991026+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76111872 unmapped: 1425408 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:37.991225+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76111872 unmapped: 1425408 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:38.991413+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76120064 unmapped: 1417216 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:39.991621+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76120064 unmapped: 1417216 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:40.991771+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76128256 unmapped: 1409024 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:41.991956+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76128256 unmapped: 1409024 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:42.992165+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76128256 unmapped: 1409024 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:43.992306+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1400832 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:44.992531+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1400832 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:45.992686+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1392640 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:46.992838+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1392640 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:47.993017+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76152832 unmapped: 1384448 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:48.993192+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76152832 unmapped: 1384448 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:49.993439+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 1376256 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:50.993605+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 1376256 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:51.993796+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 1376256 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:52.993959+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1368064 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:53.994085+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1368064 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:54.994259+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 1359872 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:55.994418+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 1359872 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:56.994568+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1351680 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:57.994726+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 1343488 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:58.994849+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 1343488 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:59.995108+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1335296 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:00.995253+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1335296 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:01.995396+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1335296 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:02.995570+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 1327104 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:03.995779+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 1327104 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:04.995944+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 1318912 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:05.996133+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 1318912 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:06.996293+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 1318912 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:07.996457+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 1310720 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:08.996644+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 1310720 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:09.996849+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 1302528 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:10.997002+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 1302528 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:11.997158+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 1294336 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:12.997285+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 1294336 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:13.997452+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 1294336 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:14.997586+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1286144 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:15.997737+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1286144 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:16.997904+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1286144 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:17.998077+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 1277952 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:18.998260+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 1277952 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:19.998448+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 1269760 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:20.998567+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 1269760 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:21.998739+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76275712 unmapped: 1261568 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:22.998934+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76275712 unmapped: 1261568 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:23.999153+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76275712 unmapped: 1261568 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:24.999439+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 1253376 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:25.999707+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 1253376 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:26.999889+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76292096 unmapped: 1245184 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:28.000132+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76292096 unmapped: 1245184 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:29.000373+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 1236992 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:30.000521+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 1236992 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:31.000629+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76308480 unmapped: 1228800 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:32.000744+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76308480 unmapped: 1228800 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:33.000887+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76308480 unmapped: 1228800 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:34.001033+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76316672 unmapped: 1220608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:35.001214+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76316672 unmapped: 1220608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:36.001373+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 1212416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:37.001514+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 1212416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:38.001679+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76333056 unmapped: 1204224 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:39.001897+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76333056 unmapped: 1204224 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:40.002081+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76333056 unmapped: 1204224 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:41.002293+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 1196032 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:42.002376+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76349440 unmapped: 1187840 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:43.002570+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76349440 unmapped: 1187840 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:44.002729+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 1179648 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:45.002895+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 1179648 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:46.003042+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1171456 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:47.003182+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1171456 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:48.003363+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 1163264 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:49.003525+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 1163264 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:50.003727+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 1163264 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:51.003866+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76382208 unmapped: 1155072 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:52.004036+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76382208 unmapped: 1155072 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:53.004205+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76390400 unmapped: 1146880 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:54.004366+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76390400 unmapped: 1146880 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:55.004535+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 1138688 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:56.004720+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 1138688 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:57.004885+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 1130496 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:58.005070+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76414976 unmapped: 1122304 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:59.005230+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76414976 unmapped: 1122304 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:00.005438+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76423168 unmapped: 1114112 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:01.005598+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76423168 unmapped: 1114112 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:02.005768+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76423168 unmapped: 1114112 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:03.005959+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 1105920 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:04.006297+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 1105920 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:05.006635+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76439552 unmapped: 1097728 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:06.016326+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76439552 unmapped: 1097728 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:07.016663+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76447744 unmapped: 1089536 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:08.017010+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76447744 unmapped: 1089536 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:09.017443+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76455936 unmapped: 1081344 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:10.017728+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76455936 unmapped: 1081344 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:11.018000+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76455936 unmapped: 1081344 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:12.018261+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 1073152 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:13.018436+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 1073152 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:14.018641+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76472320 unmapped: 1064960 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:15.018799+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76472320 unmapped: 1064960 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:16.018971+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 1056768 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:17.019149+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 1056768 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:18.019286+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 1056768 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:19.019412+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 1048576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:20.019582+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 1048576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:21.019777+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 1048576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:22.019921+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 1040384 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:23.020076+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 1040384 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:24.020277+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 1032192 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:25.020455+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 1032192 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:26.020583+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 1032192 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:27.020723+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 1024000 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:28.020880+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 1024000 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:29.021013+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 1024000 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:30.021166+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 1015808 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:31.021289+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 1015808 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:32.021392+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76529664 unmapped: 1007616 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:33.021559+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76529664 unmapped: 1007616 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:34.021684+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 999424 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:35.021865+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 999424 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:36.021997+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76546048 unmapped: 991232 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:37.022150+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76546048 unmapped: 991232 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:38.022306+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76546048 unmapped: 991232 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:39.022436+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 983040 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:40.022626+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 983040 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:41.022790+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 974848 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:42.022938+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 974848 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:43.023079+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 974848 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:44.023238+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76570624 unmapped: 966656 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:45.023407+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76570624 unmapped: 966656 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:46.023557+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76578816 unmapped: 958464 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:47.023684+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76578816 unmapped: 958464 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:48.023828+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76578816 unmapped: 958464 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:49.023960+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76587008 unmapped: 950272 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:50.024143+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76587008 unmapped: 950272 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:51.024307+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 942080 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:52.024509+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 942080 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:53.024642+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 942080 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:54.024800+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76603392 unmapped: 933888 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:55.024931+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76603392 unmapped: 933888 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:56.025044+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 917504 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:57.025161+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 917504 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:58.025284+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76627968 unmapped: 909312 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:59.025469+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76627968 unmapped: 909312 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:00.025672+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76627968 unmapped: 909312 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:01.025830+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 901120 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:02.026001+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 901120 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:03.026172+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76644352 unmapped: 892928 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:04.026288+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76644352 unmapped: 892928 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:05.026451+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 884736 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:06.026572+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76644352 unmapped: 892928 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:07.026709+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76644352 unmapped: 892928 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:08.026834+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 884736 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:09.026957+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 884736 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:10.027148+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76660736 unmapped: 876544 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:11.027279+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76660736 unmapped: 876544 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:12.027482+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 868352 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:13.027639+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 868352 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:14.027785+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 868352 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:15.027931+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76677120 unmapped: 860160 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:16.028090+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76677120 unmapped: 860160 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:17.028207+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 851968 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:18.028320+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 851968 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:19.028467+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 851968 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:20.028565+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76693504 unmapped: 843776 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:21.028679+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76693504 unmapped: 843776 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:22.028855+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76701696 unmapped: 835584 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:23.028999+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76701696 unmapped: 835584 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:24.029185+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76709888 unmapped: 827392 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:25.029419+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76709888 unmapped: 827392 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:26.029617+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76709888 unmapped: 827392 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:27.029738+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76718080 unmapped: 819200 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:28.029937+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76718080 unmapped: 819200 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:29.030060+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76726272 unmapped: 811008 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:30.030666+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76726272 unmapped: 811008 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:31.030851+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76734464 unmapped: 802816 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:32.030990+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76734464 unmapped: 802816 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:33.031133+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76734464 unmapped: 802816 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:34.031304+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 794624 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:35.031388+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 794624 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:36.031538+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 786432 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:37.031681+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 786432 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:38.031826+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 786432 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:39.032011+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76759040 unmapped: 778240 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:40.032197+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76759040 unmapped: 778240 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:41.032319+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76767232 unmapped: 770048 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:42.032443+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76767232 unmapped: 770048 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:43.032570+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76775424 unmapped: 761856 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:44.032738+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76775424 unmapped: 761856 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:45.032871+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76775424 unmapped: 761856 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:46.033022+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76783616 unmapped: 753664 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:47.033236+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76783616 unmapped: 753664 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:48.033425+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76791808 unmapped: 745472 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:49.033600+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76791808 unmapped: 745472 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:50.033830+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76791808 unmapped: 745472 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:51.034049+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76800000 unmapped: 737280 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:52.034287+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76800000 unmapped: 737280 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:53.034437+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76808192 unmapped: 729088 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:54.034617+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76808192 unmapped: 729088 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:55.034737+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76816384 unmapped: 720896 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:56.034899+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76816384 unmapped: 720896 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:57.035048+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76816384 unmapped: 720896 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:58.035230+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76824576 unmapped: 712704 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:59.047566+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76824576 unmapped: 712704 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:00.047735+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76832768 unmapped: 704512 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:01.047865+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76832768 unmapped: 704512 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:02.048065+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76840960 unmapped: 696320 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:03.048227+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76840960 unmapped: 696320 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:04.048426+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76840960 unmapped: 696320 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:05.048544+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76849152 unmapped: 688128 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:06.048791+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76849152 unmapped: 688128 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:07.048995+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76857344 unmapped: 679936 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:08.049158+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76857344 unmapped: 679936 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:09.049317+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:10.049492+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76865536 unmapped: 671744 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:11.049622+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76865536 unmapped: 671744 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:12.049822+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76873728 unmapped: 663552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:13.049968+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76881920 unmapped: 655360 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:14.050163+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76881920 unmapped: 655360 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:15.050322+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76890112 unmapped: 647168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:16.050501+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76890112 unmapped: 647168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:17.050643+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76898304 unmapped: 638976 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:18.050784+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76898304 unmapped: 638976 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:19.050995+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76906496 unmapped: 630784 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:20.051231+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76906496 unmapped: 630784 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:21.051404+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76906496 unmapped: 630784 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:22.051549+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76914688 unmapped: 622592 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:23.051690+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76914688 unmapped: 622592 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:24.051879+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76914688 unmapped: 622592 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:25.052050+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76922880 unmapped: 614400 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 6872 writes, 28K keys, 6872 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 6872 writes, 1211 syncs, 5.67 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 6872 writes, 28K keys, 6872 commit groups, 1.0 writes per commit group, ingest: 19.51 MB, 0.03 MB/s
                                           Interval WAL: 6872 writes, 1211 syncs, 5.67 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5644640971f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5644640971f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5644640971f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5644640971f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5644640971f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5644640971f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5644640971f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564464097090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564464097090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564464097090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5644640971f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5644640971f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:26.052212+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76980224 unmapped: 557056 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:27.052345+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76988416 unmapped: 548864 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:28.052460+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76988416 unmapped: 548864 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:29.052584+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76996608 unmapped: 540672 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:30.052706+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76996608 unmapped: 540672 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:31.052823+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 76996608 unmapped: 540672 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:32.052989+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77004800 unmapped: 532480 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:33.053129+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77004800 unmapped: 532480 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:34.053284+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77012992 unmapped: 524288 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:35.053437+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77012992 unmapped: 524288 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:36.053609+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77021184 unmapped: 516096 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:37.053735+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77021184 unmapped: 516096 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:38.053872+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77021184 unmapped: 516096 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:39.054029+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77029376 unmapped: 507904 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:40.054190+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77029376 unmapped: 507904 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:41.054302+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77037568 unmapped: 499712 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:42.054453+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77037568 unmapped: 499712 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:43.054558+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77045760 unmapped: 491520 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:44.054820+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77045760 unmapped: 491520 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:45.054967+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77045760 unmapped: 491520 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:46.055155+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 483328 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:47.055371+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 483328 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:48.055538+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 475136 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:49.055724+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 475136 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:50.055940+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77070336 unmapped: 466944 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:51.056099+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77070336 unmapped: 466944 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:52.056263+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77070336 unmapped: 466944 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:53.056388+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 458752 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:54.056538+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 458752 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:55.056696+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 458752 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:56.056904+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 450560 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:57.057073+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 450560 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:58.057212+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77094912 unmapped: 442368 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:59.057403+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77094912 unmapped: 442368 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:00.057577+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77103104 unmapped: 434176 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:01.057741+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77103104 unmapped: 434176 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:02.057871+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77103104 unmapped: 434176 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:03.058048+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77111296 unmapped: 425984 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:04.058216+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77111296 unmapped: 425984 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:05.058352+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77119488 unmapped: 417792 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:06.058521+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77119488 unmapped: 417792 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:07.058672+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77127680 unmapped: 409600 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:08.058813+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77127680 unmapped: 409600 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:09.058971+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77127680 unmapped: 409600 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:10.059177+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 401408 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:11.059321+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 401408 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:12.059496+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77144064 unmapped: 393216 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:13.059630+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77144064 unmapped: 393216 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:14.059779+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77152256 unmapped: 385024 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:15.059939+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77152256 unmapped: 385024 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:16.060193+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77160448 unmapped: 376832 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:17.060402+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77160448 unmapped: 376832 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:18.060696+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77160448 unmapped: 376832 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:19.060921+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77168640 unmapped: 368640 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:20.061203+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77168640 unmapped: 368640 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:21.061347+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77176832 unmapped: 360448 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:22.061520+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77176832 unmapped: 360448 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:23.061740+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77176832 unmapped: 360448 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:24.061902+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77185024 unmapped: 352256 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:25.062067+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77185024 unmapped: 352256 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:26.062213+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 330.620880127s of 330.648864746s, submitted: 8
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 475136 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [0,0,1])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:27.062389+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77111296 unmapped: 425984 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:28.062544+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 401408 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:29.062702+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 401408 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:30.063523+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 401408 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:31.063664+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 401408 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:32.063833+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 401408 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:33.064014+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 401408 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:34.064172+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 401408 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:35.064456+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 401408 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:36.064645+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 401408 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:37.064819+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 401408 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:38.064997+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77144064 unmapped: 393216 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:39.065190+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77144064 unmapped: 393216 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:40.065394+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77152256 unmapped: 385024 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:41.065604+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77152256 unmapped: 385024 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:42.065778+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77152256 unmapped: 385024 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:43.065926+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77160448 unmapped: 376832 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:44.068485+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77160448 unmapped: 376832 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:45.068625+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77168640 unmapped: 368640 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:46.068759+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77168640 unmapped: 368640 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:47.068893+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77176832 unmapped: 360448 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:48.069008+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77176832 unmapped: 360448 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:49.069195+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77185024 unmapped: 352256 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:50.069389+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77185024 unmapped: 352256 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:51.069591+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77185024 unmapped: 352256 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:52.069708+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77193216 unmapped: 344064 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:53.069868+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77193216 unmapped: 344064 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:54.070063+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77201408 unmapped: 335872 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:55.070237+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77201408 unmapped: 335872 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:56.070361+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77209600 unmapped: 327680 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:57.070495+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77209600 unmapped: 327680 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:58.070690+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77209600 unmapped: 327680 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:59.070840+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77217792 unmapped: 319488 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:00.071048+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77217792 unmapped: 319488 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:01.071217+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 311296 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:02.071432+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 311296 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:03.071590+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77234176 unmapped: 303104 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:04.071795+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77234176 unmapped: 303104 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:05.071990+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77234176 unmapped: 303104 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:06.072173+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 294912 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:07.072342+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 294912 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:08.072444+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 286720 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:09.072585+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 286720 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:10.072762+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 286720 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:11.072957+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77258752 unmapped: 278528 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:12.073127+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77258752 unmapped: 278528 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:13.073275+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77266944 unmapped: 270336 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:14.073424+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77266944 unmapped: 270336 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:15.073617+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77266944 unmapped: 270336 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:16.073862+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 262144 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:17.074029+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 262144 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:18.074188+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77283328 unmapped: 253952 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:19.074391+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77283328 unmapped: 253952 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:20.074614+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77283328 unmapped: 253952 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:21.074737+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77291520 unmapped: 245760 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:22.074873+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77291520 unmapped: 245760 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:23.075016+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77299712 unmapped: 237568 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:24.075170+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77299712 unmapped: 237568 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:25.075474+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 229376 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:26.075642+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 229376 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:27.075800+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 229376 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:28.075939+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 229376 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:29.076056+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 229376 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:30.076242+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 229376 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:31.076397+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 229376 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:32.076541+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 229376 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:33.076689+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 229376 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:34.076826+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 229376 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:35.076954+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 229376 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:36.077072+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 229376 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:37.077206+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 229376 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:38.077373+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 229376 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:39.077529+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 229376 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:40.077684+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 229376 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:41.077810+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77316096 unmapped: 221184 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:42.077975+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 212992 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:43.078125+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 212992 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:44.078239+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 212992 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:45.078514+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 212992 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:46.098480+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 212992 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:47.098728+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 212992 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:48.098906+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 212992 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:49.099084+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 212992 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:50.099366+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 212992 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:51.099616+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 212992 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:52.099888+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 212992 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:53.100138+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 212992 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:54.100317+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 212992 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:55.100484+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 212992 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:56.100616+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 212992 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:57.100746+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 212992 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:58.100882+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 212992 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:59.101019+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 212992 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:00.101168+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 212992 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:01.101322+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 212992 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:02.101488+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 212992 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:03.101667+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 212992 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:04.101855+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 212992 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:05.102061+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 212992 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:06.102229+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 212992 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:07.102396+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 212992 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:08.102575+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 212992 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:09.102732+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 204800 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:10.102957+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 204800 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:11.103147+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 204800 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:12.103283+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 204800 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:13.103418+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 204800 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:14.103540+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 204800 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:15.103655+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 204800 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:16.103813+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 204800 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:17.104958+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 204800 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:18.105115+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 204800 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:19.105253+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 204800 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:20.105710+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 204800 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:21.106382+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 204800 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:22.106506+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 204800 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:23.106651+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 204800 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:24.106839+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 204800 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:25.107034+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 204800 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:26.107199+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 204800 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:27.107371+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 204800 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:28.107632+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 204800 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:29.107923+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 204800 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:30.108233+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 204800 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:31.108440+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 204800 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:32.108554+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 204800 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:33.108706+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 204800 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:34.108895+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 204800 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:35.109065+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 204800 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:36.109226+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 204800 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:37.109405+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 196608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:38.109556+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 196608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:39.109723+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 196608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:40.109922+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 196608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:41.110090+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 196608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:42.110299+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 196608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:43.110400+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 196608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:44.110540+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 196608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:45.110697+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 196608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:46.110845+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 196608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:47.110994+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 196608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:48.111141+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 196608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:49.111372+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 196608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:50.111667+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 196608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:51.111977+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 196608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:52.112178+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 196608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:53.112359+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 196608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:54.112467+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 196608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:55.112593+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 196608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:56.112813+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 196608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:57.112979+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 196608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:58.113131+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 196608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:59.113474+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 196608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:00.113755+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 196608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:01.113920+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 196608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:02.114218+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 196608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:03.114425+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 196608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:04.114582+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 196608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:05.114709+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 196608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:06.114876+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 196608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:07.115012+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 196608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:08.115135+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 196608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:09.115285+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 196608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:10.115445+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 196608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:11.115616+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 188416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:12.115757+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 188416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:13.115915+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 188416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:14.116025+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 188416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:15.116150+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 188416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:16.116302+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 188416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:17.116448+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 188416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:18.116589+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 188416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:19.116763+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 188416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:20.116959+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 188416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:21.117100+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 188416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:22.117243+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 188416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:23.117394+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:24.117545+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 188416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:25.117679+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 188416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:26.117793+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 188416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:27.117936+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 188416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:28.118083+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 188416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:29.118206+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 188416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:30.118414+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 188416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:31.118564+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 188416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:32.118695+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 180224 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:33.118829+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 180224 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:34.118915+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 180224 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:35.119017+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 180224 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:36.119137+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 180224 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:37.119276+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 188416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:38.119417+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 188416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:39.119566+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 188416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:40.119709+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 188416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:41.119852+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 188416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:42.119993+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 188416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:43.120190+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 188416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:44.120404+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 188416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:45.120561+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 188416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:46.120683+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 188416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:47.120872+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 188416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:48.121036+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 188416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:49.121214+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 188416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:50.121381+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 188416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:51.121513+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 188416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:52.121675+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 188416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:53.121821+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 188416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:54.122047+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 180224 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:55.122433+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 180224 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:56.122552+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 180224 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:57.122684+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 180224 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:58.122805+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 180224 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:59.122939+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 180224 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:00.123117+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 180224 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:01.123311+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 180224 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:02.123585+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 180224 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:03.123786+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 180224 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:04.123958+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 180224 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:05.124139+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 180224 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:06.125523+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 180224 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:07.125689+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 180224 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:08.125919+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 180224 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:09.126090+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 180224 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:10.126289+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 180224 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:11.126390+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 180224 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:12.126557+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 172032 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:13.126728+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 172032 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:14.126857+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 172032 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:15.126970+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 172032 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:16.127096+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 172032 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:17.127287+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 172032 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:18.127394+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 172032 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:19.127529+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 172032 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:20.127724+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 172032 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:21.127919+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 172032 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:22.128031+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 172032 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:23.128163+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 172032 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:24.128320+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 172032 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:25.128488+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 172032 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:26.128645+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 172032 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:27.128761+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 172032 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:28.128879+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 172032 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:29.128991+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 172032 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:30.129149+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 172032 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:31.129310+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 172032 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: mgrc ms_handle_reset ms_handle_reset con 0x564464eddc00
Oct 11 05:00:54 compute-0 ceph-osd[88467]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1439243141
Oct 11 05:00:54 compute-0 ceph-osd[88467]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1439243141,v1:192.168.122.100:6801/1439243141]
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: get_auth_request con 0x564467e90000 auth_method 0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: mgrc handle_mgr_configure stats_period=5
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 ms_handle_reset con 0x564466850800 session 0x564464e71860
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x5644662eb000
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:32.129462+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77586432 unmapped: 999424 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:33.129605+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77586432 unmapped: 999424 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:34.129750+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77586432 unmapped: 999424 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:35.129879+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77586432 unmapped: 999424 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:36.130012+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77586432 unmapped: 999424 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:37.130137+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77594624 unmapped: 991232 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:38.130238+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77594624 unmapped: 991232 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:39.130400+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77594624 unmapped: 991232 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:40.130604+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77594624 unmapped: 991232 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:41.130740+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77594624 unmapped: 991232 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:42.131071+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77602816 unmapped: 983040 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:43.131247+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77602816 unmapped: 983040 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:44.131394+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77602816 unmapped: 983040 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:45.131504+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77602816 unmapped: 983040 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:46.131647+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77611008 unmapped: 974848 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:47.131816+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77611008 unmapped: 974848 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:48.131899+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77611008 unmapped: 974848 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:49.132023+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77611008 unmapped: 974848 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:50.132172+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77611008 unmapped: 974848 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:51.132320+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77611008 unmapped: 974848 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:52.132494+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77611008 unmapped: 974848 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:53.132640+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77611008 unmapped: 974848 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:54.132757+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77611008 unmapped: 974848 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:55.132861+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77611008 unmapped: 974848 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:56.132988+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77611008 unmapped: 974848 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:57.133118+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77619200 unmapped: 966656 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:58.133254+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77619200 unmapped: 966656 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:59.133469+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77619200 unmapped: 966656 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:00.133770+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77619200 unmapped: 966656 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:01.133916+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77619200 unmapped: 966656 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:02.134163+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77619200 unmapped: 966656 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:03.134316+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77619200 unmapped: 966656 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:04.134520+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77619200 unmapped: 966656 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:05.135105+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77619200 unmapped: 966656 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:06.135914+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77619200 unmapped: 966656 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:07.136435+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77619200 unmapped: 966656 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:08.174655+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77619200 unmapped: 966656 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:09.174851+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 958464 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:10.175115+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 958464 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:11.175294+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 958464 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:12.175403+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 958464 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:13.175709+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 958464 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:14.175901+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 958464 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:15.176120+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 958464 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:16.176315+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 958464 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:17.176563+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 958464 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:18.176721+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 958464 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:19.176863+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 958464 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:20.177039+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 958464 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:21.177249+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 958464 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:22.177543+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 958464 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:23.177664+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 958464 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:24.177826+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 958464 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:25.178001+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 958464 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:26.178137+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 958464 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:27.178319+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 958464 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:28.178566+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 958464 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:29.178733+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 958464 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:30.178954+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 958464 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:31.179123+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 958464 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:32.179248+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 950272 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:33.179431+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 950272 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:34.179549+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 950272 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:35.179718+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 950272 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:36.179893+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 950272 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:37.180043+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 950272 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:38.180207+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 950272 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:39.180450+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 950272 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:40.180633+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 950272 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:41.180828+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 950272 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:42.180990+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 950272 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:43.181113+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 950272 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:44.181271+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 950272 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:45.181450+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 950272 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:46.181597+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 950272 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:47.181702+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 950272 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:48.181846+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 950272 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:49.182052+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 950272 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:50.182256+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 950272 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:51.182416+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 950272 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:52.182577+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 950272 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:53.182726+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 950272 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:54.183555+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 950272 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:55.183704+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 950272 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:56.183877+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 942080 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:57.184011+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 942080 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:58.184169+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 942080 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:59.184414+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 942080 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:00.184611+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 942080 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:01.184780+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 942080 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:02.184940+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 942080 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:03.185125+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 942080 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:04.185291+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 942080 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:05.185442+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 942080 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:06.185587+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 942080 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:07.185727+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 942080 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:08.185886+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 942080 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:09.186048+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 942080 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:10.186226+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 942080 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:11.186388+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 942080 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:12.186531+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 942080 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:13.186692+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 942080 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:14.186867+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 942080 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:15.187049+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 942080 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:16.187178+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 942080 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:17.187306+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 942080 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:18.187427+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 942080 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:19.187570+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 942080 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:20.187733+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 942080 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:21.187859+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 942080 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:22.188010+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:23.188163+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:24.188312+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:25.188470+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:26.188625+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:27.188777+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:28.188903+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:29.189058+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:30.189511+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:31.189636+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:32.189783+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:33.189927+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:34.190060+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:35.190186+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:36.190305+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:37.190494+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:38.190689+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:39.190847+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:40.191010+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:41.191165+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:42.191444+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:43.191595+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:44.191735+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:45.191867+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:46.192000+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:47.192166+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:48.192309+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:49.192518+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:50.192732+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:51.192858+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 942080 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:52.193009+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 942080 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:53.193178+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 942080 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:54.193459+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 942080 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:55.193660+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 942080 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:56.193813+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 942080 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:57.193939+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 942080 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:58.194081+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:59.194228+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:00.194394+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:01.194597+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:02.194828+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:03.194969+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:04.195108+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:05.195289+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:06.195436+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:07.195631+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:08.195755+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:09.195937+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:10.196193+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:11.196378+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77660160 unmapped: 925696 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:12.196546+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77660160 unmapped: 925696 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:13.196766+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77660160 unmapped: 925696 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:14.196962+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77660160 unmapped: 925696 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:15.197107+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77660160 unmapped: 925696 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:16.197264+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:17.197383+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:18.197515+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:19.197632+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:20.197831+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:21.197969+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:22.198125+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:23.198370+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:24.198520+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:25.198678+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:26.198846+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:27.199032+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:28.199170+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:29.199311+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:30.199532+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:31.200405+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:32.200562+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:33.200689+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:34.200814+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:35.200956+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:36.201131+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:37.201288+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:38.201448+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:39.201609+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:40.201767+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:41.201915+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:42.202071+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77660160 unmapped: 925696 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:43.202263+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77660160 unmapped: 925696 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:44.202428+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77660160 unmapped: 925696 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:45.202599+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77660160 unmapped: 925696 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:46.202751+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77660160 unmapped: 925696 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:47.202915+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77660160 unmapped: 925696 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:48.203070+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77660160 unmapped: 925696 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:49.203247+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77660160 unmapped: 925696 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:50.203479+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77660160 unmapped: 925696 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:51.203631+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77660160 unmapped: 925696 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:52.203751+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77660160 unmapped: 925696 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:53.203883+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77660160 unmapped: 925696 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:54.204019+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77660160 unmapped: 925696 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:55.204165+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77660160 unmapped: 925696 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:56.204303+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 917504 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:57.204461+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 917504 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:58.204602+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 917504 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:59.204744+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 917504 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:00.204967+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 917504 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:01.205110+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 917504 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:02.205273+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 917504 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:03.205412+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 917504 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:04.205613+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 917504 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:05.205795+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 917504 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:06.205967+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 917504 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:07.206123+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 917504 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:08.206239+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 917504 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:09.206416+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 917504 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:10.206660+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 917504 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:11.206782+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 917504 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:12.206935+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 917504 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:13.207490+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 917504 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:14.207594+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 917504 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:15.207720+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 917504 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:16.207849+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 917504 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:17.207977+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 917504 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:18.208038+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 917504 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:19.208144+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 917504 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:20.208289+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 917504 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:21.208404+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 917504 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:22.208516+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 917504 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:23.208878+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 917504 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:24.209098+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 917504 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:25.209256+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 917504 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:26.209411+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 917504 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:27.209600+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 917504 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:28.209764+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 917504 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:29.209936+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 917504 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:30.210101+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 917504 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:31.210386+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 917504 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:32.210534+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 917504 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:33.210679+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 917504 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:34.210816+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 917504 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:35.210963+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 917504 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:36.211105+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 917504 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:37.211214+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 917504 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:38.211428+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 909312 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:39.211552+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 909312 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:40.211715+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 909312 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:41.211874+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 909312 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:42.211981+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 909312 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:43.212101+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 909312 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:44.212276+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 909312 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:45.212428+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:46.212548+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 909312 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:47.212704+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 909312 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:48.212898+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 909312 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:49.213046+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 909312 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:50.213282+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 909312 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:51.213413+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 909312 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:52.213582+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 909312 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:53.213728+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 909312 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:54.214055+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 909312 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:55.214314+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 909312 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:56.214602+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 909312 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:57.214948+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 909312 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:58.215409+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 909312 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:59.215608+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 909312 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:00.215999+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77684736 unmapped: 901120 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:01.216244+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77684736 unmapped: 901120 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:02.216429+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77684736 unmapped: 901120 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:03.216751+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77684736 unmapped: 901120 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:04.216950+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77684736 unmapped: 901120 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:05.217225+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77684736 unmapped: 901120 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:06.217485+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77684736 unmapped: 901120 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:07.217867+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77684736 unmapped: 901120 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:08.218080+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77684736 unmapped: 901120 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:09.218385+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77684736 unmapped: 901120 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:10.218708+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77684736 unmapped: 901120 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:11.218924+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77684736 unmapped: 901120 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:12.219120+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77684736 unmapped: 901120 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:13.219375+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77684736 unmapped: 901120 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:14.219632+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77684736 unmapped: 901120 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:15.219852+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77684736 unmapped: 901120 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:16.220428+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77684736 unmapped: 901120 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:17.221134+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77692928 unmapped: 892928 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:18.221531+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77692928 unmapped: 892928 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:19.222406+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77692928 unmapped: 892928 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:20.222839+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77692928 unmapped: 892928 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:21.223268+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77692928 unmapped: 892928 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:22.223424+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77692928 unmapped: 892928 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:23.223636+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77692928 unmapped: 892928 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:24.223932+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77701120 unmapped: 884736 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:25.224225+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77701120 unmapped: 884736 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 7052 writes, 29K keys, 7052 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 7052 writes, 1301 syncs, 5.42 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 180 writes, 273 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
                                           Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5644640971f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5644640971f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5644640971f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5644640971f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5644640971f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5644640971f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5644640971f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564464097090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564464097090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564464097090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5644640971f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5644640971f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:26.224490+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77742080 unmapped: 843776 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:27.224797+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77742080 unmapped: 843776 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:28.225030+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77742080 unmapped: 843776 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:29.225266+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77742080 unmapped: 843776 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:30.225521+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77742080 unmapped: 843776 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:31.225744+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77742080 unmapped: 843776 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:32.225977+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77742080 unmapped: 843776 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:33.226299+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77742080 unmapped: 843776 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:34.226592+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77742080 unmapped: 843776 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:35.226901+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77742080 unmapped: 843776 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:36.227130+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77742080 unmapped: 843776 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:37.227497+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77742080 unmapped: 843776 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:38.227753+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77742080 unmapped: 843776 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:39.227976+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77742080 unmapped: 843776 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:40.228231+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77742080 unmapped: 843776 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:41.228448+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77742080 unmapped: 843776 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:42.228607+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77742080 unmapped: 843776 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:43.228759+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77742080 unmapped: 843776 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:44.228976+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77742080 unmapped: 843776 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:45.229121+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77742080 unmapped: 843776 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:46.229280+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77742080 unmapped: 843776 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:47.229459+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77750272 unmapped: 835584 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:48.229626+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77750272 unmapped: 835584 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:49.229755+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77750272 unmapped: 835584 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:50.230431+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77750272 unmapped: 835584 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:51.230546+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77758464 unmapped: 827392 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:52.230757+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77758464 unmapped: 827392 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:53.230909+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77758464 unmapped: 827392 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:54.231051+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77758464 unmapped: 827392 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:55.231180+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77758464 unmapped: 827392 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:56.231379+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77758464 unmapped: 827392 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:57.231539+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77758464 unmapped: 827392 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:58.231695+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77758464 unmapped: 827392 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:59.231802+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77758464 unmapped: 827392 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:00.231987+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77758464 unmapped: 827392 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:01.232161+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77758464 unmapped: 827392 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:02.232437+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77758464 unmapped: 827392 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:03.232579+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77758464 unmapped: 827392 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:04.232737+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77758464 unmapped: 827392 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:05.232902+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77758464 unmapped: 827392 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:06.233065+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77758464 unmapped: 827392 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:07.233244+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77758464 unmapped: 827392 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:08.233413+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77758464 unmapped: 827392 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:09.233632+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77758464 unmapped: 827392 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:10.233899+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77758464 unmapped: 827392 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:11.234070+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77758464 unmapped: 827392 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:12.234261+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77758464 unmapped: 827392 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:13.234420+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77758464 unmapped: 827392 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:14.234650+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77758464 unmapped: 827392 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:15.234817+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77758464 unmapped: 827392 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:16.235010+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77758464 unmapped: 827392 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:17.235200+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77758464 unmapped: 827392 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:18.235419+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77758464 unmapped: 827392 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:19.235593+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77758464 unmapped: 827392 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:20.235793+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77758464 unmapped: 827392 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:21.235939+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77758464 unmapped: 827392 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:22.236083+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77758464 unmapped: 827392 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:23.236239+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77758464 unmapped: 827392 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:24.236405+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77758464 unmapped: 827392 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:25.236640+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77758464 unmapped: 827392 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:26.236857+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 599.830261230s of 600.152160645s, submitted: 90
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 77758464 unmapped: 827392 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:27.237027+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 991232 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:28.237194+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 958464 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:29.237436+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 958464 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:30.237636+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 958464 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:31.237820+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 958464 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:32.237977+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 958464 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:33.238105+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 958464 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:34.238250+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 958464 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:35.238389+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 958464 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:36.238561+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 958464 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:37.238740+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 958464 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:38.238854+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 958464 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:39.239007+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 958464 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:40.239218+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 958464 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:41.239401+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 958464 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:42.239585+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 958464 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:43.239723+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 958464 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:44.239841+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:45.240030+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:46.240221+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:47.240401+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:48.240572+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:49.240728+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:50.240911+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:51.241074+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:52.241232+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:53.241389+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:54.241543+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:55.241723+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:56.241951+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:57.242164+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:58.242641+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:59.242779+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:00.242970+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:01.243104+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:02.243252+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:03.243379+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:04.243522+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:05.243749+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:06.243938+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:07.244137+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:08.244372+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:09.244555+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:10.244866+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:11.245051+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:12.245264+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:13.245568+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:14.245761+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:15.245961+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:16.246141+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:17.246387+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:18.247058+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:19.247241+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:20.247452+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:21.247652+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:22.247819+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:23.247984+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:24.248134+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:25.248295+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:26.248430+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:27.248597+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:28.248759+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:29.248939+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:30.249105+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:31.249248+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:32.249433+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:33.249555+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:34.249710+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:35.249846+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:36.250009+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:37.250160+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:38.250316+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:39.250499+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:40.250673+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:41.250876+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78536704 unmapped: 1097728 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:42.251011+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78536704 unmapped: 1097728 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:43.251179+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78536704 unmapped: 1097728 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:44.251435+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78536704 unmapped: 1097728 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:45.251602+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78536704 unmapped: 1097728 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:46.251799+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78536704 unmapped: 1097728 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:47.251973+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78536704 unmapped: 1097728 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:48.252125+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78536704 unmapped: 1097728 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:49.252248+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78536704 unmapped: 1097728 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:50.252410+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78536704 unmapped: 1097728 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:51.252555+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:52.252669+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:53.252829+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:54.253010+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:55.253155+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:56.253423+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:57.253979+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:58.255024+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:59.255533+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:00.255791+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:01.256692+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:02.256944+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:03.257760+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:04.258429+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:05.258945+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:06.259394+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:07.259762+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:08.259897+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:09.260028+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:10.260218+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:11.260409+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:12.260528+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:13.260747+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:14.260903+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:15.261037+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:16.261173+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:17.261307+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:18.261457+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:19.261602+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:20.261801+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:21.262033+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:22.262179+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:23.262407+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:24.262579+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:25.262754+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:26.262937+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:27.263071+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:28.263184+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:29.263391+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:30.263656+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:31.263878+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:32.264093+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:33.264363+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:34.264551+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:35.264739+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:36.264910+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:37.265105+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:38.265241+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:39.265455+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:40.265708+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:41.265995+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:42.268500+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:43.268723+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:44.268951+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:45.269188+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:46.269454+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:47.269693+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:48.269873+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:49.270077+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:50.270390+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:51.270591+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:52.270790+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:53.271027+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:54.271179+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:55.271419+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:56.271688+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:57.271829+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:58.272011+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:59.272194+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:00.272395+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:01.272567+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:02.273818+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:03.274885+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:04.275580+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:05.276395+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:06.276602+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:07.277264+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:08.277827+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:09.278487+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:10.279012+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:11.279176+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:12.279576+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:13.279973+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:14.280285+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:15.280658+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:16.280808+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:17.280982+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:18.281125+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:19.281290+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:20.281500+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:21.281702+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:22.281837+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:23.281969+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:24.282285+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:25.282545+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:26.282694+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:27.282830+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:28.283096+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:29.283299+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:30.283605+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:31.283846+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:32.284089+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:33.284389+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:34.284589+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:35.284777+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:36.284959+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:37.285235+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:38.285499+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:39.285699+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:40.285940+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:41.286129+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:42.286285+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:43.286493+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:44.286639+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:45.286827+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:46.287012+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:47.287229+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:48.287454+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:49.287680+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:50.287880+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:51.288089+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:52.288373+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:53.288594+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:54.288774+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:55.288949+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:56.289138+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:57.289322+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:58.289547+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:59.289750+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:00.289997+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:01.290167+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:02.290438+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1081344 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:03.290623+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1081344 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:04.290883+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1081344 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:05.291133+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1081344 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:06.291323+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1081344 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:07.291585+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1081344 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:08.291780+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1081344 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:09.292061+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1081344 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:10.292269+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1081344 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:11.292463+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1081344 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:12.292664+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1081344 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:13.292831+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1081344 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:14.293026+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1081344 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:15.293254+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1081344 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:16.293448+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1081344 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:17.293630+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1081344 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:18.293808+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1081344 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:19.294047+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1081344 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:20.294306+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1081344 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842483 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:21.294581+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1081344 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x11cf40/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:22.294752+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:23.294922+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:24.295078+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1089536 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:25.295231+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564466850800
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78594048 unmapped: 1040384 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _renew_subs
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 114 handle_osd_map epochs [115,115], i have 114, src has [1,115]
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 239.403076172s of 239.700897217s, submitted: 90
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:26.295406+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848401 data_alloc: 218103808 data_used: 188416
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78602240 unmapped: 1032192 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _renew_subs
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 115 handle_osd_map epochs [116,116], i have 115, src has [1,116]
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:27.295537+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 17653760 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 116 heartbeat osd_stat(store_statfs(0x4fc24e000/0x0/0x4ffc00000, data 0x9206b1/0x9ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _renew_subs
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 116 handle_osd_map epochs [117,117], i have 116, src has [1,117]
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 117 ms_handle_reset con 0x564466850800 session 0x56446803d4a0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:28.295678+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564467a4f400
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 79831040 unmapped: 16588800 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:29.295814+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 79904768 unmapped: 16515072 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _renew_subs
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 117 handle_osd_map epochs [118,118], i have 117, src has [1,118]
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 118 ms_handle_reset con 0x564467a4f400 session 0x564467ed45a0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:30.295979+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 79904768 unmapped: 16515072 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fbdd9000/0x0/0x4ffc00000, data 0xd93de3/0xe44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:31.296124+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946426 data_alloc: 218103808 data_used: 196608
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 79904768 unmapped: 16515072 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:32.296459+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 79904768 unmapped: 16515072 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fbdd9000/0x0/0x4ffc00000, data 0xd93de3/0xe44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:33.296643+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 79904768 unmapped: 16515072 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fbdd9000/0x0/0x4ffc00000, data 0xd93de3/0xe44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:34.296820+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 79904768 unmapped: 16515072 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 118 handle_osd_map epochs [118,119], i have 118, src has [1,119]
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:35.297016+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 79904768 unmapped: 16515072 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:36.297176+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 949560 data_alloc: 218103808 data_used: 200704
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 79904768 unmapped: 16515072 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 119 heartbeat osd_stat(store_statfs(0x4fbdd6000/0x0/0x4ffc00000, data 0xd95846/0xe47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:37.297435+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 79904768 unmapped: 16515072 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:38.297610+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 79904768 unmapped: 16515072 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 119 heartbeat osd_stat(store_statfs(0x4fbdd6000/0x0/0x4ffc00000, data 0xd95846/0xe47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:39.297800+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 79904768 unmapped: 16515072 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:40.298003+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 79904768 unmapped: 16515072 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 119 heartbeat osd_stat(store_statfs(0x4fbdd6000/0x0/0x4ffc00000, data 0xd95846/0xe47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:41.298147+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 949560 data_alloc: 218103808 data_used: 200704
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 79904768 unmapped: 16515072 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:42.298294+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 79904768 unmapped: 16515072 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:43.298395+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 79904768 unmapped: 16515072 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:44.298551+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 79904768 unmapped: 16515072 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:45.298688+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 79904768 unmapped: 16515072 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:46.298809+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 949560 data_alloc: 218103808 data_used: 200704
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 79912960 unmapped: 16506880 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 119 heartbeat osd_stat(store_statfs(0x4fbdd6000/0x0/0x4ffc00000, data 0xd95846/0xe47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:47.298979+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 79912960 unmapped: 16506880 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:48.299141+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 79912960 unmapped: 16506880 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:49.299300+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 119 heartbeat osd_stat(store_statfs(0x4fbdd6000/0x0/0x4ffc00000, data 0xd95846/0xe47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 79912960 unmapped: 16506880 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:50.299536+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 79912960 unmapped: 16506880 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 119 heartbeat osd_stat(store_statfs(0x4fbdd6000/0x0/0x4ffc00000, data 0xd95846/0xe47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:51.299719+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 949560 data_alloc: 218103808 data_used: 200704
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 79912960 unmapped: 16506880 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:52.299892+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 79912960 unmapped: 16506880 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:53.300014+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 79912960 unmapped: 16506880 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 119 heartbeat osd_stat(store_statfs(0x4fbdd6000/0x0/0x4ffc00000, data 0xd95846/0xe47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:54.300138+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 119 heartbeat osd_stat(store_statfs(0x4fbdd6000/0x0/0x4ffc00000, data 0xd95846/0xe47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 79912960 unmapped: 16506880 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:55.300277+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 79912960 unmapped: 16506880 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:56.300378+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 949560 data_alloc: 218103808 data_used: 200704
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 79912960 unmapped: 16506880 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:57.300526+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 119 heartbeat osd_stat(store_statfs(0x4fbdd6000/0x0/0x4ffc00000, data 0xd95846/0xe47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 79912960 unmapped: 16506880 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:58.300680+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 79912960 unmapped: 16506880 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:59.300860+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 79912960 unmapped: 16506880 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:00.301137+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 79912960 unmapped: 16506880 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:01.301321+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 949560 data_alloc: 218103808 data_used: 200704
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 79912960 unmapped: 16506880 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:02.301599+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 79912960 unmapped: 16506880 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 119 heartbeat osd_stat(store_statfs(0x4fbdd6000/0x0/0x4ffc00000, data 0xd95846/0xe47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:03.301730+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 119 heartbeat osd_stat(store_statfs(0x4fbdd6000/0x0/0x4ffc00000, data 0xd95846/0xe47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 79912960 unmapped: 16506880 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564467a4f800
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 38.151039124s of 38.300365448s, submitted: 47
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:04.301865+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _renew_subs
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 119 handle_osd_map epochs [120,120], i have 119, src has [1,120]
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 120 ms_handle_reset con 0x564467a4f800 session 0x564467ed4960
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 79962112 unmapped: 16457728 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564466465000
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:05.301961+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 120 handle_osd_map epochs [120,121], i have 120, src has [1,121]
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 121 ms_handle_reset con 0x564466465000 session 0x5644662d7e00
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 16449536 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564466465c00
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:06.302123+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 966471 data_alloc: 218103808 data_used: 212992
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _renew_subs
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 121 handle_osd_map epochs [122,122], i have 121, src has [1,122]
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 122 ms_handle_reset con 0x564466465c00 session 0x564467ed45a0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 80052224 unmapped: 16367616 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564466850800
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:07.302383+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _renew_subs
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 122 handle_osd_map epochs [123,123], i have 122, src has [1,123]
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 123 ms_handle_reset con 0x564466850800 session 0x564468a761e0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 80142336 unmapped: 16277504 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564467a4f400
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:08.302516+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 123 handle_osd_map epochs [123,124], i have 123, src has [1,124]
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 124 ms_handle_reset con 0x564467a4f400 session 0x564468a77e00
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 80199680 unmapped: 16220160 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fbdbc000/0x0/0x4ffc00000, data 0xd9fdea/0xe61000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564467a4fc00
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:09.302668+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _renew_subs
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 124 handle_osd_map epochs [125,125], i have 124, src has [1,125]
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 125 ms_handle_reset con 0x564467a4fc00 session 0x564468a8da40
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 80281600 unmapped: 16138240 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:10.302913+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fbdb7000/0x0/0x4ffc00000, data 0xda198a/0xe65000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 80281600 unmapped: 16138240 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:11.303187+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995095 data_alloc: 218103808 data_used: 221184
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 80281600 unmapped: 16138240 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:12.303498+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564466465000
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 80281600 unmapped: 16138240 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564466465c00
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:13.303681+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _renew_subs
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 125 handle_osd_map epochs [126,126], i have 125, src has [1,126]
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 126 ms_handle_reset con 0x564466465000 session 0x564468aacd20
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 80306176 unmapped: 16113664 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 126 ms_handle_reset con 0x564466465c00 session 0x564468a8da40
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:14.303880+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564466850800
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.721569061s of 10.287688255s, submitted: 141
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 126 handle_osd_map epochs [126,127], i have 126, src has [1,127]
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 80207872 unmapped: 16211968 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:15.304052+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 127 ms_handle_reset con 0x564466850800 session 0x564468aad0e0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564467a4f400
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x5644662ea000
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 16097280 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fbdb3000/0x0/0x4ffc00000, data 0xda4279/0xe68000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _renew_subs
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 127 handle_osd_map epochs [128,128], i have 127, src has [1,128]
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 128 ms_handle_reset con 0x564467a4f400 session 0x56446882e5a0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:16.304408+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 128 ms_handle_reset con 0x5644662ea000 session 0x564468ab1e00
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001955 data_alloc: 218103808 data_used: 241664
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x5644662ea000
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564466465000
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 80363520 unmapped: 16056320 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 128 handle_osd_map epochs [128,129], i have 128, src has [1,129]
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 129 heartbeat osd_stat(store_statfs(0x4fbdb0000/0x0/0x4ffc00000, data 0xda5836/0xe69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564466465c00
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 129 ms_handle_reset con 0x564466465c00 session 0x5644682490e0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564466850800
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 129 ms_handle_reset con 0x564466850800 session 0x564467d361e0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:17.304777+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 129 ms_handle_reset con 0x5644662ea000 session 0x564467ed41e0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564467a4f400
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 24387584 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564465679400
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _renew_subs
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 129 handle_osd_map epochs [130,130], i have 129, src has [1,130]
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:18.304902+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 130 ms_handle_reset con 0x564467a4f400 session 0x564468ab1a40
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564465a70400
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 80510976 unmapped: 24305664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _renew_subs
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 130 handle_osd_map epochs [131,131], i have 130, src has [1,131]
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 131 ms_handle_reset con 0x564465679400 session 0x56446882ef00
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 131 ms_handle_reset con 0x564466465000 session 0x564467f741e0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:19.305066+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564465679400
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 131 ms_handle_reset con 0x564465679400 session 0x564468becf00
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 131 ms_handle_reset con 0x564465a70400 session 0x564468ab0780
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x5644662ea000
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 131 ms_handle_reset con 0x5644662ea000 session 0x564468bf30e0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 80551936 unmapped: 24264704 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 131 handle_osd_map epochs [131,132], i have 131, src has [1,132]
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:20.305295+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564466465c00
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564466850800
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 80568320 unmapped: 24248320 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _renew_subs
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 132 handle_osd_map epochs [133,133], i have 132, src has [1,133]
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:21.305489+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1022289 data_alloc: 218103808 data_used: 258048
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 133 ms_handle_reset con 0x564466850800 session 0x564468bed0e0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 133 ms_handle_reset con 0x564466465c00 session 0x564468bf34a0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 82771968 unmapped: 22044672 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564465679400
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564465a70400
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _renew_subs
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 133 handle_osd_map epochs [134,134], i have 133, src has [1,134]
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:22.305657+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fb999000/0x0/0x4ffc00000, data 0xdadab8/0xe72000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 134 ms_handle_reset con 0x564465679400 session 0x564468c0e000
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 82804736 unmapped: 22011904 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _renew_subs
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:23.305814+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 135 ms_handle_reset con 0x564465a70400 session 0x564468c063c0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 82870272 unmapped: 21946368 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:24.306054+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 82870272 unmapped: 21946368 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:25.306237+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 135 handle_osd_map epochs [136,136], i have 135, src has [1,136]
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.401144028s of 11.352249146s, submitted: 362
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 82911232 unmapped: 21905408 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:26.306401+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x5644662ea000
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028392 data_alloc: 218103808 data_used: 262144
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _renew_subs
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 137 ms_handle_reset con 0x5644662ea000 session 0x564468c0e780
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 82919424 unmapped: 21897216 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:27.306590+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 82919424 unmapped: 21897216 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:28.306758+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fb98e000/0x0/0x4ffc00000, data 0xdb4954/0xe7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564466465000
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 82919424 unmapped: 21897216 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:29.306895+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fb98e000/0x0/0x4ffc00000, data 0xdb4954/0xe7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 137 handle_osd_map epochs [137,138], i have 137, src has [1,138]
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 138 ms_handle_reset con 0x564466465000 session 0x564468c0eb40
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 82919424 unmapped: 21897216 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:30.307049+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564465679400
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _renew_subs
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 82935808 unmapped: 21880832 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 139 handle_osd_map epochs [139,140], i have 139, src has [1,140]
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:31.307197+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1043685 data_alloc: 218103808 data_used: 274432
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 140 ms_handle_reset con 0x564465679400 session 0x564468c0f2c0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564465a70400
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 140 ms_handle_reset con 0x564465a70400 session 0x564468f20d20
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x5644662ea000
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564466465c00
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 82952192 unmapped: 21864448 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _renew_subs
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 140 handle_osd_map epochs [141,141], i have 140, src has [1,141]
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:32.307315+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 82984960 unmapped: 21831680 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _renew_subs
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 141 handle_osd_map epochs [142,142], i have 141, src has [1,142]
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 142 ms_handle_reset con 0x564466465c00 session 0x564467ccb4a0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 142 ms_handle_reset con 0x5644662ea000 session 0x564468f20f00
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:33.307509+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564467a4f400
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 142 ms_handle_reset con 0x564467a4f400 session 0x564468f210e0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 82993152 unmapped: 21823488 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:34.307695+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 82993152 unmapped: 21823488 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:35.307828+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fb97e000/0x0/0x4ffc00000, data 0xdbd38a/0xe8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 82993152 unmapped: 21823488 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:36.307954+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fb97e000/0x0/0x4ffc00000, data 0xdbd38a/0xe8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051758 data_alloc: 218103808 data_used: 274432
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 82993152 unmapped: 21823488 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:37.308163+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 82993152 unmapped: 21823488 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:38.308364+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 82993152 unmapped: 21823488 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:39.308507+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 82993152 unmapped: 21823488 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:40.308683+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 142 handle_osd_map epochs [143,143], i have 142, src has [1,143]
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.345084190s of 14.652987480s, submitted: 90
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83001344 unmapped: 21815296 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:41.308858+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053692 data_alloc: 218103808 data_used: 274432
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fb97c000/0x0/0x4ffc00000, data 0xdbee25/0xe91000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564465679400
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 143 ms_handle_reset con 0x564465679400 session 0x564468f214a0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83001344 unmapped: 21815296 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:42.308989+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564465a70400
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 143 ms_handle_reset con 0x564465a70400 session 0x564468f21680
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83001344 unmapped: 21815296 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:43.309116+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x5644662ea000
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 143 ms_handle_reset con 0x5644662ea000 session 0x564468f21860
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564466465c00
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83001344 unmapped: 21815296 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:44.309283+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 143 handle_osd_map epochs [144,144], i have 143, src has [1,144]
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83001344 unmapped: 21815296 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 144 handle_osd_map epochs [144,145], i have 144, src has [1,145]
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 145 ms_handle_reset con 0x564466465c00 session 0x564468f21a40
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:45.309516+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564465a70800
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564465a70000
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 145 heartbeat osd_stat(store_statfs(0x4fb979000/0x0/0x4ffc00000, data 0xdc09a2/0xe94000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83320832 unmapped: 21495808 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:46.309647+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1065226 data_alloc: 218103808 data_used: 278528
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83320832 unmapped: 21495808 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:47.309785+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83320832 unmapped: 21495808 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:48.309911+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 145 heartbeat osd_stat(store_statfs(0x4fb950000/0x0/0x4ffc00000, data 0xde6583/0xebc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83320832 unmapped: 21495808 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:49.310061+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564467fc2000
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83329024 unmapped: 21487616 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _renew_subs
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 145 handle_osd_map epochs [146,146], i have 145, src has [1,146]
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564468a88000
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 146 ms_handle_reset con 0x564467fc2000 session 0x564465cfe780
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:50.310244+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb94d000/0x0/0x4ffc00000, data 0xde817a/0xec0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83329024 unmapped: 21487616 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _renew_subs
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 146 handle_osd_map epochs [147,147], i have 146, src has [1,147]
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.499062538s of 10.602461815s, submitted: 55
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 147 ms_handle_reset con 0x564468a88000 session 0x564468a77a40
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:51.310378+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1072974 data_alloc: 218103808 data_used: 282624
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83329024 unmapped: 21487616 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564467fc2000
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:52.310501+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 147 ms_handle_reset con 0x564467fc2000 session 0x5644659aa780
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564465679400
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83337216 unmapped: 21479424 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 147 ms_handle_reset con 0x564465679400 session 0x564467fe0000
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564465a70400
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:53.310600+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fb949000/0x0/0x4ffc00000, data 0xde9cf7/0xec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83353600 unmapped: 21463040 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 147 ms_handle_reset con 0x564465a70400 session 0x5644675abc20
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:54.310708+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83353600 unmapped: 21463040 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:55.310887+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x5644662ea000
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _renew_subs
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 147 handle_osd_map epochs [148,148], i have 147, src has [1,148]
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83099648 unmapped: 21716992 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 148 handle_osd_map epochs [148,149], i have 148, src has [1,149]
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fb948000/0x0/0x4ffc00000, data 0xdeb886/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:56.311025+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 149 ms_handle_reset con 0x5644662ea000 session 0x5644679674a0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1076434 data_alloc: 218103808 data_used: 282624
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564467d5b400
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 21725184 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:57.311200+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83107840 unmapped: 21708800 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _renew_subs
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 149 handle_osd_map epochs [150,150], i have 149, src has [1,150]
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 150 ms_handle_reset con 0x564467d5b400 session 0x564466798000
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:58.311300+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83083264 unmapped: 21733376 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fb93e000/0x0/0x4ffc00000, data 0xdef46e/0xece000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:59.311455+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83083264 unmapped: 21733376 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:00.311611+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 150 handle_osd_map epochs [151,151], i have 150, src has [1,151]
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb93e000/0x0/0x4ffc00000, data 0xdef46e/0xece000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 21725184 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:01.311764+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1088920 data_alloc: 218103808 data_used: 286720
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb93c000/0x0/0x4ffc00000, data 0xdf0f09/0xed1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 21725184 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x5644662ea000
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 151 ms_handle_reset con 0x5644662ea000 session 0x564468bf3860
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:02.311894+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564465679400
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.009598732s of 11.205535889s, submitted: 72
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 151 ms_handle_reset con 0x564465679400 session 0x56446882ed20
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564465a70400
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 151 ms_handle_reset con 0x564465a70400 session 0x564468bec960
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564467fc2000
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 151 ms_handle_reset con 0x564467fc2000 session 0x564468248f00
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564467fc2000
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 151 ms_handle_reset con 0x564467fc2000 session 0x564467ed52c0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564465679400
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 151 ms_handle_reset con 0x564465679400 session 0x564468a23680
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83075072 unmapped: 21741568 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x5644662ea000
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 151 ms_handle_reset con 0x5644662ea000 session 0x564468249c20
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:03.312013+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564467d5b400
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 151 ms_handle_reset con 0x564467d5b400 session 0x564468aacf00
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83075072 unmapped: 21741568 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564466465c00
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 151 ms_handle_reset con 0x564466465c00 session 0x564468c0f4a0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb93c000/0x0/0x4ffc00000, data 0xdf0f09/0xed1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564466465c00
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:04.312137+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 151 ms_handle_reset con 0x564466465c00 session 0x564468a22f00
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564465679400
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x5644662ea000
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83378176 unmapped: 21438464 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:05.312416+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83378176 unmapped: 21438464 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:06.312571+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1093274 data_alloc: 218103808 data_used: 299008
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83378176 unmapped: 21438464 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:07.312736+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb918000/0x0/0x4ffc00000, data 0xe14f19/0xef6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83378176 unmapped: 21438464 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:08.312880+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564467d5b400
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 151 ms_handle_reset con 0x564467d5b400 session 0x5644682481e0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564467fc2000
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _renew_subs
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 151 handle_osd_map epochs [152,152], i have 151, src has [1,152]
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 152 ms_handle_reset con 0x564467fc2000 session 0x564465a652c0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564468be4000
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564468be4400
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 152 ms_handle_reset con 0x564468be4000 session 0x564467cd45a0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 152 ms_handle_reset con 0x564468be4400 session 0x56446882e1e0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564468be4400
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 152 ms_handle_reset con 0x564468be4400 session 0x564467cce960
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564466465c00
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83369984 unmapped: 21446656 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:09.312991+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 152 ms_handle_reset con 0x564466465c00 session 0x56446561ef00
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564467d5b400
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _renew_subs
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 152 handle_osd_map epochs [153,153], i have 152, src has [1,153]
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 153 heartbeat osd_stat(store_statfs(0x4fb90f000/0x0/0x4ffc00000, data 0xe18667/0xefc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 21430272 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:10.313120+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _renew_subs
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 153 handle_osd_map epochs [154,154], i have 153, src has [1,154]
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 154 ms_handle_reset con 0x564467d5b400 session 0x56446561e3c0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 21430272 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:11.313268+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564467fc2000
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1107114 data_alloc: 218103808 data_used: 307200
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 154 ms_handle_reset con 0x564467fc2000 session 0x5644680e8f00
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564468be4000
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 154 ms_handle_reset con 0x564468be4000 session 0x564467d37680
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564466465c00
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 154 ms_handle_reset con 0x564466465c00 session 0x56446678bc20
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 21381120 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:12.313411+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564467d5b400
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.190019608s of 10.323541641s, submitted: 41
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 154 ms_handle_reset con 0x564467d5b400 session 0x56446678be00
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 21364736 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:13.313512+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564467fc2000
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _renew_subs
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 154 handle_osd_map epochs [155,155], i have 154, src has [1,155]
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 155 ms_handle_reset con 0x564467fc2000 session 0x5644679365a0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 21323776 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:14.313633+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 155 heartbeat osd_stat(store_statfs(0x4fb90f000/0x0/0x4ffc00000, data 0xe1a200/0xeff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 21323776 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 155 ms_handle_reset con 0x564465679400 session 0x564468c07860
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 155 ms_handle_reset con 0x5644662ea000 session 0x564468bf34a0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:15.313793+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564465679400
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _renew_subs
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 155 handle_osd_map epochs [156,156], i have 155, src has [1,156]
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 156 handle_osd_map epochs [156,157], i have 156, src has [1,157]
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _renew_subs
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 157 handle_osd_map epochs [157,157], i have 157, src has [1,157]
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 21291008 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:16.314079+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 157 ms_handle_reset con 0x564465679400 session 0x564468aad0e0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1112227 data_alloc: 218103808 data_used: 315392
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 157 heartbeat osd_stat(store_statfs(0x4fb92a000/0x0/0x4ffc00000, data 0xdfb40a/0xee2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 21274624 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:17.314600+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 21250048 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:18.314999+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 157 heartbeat osd_stat(store_statfs(0x4fb92a000/0x0/0x4ffc00000, data 0xdfb40a/0xee2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83599360 unmapped: 21217280 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:19.315236+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 157 ms_handle_reset con 0x564465a70800 session 0x564468f21e00
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 157 ms_handle_reset con 0x564465a70000 session 0x564468201e00
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83599360 unmapped: 21217280 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:20.315668+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 157 handle_osd_map epochs [158,158], i have 157, src has [1,158]
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x5644662ea000
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 158 ms_handle_reset con 0x5644662ea000 session 0x564468bf3e00
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 21168128 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:21.315992+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111632 data_alloc: 218103808 data_used: 315392
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 21168128 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:22.316315+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564466465c00
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 158 ms_handle_reset con 0x564466465c00 session 0x564468bed4a0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564465679400
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.668135643s of 10.053568840s, submitted: 175
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 21168128 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:23.316624+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 158 handle_osd_map epochs [158,159], i have 158, src has [1,159]
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 159 ms_handle_reset con 0x564465679400 session 0x564468aad860
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 159 heartbeat osd_stat(store_statfs(0x4fb94e000/0x0/0x4ffc00000, data 0xdd8e4d/0xebf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 21168128 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:24.316885+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 21168128 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:25.317174+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 21168128 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:26.317407+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1113726 data_alloc: 218103808 data_used: 311296
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 159 heartbeat osd_stat(store_statfs(0x4fb94b000/0x0/0x4ffc00000, data 0xddaa1e/0xec2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 21168128 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:27.317623+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 21168128 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:28.317824+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 21168128 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:29.317982+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 21168128 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:30.318171+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 21168128 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:31.318661+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1113726 data_alloc: 218103808 data_used: 311296
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 159 handle_osd_map epochs [160,160], i have 159, src has [1,160]
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 21168128 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:32.319299+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb948000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 21168128 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:33.319868+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 21168128 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:34.320036+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 21168128 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:35.320591+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb948000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 21168128 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:36.320981+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1116700 data_alloc: 218103808 data_used: 311296
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 21168128 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:37.321199+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 21168128 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:38.321418+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 21168128 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:39.321643+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 21168128 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:40.321871+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb948000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 21168128 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:41.322024+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1116700 data_alloc: 218103808 data_used: 311296
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 21168128 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:42.322197+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb948000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 21168128 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:43.322785+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 21168128 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:44.322907+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 21168128 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:45.323070+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:46.323280+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 21168128 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1116700 data_alloc: 218103808 data_used: 311296
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:47.323393+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 21168128 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb948000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:48.323538+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 21168128 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:49.323733+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 21168128 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:50.323885+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 21168128 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:51.324027+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 21168128 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb948000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1116700 data_alloc: 218103808 data_used: 311296
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb948000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:52.324209+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 21168128 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:53.324408+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 21168128 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:54.324562+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 21168128 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:55.324730+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 21168128 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:56.324911+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 21168128 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1116700 data_alloc: 218103808 data_used: 311296
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:57.325111+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb948000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 21159936 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:58.325232+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 21159936 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:59.325402+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 21159936 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:00.325537+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 21159936 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:01.325666+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 21159936 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1116700 data_alloc: 218103808 data_used: 311296
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb948000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:02.325813+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 21159936 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:03.325992+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 21159936 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:04.326156+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 21159936 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb948000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:05.326383+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 21159936 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb948000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:06.326541+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 21159936 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1116700 data_alloc: 218103808 data_used: 311296
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:07.326720+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 21159936 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:08.326885+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb948000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 21159936 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:09.327077+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 21159936 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:10.327278+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 21159936 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:11.327433+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 21159936 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1116700 data_alloc: 218103808 data_used: 311296
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:12.327620+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 21151744 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb948000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:13.327817+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 21151744 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:14.327977+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 21151744 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:15.328119+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 21151744 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:16.328324+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 21151744 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb948000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1116700 data_alloc: 218103808 data_used: 311296
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:17.328562+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 21151744 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:18.328764+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 21151744 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:19.328929+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb948000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 21151744 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:20.329433+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 21151744 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:21.329911+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 21151744 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1116700 data_alloc: 218103808 data_used: 311296
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:22.330147+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 21151744 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:23.330912+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 21151744 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:24.331686+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 21151744 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb948000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:25.332124+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 21151744 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:26.332279+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 21151744 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1116700 data_alloc: 218103808 data_used: 311296
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:27.332452+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 21151744 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:28.335494+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 21151744 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:29.335677+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb948000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 21151744 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:30.335881+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 21151744 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:31.336027+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 21151744 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb948000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1116700 data_alloc: 218103808 data_used: 311296
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:32.336162+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 21151744 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:33.336285+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 21151744 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:34.336590+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 21151744 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb948000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:35.336734+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 21151744 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:36.336853+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 21151744 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1116700 data_alloc: 218103808 data_used: 311296
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:37.336989+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 21151744 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:38.337111+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 21151744 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:39.337263+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb948000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: do_command 'config diff' '{prefix=config diff}'
Oct 11 05:00:54 compute-0 ceph-osd[88467]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 11 05:00:54 compute-0 ceph-osd[88467]: do_command 'config show' '{prefix=config show}'
Oct 11 05:00:54 compute-0 ceph-osd[88467]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 20684800 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: do_command 'counter dump' '{prefix=counter dump}'
Oct 11 05:00:54 compute-0 ceph-osd[88467]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 11 05:00:54 compute-0 ceph-osd[88467]: do_command 'counter schema' '{prefix=counter schema}'
Oct 11 05:00:54 compute-0 ceph-osd[88467]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:40.337438+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84303872 unmapped: 20512768 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:41.337556+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20463616 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: do_command 'log dump' '{prefix=log dump}'
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1116700 data_alloc: 218103808 data_used: 311296
Oct 11 05:00:54 compute-0 ceph-osd[88467]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:42.337741+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20463616 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: do_command 'perf dump' '{prefix=perf dump}'
Oct 11 05:00:54 compute-0 ceph-osd[88467]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Oct 11 05:00:54 compute-0 ceph-osd[88467]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Oct 11 05:00:54 compute-0 ceph-osd[88467]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Oct 11 05:00:54 compute-0 ceph-osd[88467]: do_command 'perf schema' '{prefix=perf schema}'
Oct 11 05:00:54 compute-0 ceph-osd[88467]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:43.337919+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb948000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:44.338036+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:45.338146+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:46.338315+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb948000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1116700 data_alloc: 218103808 data_used: 311296
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:47.338571+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:48.338696+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb948000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:49.338800+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:50.339555+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb948000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:51.339670+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1116700 data_alloc: 218103808 data_used: 311296
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:52.339792+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb948000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:53.347393+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:54.347499+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:55.347611+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:56.347763+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb948000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1116700 data_alloc: 218103808 data_used: 311296
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb948000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:57.347874+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:58.347996+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:59.348122+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:00.348286+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:01.348429+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1116700 data_alloc: 218103808 data_used: 311296
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:02.348607+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb948000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:03.348737+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:04.348896+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb948000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:05.349019+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb948000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:06.349247+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1116700 data_alloc: 218103808 data_used: 311296
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:07.349405+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:08.349581+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:09.349740+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb948000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:10.349941+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:11.350095+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1116700 data_alloc: 218103808 data_used: 311296
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:12.350254+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:13.350411+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:14.350559+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb948000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:15.350725+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:16.350873+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb948000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb948000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1116700 data_alloc: 218103808 data_used: 311296
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:17.351002+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:18.351161+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:19.351346+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:20.351554+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:21.351706+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1116700 data_alloc: 218103808 data_used: 311296
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb948000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:22.351860+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb948000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:23.352043+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:24.352235+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:25.352437+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb948000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:26.352611+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1116700 data_alloc: 218103808 data_used: 311296
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:27.352839+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb948000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:28.353008+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:29.353183+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:30.353373+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:31.353596+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1116700 data_alloc: 218103808 data_used: 311296
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:32.353825+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb948000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:33.354044+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb948000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:34.354160+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:35.354375+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb948000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:36.354535+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1116700 data_alloc: 218103808 data_used: 311296
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:37.354673+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:38.354862+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:39.355036+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb948000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:40.355230+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:41.355392+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1116700 data_alloc: 218103808 data_used: 311296
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:42.355556+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:43.355754+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb948000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:44.355921+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:45.356071+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:46.356242+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1116700 data_alloc: 218103808 data_used: 311296
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:47.356434+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:48.356625+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb948000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:49.356831+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:50.357028+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb948000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:51.357209+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1116700 data_alloc: 218103808 data_used: 311296
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:52.357417+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb948000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:53.357599+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:54.357752+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb948000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:55.357976+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb948000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:56.358086+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1116700 data_alloc: 218103808 data_used: 311296
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:57.358262+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb948000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:58.358432+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:59.358584+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:00.358848+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:01.359029+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:02.359180+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1116700 data_alloc: 218103808 data_used: 311296
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb948000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:03.359289+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:04.359505+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:05.359682+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb948000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:06.359843+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:07.359991+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1116700 data_alloc: 218103808 data_used: 311296
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:08.360128+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:09.360256+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb948000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:10.360419+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:11.360554+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:12.360755+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1116700 data_alloc: 218103808 data_used: 311296
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb948000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:13.360966+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb948000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:14.361138+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:15.361300+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:16.361605+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb948000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:17.361791+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1116700 data_alloc: 218103808 data_used: 311296
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:18.361927+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:19.362064+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb948000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:20.362256+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:21.362426+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:22.362610+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1116700 data_alloc: 218103808 data_used: 311296
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:23.362786+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb948000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:24.362921+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:25.363090+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 8982 writes, 34K keys, 8982 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 8982 writes, 2130 syncs, 4.22 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1930 writes, 5011 keys, 1930 commit groups, 1.0 writes per commit group, ingest: 2.65 MB, 0.00 MB/s
                                           Interval WAL: 1930 writes, 829 syncs, 2.33 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
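The DUMPING STATS block that RocksDB emits periodically (here on a ~600 s interval) summarises write activity since startup and over the last interval; the MB/s figures it prints follow directly from the counters, e.g. 0.02 GB ingested over the 1800 s uptime is about 0.01 MB/s. A short check of that arithmetic:

    # Reproduce the MB/s rates printed in the RocksDB "DB Stats" dump above.
    uptime_s, interval_s = 1800.1, 600.0
    cumulative_ingest_gb = 0.02      # "Cumulative writes: ... ingest: 0.02 GB"
    interval_ingest_mb = 2.65        # "Interval writes: ... ingest: 2.65 MB"

    print(f"cumulative ingest: {cumulative_ingest_gb * 1024 / uptime_s:.2f} MB/s")
    print(f"interval ingest  : {interval_ingest_mb / interval_s:.3f} MB/s")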
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:26.363230+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:27.363397+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1116700 data_alloc: 218103808 data_used: 311296
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:28.363550+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:29.363727+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb948000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:30.363934+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 ms_handle_reset con 0x5644658e9c00 session 0x564464e70f00
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564465a70000
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 20381696 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:31.364097+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: mgrc ms_handle_reset ms_handle_reset con 0x564467e90000
Oct 11 05:00:54 compute-0 ceph-osd[88467]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1439243141
Oct 11 05:00:54 compute-0 ceph-osd[88467]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1439243141,v1:192.168.122.100:6801/1439243141]
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: get_auth_request con 0x564468be4000 auth_method 0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: mgrc handle_mgr_configure stats_period=5
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 20291584 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 ms_handle_reset con 0x5644662eb000 session 0x5644664561e0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x564467fba000
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 ms_handle_reset con 0x5644662eb400 session 0x5644661494a0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: handle_auth_request added challenge on 0x5644662eb000
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:32.364409+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1116700 data_alloc: 218103808 data_used: 311296
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 20291584 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:33.364567+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 20291584 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:34.364746+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 20291584 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:35.364889+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb948000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 20291584 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:36.365017+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 20291584 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb948000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:37.365185+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1116700 data_alloc: 218103808 data_used: 311296
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 20291584 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:38.365371+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 20291584 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:39.365589+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 20291584 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:40.365786+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 20291584 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb948000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:41.365920+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 20291584 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb948000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:42.366094+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1116700 data_alloc: 218103808 data_used: 311296
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 20291584 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:43.366286+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 20291584 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:44.366491+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 20291584 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:45.366732+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 20291584 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:46.366932+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 20291584 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:47.367110+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1116700 data_alloc: 218103808 data_used: 311296
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 20291584 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb948000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:48.367386+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 20291584 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:49.367676+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 20291584 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:50.367943+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb948000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 20291584 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:51.368160+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 20291584 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:52.368394+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1116700 data_alloc: 218103808 data_used: 311296
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 20291584 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:53.368545+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 20291584 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:54.368665+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb948000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 20291584 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:55.368818+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 20291584 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb948000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:56.368980+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb948000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 20291584 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:57.369151+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1116700 data_alloc: 218103808 data_used: 311296
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 20291584 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:58.369302+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 20291584 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:59.369488+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 20291584 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:00.369712+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb948000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 20291584 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:01.369895+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 20291584 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:02.370069+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1116700 data_alloc: 218103808 data_used: 311296
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 20291584 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:03.370252+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 20291584 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:04.370393+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb948000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 20307968 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:05.370529+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb948000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 20307968 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:06.370686+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb948000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 20307968 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:07.370855+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1116700 data_alloc: 218103808 data_used: 311296
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 20307968 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:08.371022+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 20307968 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:09.371168+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb948000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 20307968 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:10.371418+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 20307968 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:11.371564+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 20307968 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:12.371713+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1116700 data_alloc: 218103808 data_used: 311296
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 20307968 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:13.371870+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 20307968 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:14.372055+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 20307968 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:15.372202+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb948000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 20307968 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:16.372417+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 20307968 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:17.372634+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1116700 data_alloc: 218103808 data_used: 311296
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 20307968 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:18.372791+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb948000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 20307968 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:19.372946+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 20307968 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:20.373124+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 20307968 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:21.373361+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 20307968 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:22.373543+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1116700 data_alloc: 218103808 data_used: 311296
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 20307968 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:23.373716+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 20307968 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:24.373892+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb948000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 20307968 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:25.374091+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 20307968 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:26.374234+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 243.580978394s of 243.646545410s, submitted: 28
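The _kv_sync_thread utilization line says the BlueStore key/value sync thread was idle for 243.58 s of a 243.65 s window while committing 28 transactions, i.e. about 99.97% idle and roughly 2.3 ms of work per submission. Checking that arithmetic:

    # Derive busy time and per-commit cost from the _kv_sync_thread utilization line.
    idle_s, window_s, submitted = 243.580978394, 243.646545410, 28

    busy_s = window_s - idle_s
    print(f"idle fraction : {idle_s / window_s:.4%}")
    print(f"busy time     : {busy_s * 1000:.1f} ms")
    print(f"per submission: {busy_s / submitted * 1000:.2f} ms")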
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 20267008 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:27.374408+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115892 data_alloc: 218103808 data_used: 311296
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:28.374567+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:29.374733+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb949000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb949000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:30.374925+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:31.375110+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:32.375268+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115820 data_alloc: 218103808 data_used: 311296
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:33.375502+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb949000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:34.375711+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb949000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:35.375943+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:36.376137+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:37.376389+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115820 data_alloc: 218103808 data_used: 311296
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:38.376608+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb949000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:39.376772+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:40.376938+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb949000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:41.377104+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:42.377287+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115820 data_alloc: 218103808 data_used: 311296
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb949000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:43.377464+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:44.377659+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb949000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:45.377851+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb949000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:46.378030+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:47.378235+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115820 data_alloc: 218103808 data_used: 311296
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:48.378498+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:49.378670+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:50.378915+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:51.379119+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb949000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:52.379289+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115820 data_alloc: 218103808 data_used: 311296
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb949000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:53.379478+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:54.379656+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:55.379821+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:56.379977+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:57.380142+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115820 data_alloc: 218103808 data_used: 311296
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:58.380306+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb949000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:59.380511+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb949000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:00.380721+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:01.380976+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:02.381165+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115820 data_alloc: 218103808 data_used: 311296
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:03.381386+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:04.381554+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:05.381729+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb949000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:06.381915+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:07.382750+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115820 data_alloc: 218103808 data_used: 311296
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:08.382931+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:09.384448+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:10.384820+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:11.385049+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb949000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:12.385713+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115820 data_alloc: 218103808 data_used: 311296
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:13.386115+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:14.386495+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb949000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:15.386788+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:16.386938+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:17.387096+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115820 data_alloc: 218103808 data_used: 311296
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:18.387241+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb949000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:19.387440+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:20.387728+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:21.387956+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:22.388081+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115820 data_alloc: 218103808 data_used: 311296
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:23.388225+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:24.388402+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb949000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:25.388552+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:26.388686+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:27.388921+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb949000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115820 data_alloc: 218103808 data_used: 311296
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:28.389058+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:29.389429+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:30.389680+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb949000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:31.389908+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:32.390118+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115820 data_alloc: 218103808 data_used: 311296
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:33.390325+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:34.390514+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:35.390654+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:36.390840+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb949000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:37.391023+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115820 data_alloc: 218103808 data_used: 311296
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:38.391171+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:39.391317+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:40.391565+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:41.391715+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb949000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:42.391859+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115820 data_alloc: 218103808 data_used: 311296
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:43.392063+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:44.392256+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:45.392383+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb949000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:46.392486+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:47.392624+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115820 data_alloc: 218103808 data_used: 311296
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:48.392769+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:49.392926+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:50.393109+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb949000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:51.393288+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:52.393459+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115820 data_alloc: 218103808 data_used: 311296
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:53.393639+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb949000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:54.393808+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:55.393961+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:56.394128+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:57.394288+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115820 data_alloc: 218103808 data_used: 311296
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:58.394421+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb949000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:59.394583+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:00.394803+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:01.394970+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:02.395110+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115820 data_alloc: 218103808 data_used: 311296
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:03.395391+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:04.395543+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb949000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:05.395693+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:06.395878+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:07.396030+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb949000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115820 data_alloc: 218103808 data_used: 311296
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:08.396205+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:09.396435+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:10.398103+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:11.400475+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb949000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:12.401233+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115820 data_alloc: 218103808 data_used: 311296
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:13.401945+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:14.403610+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb949000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:15.404312+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb949000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:16.404621+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:17.405189+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115820 data_alloc: 218103808 data_used: 311296
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:18.405829+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:19.406379+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:20.406769+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb949000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:21.407032+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:22.407199+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115820 data_alloc: 218103808 data_used: 311296
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb949000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:23.407414+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:24.407586+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:25.408025+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:26.408184+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb949000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:27.408369+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb949000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115820 data_alloc: 218103808 data_used: 311296
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:28.408633+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:29.408789+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb949000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb949000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:30.408946+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:31.409152+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:32.409358+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb949000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115820 data_alloc: 218103808 data_used: 311296
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:33.409531+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:34.409663+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb949000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:35.409836+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:36.410046+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:37.410230+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115820 data_alloc: 218103808 data_used: 311296
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:38.410609+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:39.410972+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb949000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:40.411171+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:41.411299+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb949000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:42.411619+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115820 data_alloc: 218103808 data_used: 311296
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:43.411930+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb949000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:44.412179+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:45.412371+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb949000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:46.412622+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb949000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:47.412867+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115820 data_alloc: 218103808 data_used: 311296
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:48.413046+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:49.413245+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:50.413530+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:51.413681+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb949000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:52.413834+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115820 data_alloc: 218103808 data_used: 311296
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:53.413983+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb949000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb949000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:54.414159+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:55.414451+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:56.414659+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:57.414849+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115820 data_alloc: 218103808 data_used: 311296
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:58.415045+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:59.415234+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb949000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T05:00:00.415405+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T05:00:01.415569+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T05:00:02.415713+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115820 data_alloc: 218103808 data_used: 311296
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T05:00:03.415856+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb949000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T05:00:04.415996+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T05:00:05.416118+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T05:00:06.416247+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T05:00:07.416430+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb949000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115820 data_alloc: 218103808 data_used: 311296
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T05:00:08.416559+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb949000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T05:00:09.416731+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb949000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T05:00:10.416934+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T05:00:11.417100+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb949000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T05:00:12.417258+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115820 data_alloc: 218103808 data_used: 311296
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T05:00:13.417407+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb949000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T05:00:14.417559+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T05:00:15.417720+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T05:00:16.417888+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T05:00:17.418094+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115820 data_alloc: 218103808 data_used: 311296
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T05:00:18.418255+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T05:00:19.418422+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb949000/0x0/0x4ffc00000, data 0xddc481/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T05:00:20.418624+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 20209664 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T05:00:21.418784+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: do_command 'config diff' '{prefix=config diff}'
Oct 11 05:00:54 compute-0 ceph-osd[88467]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 11 05:00:54 compute-0 ceph-osd[88467]: do_command 'config show' '{prefix=config show}'
Oct 11 05:00:54 compute-0 ceph-osd[88467]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20463616 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: do_command 'counter dump' '{prefix=counter dump}'
Oct 11 05:00:54 compute-0 ceph-osd[88467]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 11 05:00:54 compute-0 ceph-osd[88467]: do_command 'counter schema' '{prefix=counter schema}'
Oct 11 05:00:54 compute-0 ceph-osd[88467]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T05:00:22.418912+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 20414464 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:54 compute-0 ceph-osd[88467]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:54 compute-0 ceph-osd[88467]: bluestore.MempoolThread(0x564464175b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115820 data_alloc: 218103808 data_used: 311296
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T05:00:23.419052+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: prioritycache tune_memory target: 4294967296 mapped: 84926464 unmapped: 19890176 heap: 104816640 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: tick
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_tickets
Oct 11 05:00:54 compute-0 ceph-osd[88467]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T05:00:24.419179+0000)
Oct 11 05:00:54 compute-0 ceph-osd[88467]: do_command 'log dump' '{prefix=log dump}'
Oct 11 05:00:54 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0) v1
Oct 11 05:00:54 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2257998332' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 11 05:00:54 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Oct 11 05:00:54 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2914939324' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 11 05:00:54 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1200: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 05:00:55 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/3394193173' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 11 05:00:55 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/1856652676' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 11 05:00:55 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/3805951971' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct 11 05:00:55 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/2257998332' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 11 05:00:55 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/2914939324' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 11 05:00:55 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0) v1
Oct 11 05:00:55 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1020735172' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct 11 05:00:55 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:00:55 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd utilization"} v 0) v1
Oct 11 05:00:55 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2810183421' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct 11 05:00:55 compute-0 rsyslogd[1004]: imjournal from <np0005480869:ceph-osd>: begin to drop messages due to rate-limiting
Oct 11 05:00:55 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.15149 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 05:00:55 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.15151 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 05:00:55 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.15153 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 05:00:56 compute-0 ceph-mon[74243]: pgmap v1200: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 05:00:56 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/1020735172' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct 11 05:00:56 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/2810183421' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct 11 05:00:56 compute-0 ceph-mon[74243]: from='client.15149 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 05:00:56 compute-0 ceph-mon[74243]: from='client.15151 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 05:00:56 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.15155 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 05:00:56 compute-0 ceph-mgr[74542]: [balancer INFO root] Optimize plan auto_2025-10-11_05:00:56
Oct 11 05:00:56 compute-0 ceph-mgr[74542]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 05:00:56 compute-0 ceph-mgr[74542]: [balancer INFO root] do_upmap
Oct 11 05:00:56 compute-0 ceph-mgr[74542]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.control', 'vms', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.log', 'volumes', '.mgr', 'backups', 'images']
Oct 11 05:00:56 compute-0 ceph-mgr[74542]: [balancer INFO root] prepared 0/10 changes
Oct 11 05:00:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:00:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:00:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:00:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:00:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:00:56 compute-0 ceph-mgr[74542]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:00:56 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.15157 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 05:00:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 05:00:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:00:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 05:00:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:00:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:00:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:00:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:00:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:00:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:00:56 compute-0 ceph-mgr[74542]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:00:56 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.15161 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 05:00:56 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1201: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 05:00:56 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.15165 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 05:00:56 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "quorum_status"} v 0) v1
Oct 11 05:00:56 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3147340323' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct 11 05:00:57 compute-0 ceph-mon[74243]: from='client.15153 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 05:00:57 compute-0 ceph-mon[74243]: from='client.15155 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 05:00:57 compute-0 ceph-mon[74243]: from='client.15157 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 05:00:57 compute-0 ceph-mon[74243]: from='client.15161 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 05:00:57 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/3147340323' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct 11 05:00:57 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.15167 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 05:00:57 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions"} v 0) v1
Oct 11 05:00:57 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4246576644' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 11 05:00:57 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.15171 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 05:00:57 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0) v1
Oct 11 05:00:57 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4046325957' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 11 05:00:58 compute-0 ceph-mon[74243]: pgmap v1201: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 05:00:58 compute-0 ceph-mon[74243]: from='client.15165 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 05:00:58 compute-0 ceph-mon[74243]: from='client.15167 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 05:00:58 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/4246576644' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 11 05:00:58 compute-0 ceph-mon[74243]: from='client.15171 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 05:00:58 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/4046325957' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 11 05:00:58 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0) v1
Oct 11 05:00:58 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2522748930' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct 11 05:00:58 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 11 05:00:58 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 11 05:00:58 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump"} v 0) v1
Oct 11 05:00:58 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2727196841' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69369856 unmapped: 1875968 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:18.912796+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 87 sent 85 num 2 unsent 2 sending 2
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:29:48.307172+0000 osd.0 (osd.0) 86 : cluster [DBG] 10.16 scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:29:48.321246+0000 osd.0 (osd.0) 87 : cluster [DBG] 10.16 scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 87) v1
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:29:48.307172+0000 osd.0 (osd.0) 86 : cluster [DBG] 10.16 scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:29:48.321246+0000 osd.0 (osd.0) 87 : cluster [DBG] 10.16 scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69378048 unmapped: 1867776 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:19.913044+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69378048 unmapped: 1867776 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:20.913191+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 785539 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69402624 unmapped: 1843200 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:21.913345+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.807960510s of 10.886000633s, submitted: 10
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69410816 unmapped: 1835008 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:22.913510+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 89 sent 87 num 2 unsent 2 sending 2
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:29:52.229910+0000 osd.0 (osd.0) 88 : cluster [DBG] 2.11 scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:29:52.244009+0000 osd.0 (osd.0) 89 : cluster [DBG] 2.11 scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 89) v1
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:29:52.229910+0000 osd.0 (osd.0) 88 : cluster [DBG] 2.11 scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:29:52.244009+0000 osd.0 (osd.0) 89 : cluster [DBG] 2.11 scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 10.e deep-scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 10.e deep-scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69435392 unmapped: 1810432 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:23.913781+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 91 sent 89 num 2 unsent 2 sending 2
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:29:53.260699+0000 osd.0 (osd.0) 90 : cluster [DBG] 10.e deep-scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:29:53.278380+0000 osd.0 (osd.0) 91 : cluster [DBG] 10.e deep-scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 91) v1
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:29:53.260699+0000 osd.0 (osd.0) 90 : cluster [DBG] 10.e deep-scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:29:53.278380+0000 osd.0 (osd.0) 91 : cluster [DBG] 10.e deep-scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69451776 unmapped: 1794048 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:24.914233+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 93 sent 91 num 2 unsent 2 sending 2
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:29:54.278144+0000 osd.0 (osd.0) 92 : cluster [DBG] 3.1f scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:29:54.292289+0000 osd.0 (osd.0) 93 : cluster [DBG] 3.1f scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 93) v1
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:29:54.278144+0000 osd.0 (osd.0) 92 : cluster [DBG] 3.1f scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:29:54.292289+0000 osd.0 (osd.0) 93 : cluster [DBG] 3.1f scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69459968 unmapped: 1785856 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:25.914429+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 7.1b deep-scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 7.1b deep-scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 790131 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69509120 unmapped: 1736704 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:26.914586+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 95 sent 93 num 2 unsent 2 sending 2
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:29:56.253689+0000 osd.0 (osd.0) 94 : cluster [DBG] 7.1b deep-scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:29:56.267757+0000 osd.0 (osd.0) 95 : cluster [DBG] 7.1b deep-scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 95) v1
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:29:56.253689+0000 osd.0 (osd.0) 94 : cluster [DBG] 7.1b deep-scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:29:56.267757+0000 osd.0 (osd.0) 95 : cluster [DBG] 7.1b deep-scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69492736 unmapped: 1753088 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:27.914813+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 11.17 scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 11.17 scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69500928 unmapped: 1744896 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:28.914971+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 97 sent 95 num 2 unsent 2 sending 2
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:29:58.211814+0000 osd.0 (osd.0) 96 : cluster [DBG] 11.17 scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:29:58.225893+0000 osd.0 (osd.0) 97 : cluster [DBG] 11.17 scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 97) v1
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:29:58.211814+0000 osd.0 (osd.0) 96 : cluster [DBG] 11.17 scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:29:58.225893+0000 osd.0 (osd.0) 97 : cluster [DBG] 11.17 scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69500928 unmapped: 1744896 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:29.915247+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69509120 unmapped: 1736704 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:30.915414+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 791280 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69509120 unmapped: 1736704 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:31.915596+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69517312 unmapped: 1728512 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:32.915811+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69517312 unmapped: 1728512 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:33.915997+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 11.14 deep-scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.908695221s of 11.969345093s, submitted: 10
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 11.14 deep-scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69525504 unmapped: 1720320 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:34.916188+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 99 sent 97 num 2 unsent 2 sending 2
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:30:04.199203+0000 osd.0 (osd.0) 98 : cluster [DBG] 11.14 deep-scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:30:04.213270+0000 osd.0 (osd.0) 99 : cluster [DBG] 11.14 deep-scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 7.18 deep-scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 7.18 deep-scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 99) v1
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:30:04.199203+0000 osd.0 (osd.0) 98 : cluster [DBG] 11.14 deep-scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:30:04.213270+0000 osd.0 (osd.0) 99 : cluster [DBG] 11.14 deep-scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69533696 unmapped: 1712128 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:35.929701+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 101 sent 99 num 2 unsent 2 sending 2
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:30:05.216111+0000 osd.0 (osd.0) 100 : cluster [DBG] 7.18 deep-scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:30:05.230182+0000 osd.0 (osd.0) 101 : cluster [DBG] 7.18 deep-scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 101) v1
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:30:05.216111+0000 osd.0 (osd.0) 100 : cluster [DBG] 7.18 deep-scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:30:05.230182+0000 osd.0 (osd.0) 101 : cluster [DBG] 7.18 deep-scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 793577 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69533696 unmapped: 1712128 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:36.929893+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69541888 unmapped: 1703936 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:37.930043+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69541888 unmapped: 1703936 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:38.930172+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 3.1b scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 3.1b scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69558272 unmapped: 1687552 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:39.930315+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 103 sent 101 num 2 unsent 2 sending 2
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:30:09.198404+0000 osd.0 (osd.0) 102 : cluster [DBG] 3.1b scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:30:09.212523+0000 osd.0 (osd.0) 103 : cluster [DBG] 3.1b scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 7.1f scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 7.1f scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 103) v1
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:30:09.198404+0000 osd.0 (osd.0) 102 : cluster [DBG] 3.1b scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:30:09.212523+0000 osd.0 (osd.0) 103 : cluster [DBG] 3.1b scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69566464 unmapped: 1679360 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:40.931180+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 105 sent 103 num 2 unsent 2 sending 2
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:30:10.238747+0000 osd.0 (osd.0) 104 : cluster [DBG] 7.1f scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:30:10.252864+0000 osd.0 (osd.0) 105 : cluster [DBG] 7.1f scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 105) v1
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:30:10.238747+0000 osd.0 (osd.0) 104 : cluster [DBG] 7.1f scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:30:10.252864+0000 osd.0 (osd.0) 105 : cluster [DBG] 7.1f scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 795873 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69582848 unmapped: 1662976 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:41.931431+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69591040 unmapped: 1654784 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:42.931684+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69591040 unmapped: 1654784 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:43.931859+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69599232 unmapped: 1646592 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:44.932013+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 8.10 scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.077273369s of 11.106318474s, submitted: 8
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 8.10 scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69599232 unmapped: 1646592 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:45.932153+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 107 sent 105 num 2 unsent 2 sending 2
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:30:15.305564+0000 osd.0 (osd.0) 106 : cluster [DBG] 8.10 scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:30:15.319626+0000 osd.0 (osd.0) 107 : cluster [DBG] 8.10 scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 107) v1
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:30:15.305564+0000 osd.0 (osd.0) 106 : cluster [DBG] 8.10 scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:30:15.319626+0000 osd.0 (osd.0) 107 : cluster [DBG] 8.10 scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 797021 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69607424 unmapped: 1638400 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:46.932373+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69615616 unmapped: 1630208 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:47.932496+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69623808 unmapped: 1622016 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:48.932637+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69632000 unmapped: 1613824 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:49.932757+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69632000 unmapped: 1613824 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:50.932876+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 797021 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69632000 unmapped: 1613824 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:51.933019+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 8.14 scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 8.14 scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69656576 unmapped: 1589248 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:52.933161+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 109 sent 107 num 2 unsent 2 sending 2
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:30:22.367465+0000 osd.0 (osd.0) 108 : cluster [DBG] 8.14 scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:30:22.381712+0000 osd.0 (osd.0) 109 : cluster [DBG] 8.14 scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 109) v1
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:30:22.367465+0000 osd.0 (osd.0) 108 : cluster [DBG] 8.14 scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:30:22.381712+0000 osd.0 (osd.0) 109 : cluster [DBG] 8.14 scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69664768 unmapped: 1581056 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:53.933353+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 11.10 scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 11.10 scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69672960 unmapped: 1572864 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:54.933531+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 111 sent 109 num 2 unsent 2 sending 2
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:30:24.387182+0000 osd.0 (osd.0) 110 : cluster [DBG] 11.10 scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:30:24.401308+0000 osd.0 (osd.0) 111 : cluster [DBG] 11.10 scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 111) v1
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:30:24.387182+0000 osd.0 (osd.0) 110 : cluster [DBG] 11.10 scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:30:24.401308+0000 osd.0 (osd.0) 111 : cluster [DBG] 11.10 scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69672960 unmapped: 1572864 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:55.933696+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 799318 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69697536 unmapped: 1548288 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:56.933837+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 1556480 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:57.934030+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.037799835s of 13.070416451s, submitted: 6
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 1556480 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:58.934252+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 113 sent 111 num 2 unsent 2 sending 2
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:30:28.376007+0000 osd.0 (osd.0) 112 : cluster [DBG] 7.3 scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:30:28.390101+0000 osd.0 (osd.0) 113 : cluster [DBG] 7.3 scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 113) v1
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:30:28.376007+0000 osd.0 (osd.0) 112 : cluster [DBG] 7.3 scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:30:28.390101+0000 osd.0 (osd.0) 113 : cluster [DBG] 7.3 scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69697536 unmapped: 1548288 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:59.934544+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69697536 unmapped: 1548288 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:00.934677+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 11.f scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 11.f scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 801613 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69722112 unmapped: 1523712 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:01.934762+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 115 sent 113 num 2 unsent 2 sending 2
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:30:31.385159+0000 osd.0 (osd.0) 114 : cluster [DBG] 11.f scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:30:31.399354+0000 osd.0 (osd.0) 115 : cluster [DBG] 11.f scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 115) v1
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:30:31.385159+0000 osd.0 (osd.0) 114 : cluster [DBG] 11.f scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:30:31.399354+0000 osd.0 (osd.0) 115 : cluster [DBG] 11.f scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69722112 unmapped: 1523712 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:02.934926+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 1515520 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:03.935115+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 8.c scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 8.c scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 1515520 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:04.935326+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 117 sent 115 num 2 unsent 2 sending 2
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:30:34.453083+0000 osd.0 (osd.0) 116 : cluster [DBG] 8.c scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:30:34.467174+0000 osd.0 (osd.0) 117 : cluster [DBG] 8.c scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 117) v1
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:30:34.453083+0000 osd.0 (osd.0) 116 : cluster [DBG] 8.c scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:30:34.467174+0000 osd.0 (osd.0) 117 : cluster [DBG] 8.c scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 1515520 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:05.935601+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 7.f deep-scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 7.f deep-scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 803907 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69746688 unmapped: 1499136 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:06.935728+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 119 sent 117 num 2 unsent 2 sending 2
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:30:36.464690+0000 osd.0 (osd.0) 118 : cluster [DBG] 7.f deep-scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:30:36.478775+0000 osd.0 (osd.0) 119 : cluster [DBG] 7.f deep-scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 119) v1
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:30:36.464690+0000 osd.0 (osd.0) 118 : cluster [DBG] 7.f deep-scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:30:36.478775+0000 osd.0 (osd.0) 119 : cluster [DBG] 7.f deep-scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69746688 unmapped: 1499136 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:07.935923+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69754880 unmapped: 1490944 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:08.936039+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 8.f scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.068527222s of 11.096608162s, submitted: 8
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 8.f scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 1482752 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:09.936194+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 121 sent 119 num 2 unsent 2 sending 2
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:30:39.472666+0000 osd.0 (osd.0) 120 : cluster [DBG] 8.f scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:30:39.493829+0000 osd.0 (osd.0) 121 : cluster [DBG] 8.f scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 121) v1
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:30:39.472666+0000 osd.0 (osd.0) 120 : cluster [DBG] 8.f scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:30:39.493829+0000 osd.0 (osd.0) 121 : cluster [DBG] 8.f scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 1482752 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:10.936381+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 805054 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 1474560 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:11.936523+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69779456 unmapped: 1466368 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:12.936640+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69787648 unmapped: 1458176 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:13.936790+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69787648 unmapped: 1458176 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:14.936948+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69787648 unmapped: 1458176 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:15.937101+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 3.a scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 3.a scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 806201 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69804032 unmapped: 1441792 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:16.937261+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 123 sent 121 num 2 unsent 2 sending 2
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:30:46.535968+0000 osd.0 (osd.0) 122 : cluster [DBG] 3.a scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:30:46.549970+0000 osd.0 (osd.0) 123 : cluster [DBG] 3.a scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 123) v1
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:30:46.535968+0000 osd.0 (osd.0) 122 : cluster [DBG] 3.a scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:30:46.549970+0000 osd.0 (osd.0) 123 : cluster [DBG] 3.a scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69804032 unmapped: 1441792 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:17.937437+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69812224 unmapped: 1433600 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:18.937591+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69812224 unmapped: 1433600 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:19.937740+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69812224 unmapped: 1433600 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:20.937942+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 806201 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 1409024 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:21.938115+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 1409024 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:22.938281+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69844992 unmapped: 1400832 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:23.938444+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69844992 unmapped: 1400832 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:24.938625+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.082592010s of 16.120431900s, submitted: 4
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69853184 unmapped: 1392640 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:25.938815+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 125 sent 123 num 2 unsent 2 sending 2
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:30:55.593107+0000 osd.0 (osd.0) 124 : cluster [DBG] 7.4 scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:30:55.607214+0000 osd.0 (osd.0) 125 : cluster [DBG] 7.4 scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 125) v1
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:30:55.593107+0000 osd.0 (osd.0) 124 : cluster [DBG] 7.4 scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:30:55.607214+0000 osd.0 (osd.0) 125 : cluster [DBG] 7.4 scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 807348 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69853184 unmapped: 1392640 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:26.939025+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69853184 unmapped: 1392640 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:27.939163+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69861376 unmapped: 1384448 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:28.939369+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69869568 unmapped: 1376256 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:29.939491+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69877760 unmapped: 1368064 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:30.939625+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 11.e scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 11.e scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 808496 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69869568 unmapped: 1376256 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:31.939928+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 127 sent 125 num 2 unsent 2 sending 2
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:01.552003+0000 osd.0 (osd.0) 126 : cluster [DBG] 11.e scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:01.566133+0000 osd.0 (osd.0) 127 : cluster [DBG] 11.e scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 8.b scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 127) v1
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:01.552003+0000 osd.0 (osd.0) 126 : cluster [DBG] 11.e scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:01.566133+0000 osd.0 (osd.0) 127 : cluster [DBG] 11.e scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 8.b scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69869568 unmapped: 1376256 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:32.940147+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 129 sent 127 num 2 unsent 2 sending 2
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:02.511383+0000 osd.0 (osd.0) 128 : cluster [DBG] 8.b scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:02.525386+0000 osd.0 (osd.0) 129 : cluster [DBG] 8.b scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 129) v1
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:02.511383+0000 osd.0 (osd.0) 128 : cluster [DBG] 8.b scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:02.525386+0000 osd.0 (osd.0) 129 : cluster [DBG] 8.b scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69877760 unmapped: 1368064 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:33.940307+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69877760 unmapped: 1368064 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:34.940474+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69885952 unmapped: 1359872 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:35.940624+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 131 sent 129 num 2 unsent 2 sending 2
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:05.487609+0000 osd.0 (osd.0) 130 : cluster [DBG] 7.6 scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:05.501694+0000 osd.0 (osd.0) 131 : cluster [DBG] 7.6 scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 131) v1
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:05.487609+0000 osd.0 (osd.0) 130 : cluster [DBG] 7.6 scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:05.501694+0000 osd.0 (osd.0) 131 : cluster [DBG] 7.6 scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810790 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69885952 unmapped: 1359872 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:36.940885+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69894144 unmapped: 1351680 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:37.941043+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69894144 unmapped: 1351680 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:38.941203+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69894144 unmapped: 1351680 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:39.941422+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 8.9 scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.951458931s of 14.992031097s, submitted: 8
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 8.9 scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69902336 unmapped: 1343488 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:40.941579+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 133 sent 131 num 2 unsent 2 sending 2
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:10.585225+0000 osd.0 (osd.0) 132 : cluster [DBG] 8.9 scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:10.599380+0000 osd.0 (osd.0) 133 : cluster [DBG] 8.9 scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 133) v1
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:10.585225+0000 osd.0 (osd.0) 132 : cluster [DBG] 8.9 scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:10.599380+0000 osd.0 (osd.0) 133 : cluster [DBG] 8.9 scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 8.e scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 8.e scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 813084 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 1474560 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:41.941757+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 135 sent 133 num 2 unsent 2 sending 2
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:11.628914+0000 osd.0 (osd.0) 134 : cluster [DBG] 8.e scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:11.643028+0000 osd.0 (osd.0) 135 : cluster [DBG] 8.e scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 135) v1
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:11.628914+0000 osd.0 (osd.0) 134 : cluster [DBG] 8.e scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:11.643028+0000 osd.0 (osd.0) 135 : cluster [DBG] 8.e scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 1474560 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:42.941955+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 1474560 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:43.942971+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 11.1 scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 11.1 scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 1474560 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:44.943830+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 137 sent 135 num 2 unsent 2 sending 2
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:14.693587+0000 osd.0 (osd.0) 136 : cluster [DBG] 11.1 scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:14.707563+0000 osd.0 (osd.0) 137 : cluster [DBG] 11.1 scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 137) v1
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:14.693587+0000 osd.0 (osd.0) 136 : cluster [DBG] 11.1 scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:14.707563+0000 osd.0 (osd.0) 137 : cluster [DBG] 11.1 scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69779456 unmapped: 1466368 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:45.944680+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 814232 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69779456 unmapped: 1466368 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:46.944972+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69779456 unmapped: 1466368 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:47.945452+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 11.4 scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 11.4 scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69787648 unmapped: 1458176 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:48.945880+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 139 sent 137 num 2 unsent 2 sending 2
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:18.647418+0000 osd.0 (osd.0) 138 : cluster [DBG] 11.4 scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:18.665033+0000 osd.0 (osd.0) 139 : cluster [DBG] 11.4 scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 139) v1
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:18.647418+0000 osd.0 (osd.0) 138 : cluster [DBG] 11.4 scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:18.665033+0000 osd.0 (osd.0) 139 : cluster [DBG] 11.4 scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 3.c scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 3.c scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69787648 unmapped: 1458176 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:49.946179+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 141 sent 139 num 2 unsent 2 sending 2
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:19.618737+0000 osd.0 (osd.0) 140 : cluster [DBG] 3.c scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:19.632829+0000 osd.0 (osd.0) 141 : cluster [DBG] 3.c scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 141) v1
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:19.618737+0000 osd.0 (osd.0) 140 : cluster [DBG] 3.c scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:19.632829+0000 osd.0 (osd.0) 141 : cluster [DBG] 3.c scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.969653130s of 10.008518219s, submitted: 10
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69795840 unmapped: 1449984 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:50.946923+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 143 sent 141 num 2 unsent 2 sending 2
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:20.593658+0000 osd.0 (osd.0) 142 : cluster [DBG] 7.9 scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:20.607683+0000 osd.0 (osd.0) 143 : cluster [DBG] 7.9 scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 143) v1
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:20.593658+0000 osd.0 (osd.0) 142 : cluster [DBG] 7.9 scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:20.607683+0000 osd.0 (osd.0) 143 : cluster [DBG] 7.9 scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 8.6 scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 8.6 scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 818821 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69804032 unmapped: 1441792 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:51.947424+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 145 sent 143 num 2 unsent 2 sending 2
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:21.591742+0000 osd.0 (osd.0) 144 : cluster [DBG] 8.6 scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:21.605843+0000 osd.0 (osd.0) 145 : cluster [DBG] 8.6 scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 145) v1
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:21.591742+0000 osd.0 (osd.0) 144 : cluster [DBG] 8.6 scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:21.605843+0000 osd.0 (osd.0) 145 : cluster [DBG] 8.6 scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69812224 unmapped: 1433600 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:52.947733+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69812224 unmapped: 1433600 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:53.947954+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69820416 unmapped: 1425408 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:54.948253+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 11.6 scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 11.6 scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 1417216 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:55.948416+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 147 sent 145 num 2 unsent 2 sending 2
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:25.508199+0000 osd.0 (osd.0) 146 : cluster [DBG] 11.6 scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:25.522400+0000 osd.0 (osd.0) 147 : cluster [DBG] 11.6 scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 147) v1
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:25.508199+0000 osd.0 (osd.0) 146 : cluster [DBG] 11.6 scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:25.522400+0000 osd.0 (osd.0) 147 : cluster [DBG] 11.6 scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 3.f scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 3.f scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 821116 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 1409024 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:56.949437+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 149 sent 147 num 2 unsent 2 sending 2
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:26.479455+0000 osd.0 (osd.0) 148 : cluster [DBG] 3.f scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:26.493404+0000 osd.0 (osd.0) 149 : cluster [DBG] 3.f scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 149) v1
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:26.479455+0000 osd.0 (osd.0) 148 : cluster [DBG] 3.f scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:26.493404+0000 osd.0 (osd.0) 149 : cluster [DBG] 3.f scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69844992 unmapped: 1400832 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:57.949627+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69844992 unmapped: 1400832 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:58.949789+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:30:59.949957+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69853184 unmapped: 1392640 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:00.950132+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69853184 unmapped: 1392640 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 821116 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:01.950254+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69853184 unmapped: 1392640 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:02.950469+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69861376 unmapped: 1384448 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 8.1a scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.036251068s of 13.068947792s, submitted: 8
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 8.1a scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:03.950634+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 151 sent 149 num 2 unsent 2 sending 2
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:33.662724+0000 osd.0 (osd.0) 150 : cluster [DBG] 8.1a scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:33.680385+0000 osd.0 (osd.0) 151 : cluster [DBG] 8.1a scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69861376 unmapped: 1384448 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 151) v1
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:33.662724+0000 osd.0 (osd.0) 150 : cluster [DBG] 8.1a scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:33.680385+0000 osd.0 (osd.0) 151 : cluster [DBG] 8.1a scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:04.950936+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69869568 unmapped: 1376256 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:05.951096+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69869568 unmapped: 1376256 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 823412 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:06.951245+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 153 sent 151 num 2 unsent 2 sending 2
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:36.742184+0000 osd.0 (osd.0) 152 : cluster [DBG] 3.12 scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:36.756435+0000 osd.0 (osd.0) 153 : cluster [DBG] 3.12 scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69877760 unmapped: 1368064 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 153) v1
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:36.742184+0000 osd.0 (osd.0) 152 : cluster [DBG] 3.12 scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:36.756435+0000 osd.0 (osd.0) 153 : cluster [DBG] 3.12 scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 8.1f scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 8.1f scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:07.951425+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 155 sent 153 num 2 unsent 2 sending 2
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:37.708599+0000 osd.0 (osd.0) 154 : cluster [DBG] 8.1f scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:37.722692+0000 osd.0 (osd.0) 155 : cluster [DBG] 8.1f scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69877760 unmapped: 1368064 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 155) v1
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:37.708599+0000 osd.0 (osd.0) 154 : cluster [DBG] 8.1f scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:37.722692+0000 osd.0 (osd.0) 155 : cluster [DBG] 8.1f scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:08.951617+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69885952 unmapped: 1359872 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:09.951816+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 157 sent 155 num 2 unsent 2 sending 2
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:39.762498+0000 osd.0 (osd.0) 156 : cluster [DBG] 3.15 scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:39.776459+0000 osd.0 (osd.0) 157 : cluster [DBG] 3.15 scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69885952 unmapped: 1359872 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 157) v1
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:39.762498+0000 osd.0 (osd.0) 156 : cluster [DBG] 3.15 scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:39.776459+0000 osd.0 (osd.0) 157 : cluster [DBG] 3.15 scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 11.19 scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 11.19 scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:10.951973+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 159 sent 157 num 2 unsent 2 sending 2
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:40.741830+0000 osd.0 (osd.0) 158 : cluster [DBG] 11.19 scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:40.755862+0000 osd.0 (osd.0) 159 : cluster [DBG] 11.19 scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69885952 unmapped: 1359872 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 159) v1
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:40.741830+0000 osd.0 (osd.0) 158 : cluster [DBG] 11.19 scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:40.755862+0000 osd.0 (osd.0) 159 : cluster [DBG] 11.19 scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 3.9 scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 3.9 scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828004 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:11.952207+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 161 sent 159 num 2 unsent 2 sending 2
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:41.755124+0000 osd.0 (osd.0) 160 : cluster [DBG] 3.9 scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:41.769274+0000 osd.0 (osd.0) 161 : cluster [DBG] 3.9 scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69902336 unmapped: 1343488 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 161) v1
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:41.755124+0000 osd.0 (osd.0) 160 : cluster [DBG] 3.9 scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:41.769274+0000 osd.0 (osd.0) 161 : cluster [DBG] 3.9 scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:12.952444+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69902336 unmapped: 1343488 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:13.952665+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69910528 unmapped: 1335296 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.007108688s of 11.065402985s, submitted: 12
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:14.952910+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 163 sent 161 num 2 unsent 2 sending 2
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:44.728265+0000 osd.0 (osd.0) 162 : cluster [DBG] 7.13 scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:44.742248+0000 osd.0 (osd.0) 163 : cluster [DBG] 7.13 scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69918720 unmapped: 1327104 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 163) v1
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:44.728265+0000 osd.0 (osd.0) 162 : cluster [DBG] 7.13 scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:44.742248+0000 osd.0 (osd.0) 163 : cluster [DBG] 7.13 scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:15.953133+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 165 sent 163 num 2 unsent 2 sending 2
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:45.727201+0000 osd.0 (osd.0) 164 : cluster [DBG] 3.17 scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:45.741165+0000 osd.0 (osd.0) 165 : cluster [DBG] 3.17 scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69918720 unmapped: 1327104 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 165) v1
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:45.727201+0000 osd.0 (osd.0) 164 : cluster [DBG] 3.17 scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:45.741165+0000 osd.0 (osd.0) 165 : cluster [DBG] 3.17 scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 830300 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:16.953387+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69926912 unmapped: 1318912 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:17.953570+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69926912 unmapped: 1318912 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 8.18 scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 8.18 scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:18.953710+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 167 sent 165 num 2 unsent 2 sending 2
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:48.680080+0000 osd.0 (osd.0) 166 : cluster [DBG] 8.18 scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:48.694189+0000 osd.0 (osd.0) 167 : cluster [DBG] 8.18 scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69926912 unmapped: 1318912 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 167) v1
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:48.680080+0000 osd.0 (osd.0) 166 : cluster [DBG] 8.18 scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:48.694189+0000 osd.0 (osd.0) 167 : cluster [DBG] 8.18 scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:19.953904+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69935104 unmapped: 1310720 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 8.1d scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 8.1d scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:20.954085+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 169 sent 167 num 2 unsent 2 sending 2
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:50.651767+0000 osd.0 (osd.0) 168 : cluster [DBG] 8.1d scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:50.665845+0000 osd.0 (osd.0) 169 : cluster [DBG] 8.1d scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69951488 unmapped: 1294336 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 169) v1
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:50.651767+0000 osd.0 (osd.0) 168 : cluster [DBG] 8.1d scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:50.665845+0000 osd.0 (osd.0) 169 : cluster [DBG] 8.1d scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 832596 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:21.954267+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69951488 unmapped: 1294336 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:22.954426+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69951488 unmapped: 1294336 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 9.1b scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 9.1b scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:23.954589+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 171 sent 169 num 2 unsent 2 sending 2
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:53.667612+0000 osd.0 (osd.0) 170 : cluster [DBG] 9.1b scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:53.692299+0000 osd.0 (osd.0) 171 : cluster [DBG] 9.1b scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69959680 unmapped: 1286144 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 171) v1
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:53.667612+0000 osd.0 (osd.0) 170 : cluster [DBG] 9.1b scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:53.692299+0000 osd.0 (osd.0) 171 : cluster [DBG] 9.1b scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:24.954831+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69959680 unmapped: 1286144 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:25.954990+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69967872 unmapped: 1277952 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 9.3 scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.920932770s of 11.958658218s, submitted: 10
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 9.3 scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 834891 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:26.955147+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 173 sent 171 num 2 unsent 2 sending 2
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:56.686998+0000 osd.0 (osd.0) 172 : cluster [DBG] 9.3 scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:56.732889+0000 osd.0 (osd.0) 173 : cluster [DBG] 9.3 scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69976064 unmapped: 1269760 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 173) v1
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:56.686998+0000 osd.0 (osd.0) 172 : cluster [DBG] 9.3 scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:56.732889+0000 osd.0 (osd.0) 173 : cluster [DBG] 9.3 scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 9.d scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 9.d scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:27.955352+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 175 sent 173 num 2 unsent 2 sending 2
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:57.720617+0000 osd.0 (osd.0) 174 : cluster [DBG] 9.d scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:31:57.763077+0000 osd.0 (osd.0) 175 : cluster [DBG] 9.d scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69976064 unmapped: 1269760 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 175) v1
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:57.720617+0000 osd.0 (osd.0) 174 : cluster [DBG] 9.d scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:31:57.763077+0000 osd.0 (osd.0) 175 : cluster [DBG] 9.d scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:28.955572+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69984256 unmapped: 1261568 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:29.955732+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69984256 unmapped: 1261568 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:30.955918+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69992448 unmapped: 1253376 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 9.5 scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 9.5 scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 837185 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:31.956092+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 177 sent 175 num 2 unsent 2 sending 2
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:32:01.725453+0000 osd.0 (osd.0) 176 : cluster [DBG] 9.5 scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:32:01.764279+0000 osd.0 (osd.0) 177 : cluster [DBG] 9.5 scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69984256 unmapped: 1261568 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 177) v1
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:32:01.725453+0000 osd.0 (osd.0) 176 : cluster [DBG] 9.5 scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:32:01.764279+0000 osd.0 (osd.0) 177 : cluster [DBG] 9.5 scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:32.956392+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69992448 unmapped: 1253376 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 9.b scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 9.b scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:33.956574+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 179 sent 177 num 2 unsent 2 sending 2
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:32:03.642088+0000 osd.0 (osd.0) 178 : cluster [DBG] 9.b scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:32:03.673851+0000 osd.0 (osd.0) 179 : cluster [DBG] 9.b scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69992448 unmapped: 1253376 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 179) v1
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:32:03.642088+0000 osd.0 (osd.0) 178 : cluster [DBG] 9.b scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:32:03.673851+0000 osd.0 (osd.0) 179 : cluster [DBG] 9.b scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:34.956733+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 69992448 unmapped: 1253376 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 9.11 scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 9.11 scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:35.956894+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 181 sent 179 num 2 unsent 2 sending 2
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:32:05.672094+0000 osd.0 (osd.0) 180 : cluster [DBG] 9.11 scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:32:05.707386+0000 osd.0 (osd.0) 181 : cluster [DBG] 9.11 scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70008832 unmapped: 1236992 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 181) v1
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:32:05.672094+0000 osd.0 (osd.0) 180 : cluster [DBG] 9.11 scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:32:05.707386+0000 osd.0 (osd.0) 181 : cluster [DBG] 9.11 scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 839480 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:36.957104+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70000640 unmapped: 1245184 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:37.957251+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70008832 unmapped: 1236992 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:38.957451+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70017024 unmapped: 1228800 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.931448936s of 12.969204903s, submitted: 10
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:39.957627+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 183 sent 181 num 2 unsent 2 sending 2
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:32:09.656146+0000 osd.0 (osd.0) 182 : cluster [DBG] 6.3 scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:32:09.677376+0000 osd.0 (osd.0) 183 : cluster [DBG] 6.3 scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70025216 unmapped: 1220608 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 183) v1
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:32:09.656146+0000 osd.0 (osd.0) 182 : cluster [DBG] 6.3 scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:32:09.677376+0000 osd.0 (osd.0) 183 : cluster [DBG] 6.3 scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:40.957805+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70033408 unmapped: 1212416 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 9.9 scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 9.9 scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841774 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:41.957954+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 185 sent 183 num 2 unsent 2 sending 2
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:32:11.632244+0000 osd.0 (osd.0) 184 : cluster [DBG] 9.9 scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:32:11.670988+0000 osd.0 (osd.0) 185 : cluster [DBG] 9.9 scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70033408 unmapped: 1212416 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 185) v1
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:32:11.632244+0000 osd.0 (osd.0) 184 : cluster [DBG] 9.9 scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:32:11.670988+0000 osd.0 (osd.0) 185 : cluster [DBG] 9.9 scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:42.958276+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 187 sent 185 num 2 unsent 2 sending 2
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:32:12.662375+0000 osd.0 (osd.0) 186 : cluster [DBG] 6.7 scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:32:12.680030+0000 osd.0 (osd.0) 187 : cluster [DBG] 6.7 scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70041600 unmapped: 1204224 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 187) v1
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:32:12.662375+0000 osd.0 (osd.0) 186 : cluster [DBG] 6.7 scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:32:12.680030+0000 osd.0 (osd.0) 187 : cluster [DBG] 6.7 scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 9.1d scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 9.1d scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:43.958577+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 189 sent 187 num 2 unsent 2 sending 2
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:32:13.702674+0000 osd.0 (osd.0) 188 : cluster [DBG] 9.1d scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:32:13.737700+0000 osd.0 (osd.0) 189 : cluster [DBG] 9.1d scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70049792 unmapped: 1196032 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 189) v1
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:32:13.702674+0000 osd.0 (osd.0) 188 : cluster [DBG] 9.1d scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:32:13.737700+0000 osd.0 (osd.0) 189 : cluster [DBG] 9.1d scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:44.958836+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70057984 unmapped: 1187840 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:45.959000+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70057984 unmapped: 1187840 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 844069 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:46.959158+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70057984 unmapped: 1187840 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:47.959296+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70066176 unmapped: 1179648 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:48.959515+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70066176 unmapped: 1179648 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:49.959643+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70074368 unmapped: 1171456 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:50.959764+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70074368 unmapped: 1171456 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 844069 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:51.959932+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 1163264 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:52.960076+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 1163264 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:53.960204+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 1163264 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:54.960415+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70090752 unmapped: 1155072 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 9.1 scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.010532379s of 16.039787292s, submitted: 8
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 9.1 scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:55.960648+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 191 sent 189 num 2 unsent 2 sending 2
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:32:25.695836+0000 osd.0 (osd.0) 190 : cluster [DBG] 9.1 scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:32:25.738294+0000 osd.0 (osd.0) 191 : cluster [DBG] 9.1 scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70090752 unmapped: 1155072 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 191) v1
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:32:25.695836+0000 osd.0 (osd.0) 190 : cluster [DBG] 9.1 scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:32:25.738294+0000 osd.0 (osd.0) 191 : cluster [DBG] 9.1 scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:56.961009+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 193 sent 191 num 2 unsent 2 sending 2
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:32:26.737775+0000 osd.0 (osd.0) 192 : cluster [DBG] 6.5 scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:32:26.758951+0000 osd.0 (osd.0) 193 : cluster [DBG] 6.5 scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 846363 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70115328 unmapped: 1130496 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 193) v1
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:32:26.737775+0000 osd.0 (osd.0) 192 : cluster [DBG] 6.5 scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:32:26.758951+0000 osd.0 (osd.0) 193 : cluster [DBG] 6.5 scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:57.961234+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70115328 unmapped: 1130496 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:58.961373+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 195 sent 193 num 2 unsent 2 sending 2
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:32:28.742344+0000 osd.0 (osd.0) 194 : cluster [DBG] 6.9 scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:32:28.756372+0000 osd.0 (osd.0) 195 : cluster [DBG] 6.9 scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70123520 unmapped: 1122304 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 195) v1
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:32:28.742344+0000 osd.0 (osd.0) 194 : cluster [DBG] 6.9 scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:32:28.756372+0000 osd.0 (osd.0) 195 : cluster [DBG] 6.9 scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:31:59.961630+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70123520 unmapped: 1122304 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:00.961773+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70131712 unmapped: 1114112 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:01.961982+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847510 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70123520 unmapped: 1122304 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:02.962147+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70123520 unmapped: 1122304 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:03.962385+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70131712 unmapped: 1114112 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:04.962557+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70131712 unmapped: 1114112 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 6.a deep-scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 6.a deep-scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:05.962719+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 197 sent 195 num 2 unsent 2 sending 2
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:32:35.681245+0000 osd.0 (osd.0) 196 : cluster [DBG] 6.a deep-scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:32:35.695355+0000 osd.0 (osd.0) 197 : cluster [DBG] 6.a deep-scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70148096 unmapped: 1097728 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 197) v1
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:32:35.681245+0000 osd.0 (osd.0) 196 : cluster [DBG] 6.a deep-scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:32:35.695355+0000 osd.0 (osd.0) 197 : cluster [DBG] 6.a deep-scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:06.962907+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848657 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70148096 unmapped: 1097728 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 9.16 scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.949511528s of 11.978895187s, submitted: 8
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 9.16 scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:07.963064+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 199 sent 197 num 2 unsent 2 sending 2
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:32:37.675314+0000 osd.0 (osd.0) 198 : cluster [DBG] 9.16 scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:32:37.710192+0000 osd.0 (osd.0) 199 : cluster [DBG] 9.16 scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 199) v1
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:32:37.675314+0000 osd.0 (osd.0) 198 : cluster [DBG] 9.16 scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:32:37.710192+0000 osd.0 (osd.0) 199 : cluster [DBG] 9.16 scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:08.963210+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 201 sent 199 num 2 unsent 2 sending 2
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:32:38.655554+0000 osd.0 (osd.0) 200 : cluster [DBG] 9.1c scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:32:38.694373+0000 osd.0 (osd.0) 201 : cluster [DBG] 9.1c scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 201) v1
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:32:38.655554+0000 osd.0 (osd.0) 200 : cluster [DBG] 9.1c scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:32:38.694373+0000 osd.0 (osd.0) 201 : cluster [DBG] 9.1c scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 9.1e scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_channel(cluster) log [DBG] : 9.1e scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:09.963705+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  log_queue is 2 last_log 203 sent 201 num 2 unsent 2 sending 2
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:32:39.654701+0000 osd.0 (osd.0) 202 : cluster [DBG] 9.1e scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  will send 2025-10-11T04:32:39.690057+0000 osd.0 (osd.0) 203 : cluster [DBG] 9.1e scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client handle_log_ack log(last 203) v1
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:32:39.654701+0000 osd.0 (osd.0) 202 : cluster [DBG] 9.1e scrub starts
Oct 11 05:00:58 compute-0 ceph-osd[87458]: log_client  logged 2025-10-11T04:32:39.690057+0000 osd.0 (osd.0) 203 : cluster [DBG] 9.1e scrub ok
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:10.963898+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70172672 unmapped: 1073152 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:11.964077+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70172672 unmapped: 1073152 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:12.964266+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 1064960 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:13.964431+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 1064960 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:14.964646+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 1064960 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:15.964794+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 1056768 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:16.964982+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 1056768 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:17.965141+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70197248 unmapped: 1048576 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:18.965312+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70197248 unmapped: 1048576 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:19.965460+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70205440 unmapped: 1040384 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:20.965645+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70205440 unmapped: 1040384 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:21.965843+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70205440 unmapped: 1040384 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:22.965983+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70213632 unmapped: 1032192 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:23.966112+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70213632 unmapped: 1032192 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:24.966262+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70221824 unmapped: 1024000 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:25.966448+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70246400 unmapped: 999424 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:26.966583+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:27.966711+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70262784 unmapped: 983040 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:28.966853+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70262784 unmapped: 983040 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:29.967065+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:30.967211+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:31.967365+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:32.967577+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:33.967790+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:34.968033+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:35.968240+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:36.968483+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:37.968682+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:38.969547+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 942080 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:39.969717+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 942080 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:40.969924+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 942080 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:41.970134+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 933888 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:42.970315+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 933888 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:43.970526+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70320128 unmapped: 925696 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:44.970790+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70320128 unmapped: 925696 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:45.970977+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70320128 unmapped: 925696 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:46.971124+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 901120 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:47.971271+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 901120 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:48.971432+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70352896 unmapped: 892928 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:49.971659+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70352896 unmapped: 892928 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:50.971816+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70361088 unmapped: 884736 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:51.972018+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70369280 unmapped: 876544 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:52.972196+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70369280 unmapped: 876544 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:53.972401+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70377472 unmapped: 868352 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:54.972735+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70377472 unmapped: 868352 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:55.972920+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 860160 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:56.973057+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70393856 unmapped: 851968 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:57.973223+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70393856 unmapped: 851968 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:58.973425+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70402048 unmapped: 843776 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:32:59.973607+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70402048 unmapped: 843776 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:00.973788+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 835584 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:01.973961+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 827392 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:02.974148+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70426624 unmapped: 819200 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:03.974412+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70426624 unmapped: 819200 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:04.974587+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70426624 unmapped: 819200 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:05.974748+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70434816 unmapped: 811008 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:06.974904+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70434816 unmapped: 811008 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:07.975046+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70443008 unmapped: 802816 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:08.975168+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70443008 unmapped: 802816 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:09.975311+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70443008 unmapped: 802816 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:10.975471+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70451200 unmapped: 794624 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:11.975615+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70475776 unmapped: 770048 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:12.975738+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70475776 unmapped: 770048 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:13.975897+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70475776 unmapped: 770048 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:14.976080+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70483968 unmapped: 761856 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:15.976243+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70483968 unmapped: 761856 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:16.976408+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70483968 unmapped: 761856 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:17.976518+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70492160 unmapped: 753664 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:18.976705+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70492160 unmapped: 753664 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:19.976882+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70500352 unmapped: 745472 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:20.977018+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70500352 unmapped: 745472 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:21.977187+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70516736 unmapped: 729088 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:22.977366+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70516736 unmapped: 729088 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:23.977493+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70516736 unmapped: 729088 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:24.977675+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70516736 unmapped: 729088 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:25.977827+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70516736 unmapped: 729088 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:26.978008+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70516736 unmapped: 729088 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:27.978158+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70524928 unmapped: 720896 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:28.978282+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70524928 unmapped: 720896 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:29.978398+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70533120 unmapped: 712704 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:30.978542+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70533120 unmapped: 712704 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:31.978698+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70557696 unmapped: 688128 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:32.978861+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70557696 unmapped: 688128 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:33.978987+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70557696 unmapped: 688128 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:34.979747+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70565888 unmapped: 679936 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:35.979889+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70565888 unmapped: 679936 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:36.980036+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70565888 unmapped: 679936 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:37.980205+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70574080 unmapped: 671744 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:38.980353+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70574080 unmapped: 671744 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:39.980482+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70582272 unmapped: 663552 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:40.980614+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70582272 unmapped: 663552 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:41.980763+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70590464 unmapped: 655360 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:42.980929+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70590464 unmapped: 655360 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:43.981111+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70590464 unmapped: 655360 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:44.981424+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70598656 unmapped: 647168 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:45.981575+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70598656 unmapped: 647168 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:46.981719+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70606848 unmapped: 638976 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:47.981866+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70615040 unmapped: 630784 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:48.982002+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70615040 unmapped: 630784 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:49.982180+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70615040 unmapped: 630784 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:50.982359+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70623232 unmapped: 622592 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:51.982516+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70631424 unmapped: 614400 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:52.982709+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70631424 unmapped: 614400 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:53.982845+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70631424 unmapped: 614400 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:54.982997+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 606208 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:55.983144+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 606208 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:56.983321+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70647808 unmapped: 598016 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:57.983504+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70656000 unmapped: 589824 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:58.983774+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70656000 unmapped: 589824 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:33:59.983935+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70664192 unmapped: 581632 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:00.984136+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70664192 unmapped: 581632 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:01.984296+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70664192 unmapped: 581632 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:02.984377+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70672384 unmapped: 573440 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:03.984553+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70672384 unmapped: 573440 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:04.984744+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70672384 unmapped: 573440 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:05.984875+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70680576 unmapped: 565248 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:06.985040+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 548864 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:07.985320+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70705152 unmapped: 540672 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:08.985608+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70705152 unmapped: 540672 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:09.985769+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70705152 unmapped: 540672 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:10.985970+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70713344 unmapped: 532480 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:11.986182+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70713344 unmapped: 532480 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:12.986347+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70721536 unmapped: 524288 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:13.986562+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70721536 unmapped: 524288 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:14.986787+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70721536 unmapped: 524288 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:15.986966+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70737920 unmapped: 507904 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:16.987150+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70746112 unmapped: 499712 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:17.987313+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70754304 unmapped: 491520 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:18.987498+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70754304 unmapped: 491520 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:19.987670+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70754304 unmapped: 491520 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:20.987914+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70762496 unmapped: 483328 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:21.988042+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70762496 unmapped: 483328 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:22.988169+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70770688 unmapped: 475136 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:23.988318+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70770688 unmapped: 475136 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:24.988565+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70778880 unmapped: 466944 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:25.988735+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70795264 unmapped: 450560 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:26.988870+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70795264 unmapped: 450560 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:27.988985+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70803456 unmapped: 442368 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:28.989108+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70803456 unmapped: 442368 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:29.989244+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70803456 unmapped: 442368 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:30.989402+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70811648 unmapped: 434176 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:31.989552+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70811648 unmapped: 434176 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:32.989694+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70819840 unmapped: 425984 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:33.989836+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70819840 unmapped: 425984 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:34.989994+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70819840 unmapped: 425984 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:35.990143+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70828032 unmapped: 417792 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:36.990308+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70828032 unmapped: 417792 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:37.990428+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70828032 unmapped: 417792 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:38.990580+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70836224 unmapped: 409600 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:39.990743+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70836224 unmapped: 409600 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:40.990861+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70860800 unmapped: 385024 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:41.991045+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70852608 unmapped: 393216 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:42.991224+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70852608 unmapped: 393216 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:43.991393+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70860800 unmapped: 385024 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:44.991561+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70860800 unmapped: 385024 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:45.991736+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70860800 unmapped: 385024 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:46.991874+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70868992 unmapped: 376832 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:47.992015+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70868992 unmapped: 376832 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:48.992192+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70868992 unmapped: 376832 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:49.992380+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70877184 unmapped: 368640 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:50.992562+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70877184 unmapped: 368640 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:51.992709+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70885376 unmapped: 360448 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:52.992845+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70885376 unmapped: 360448 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:53.993017+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70885376 unmapped: 360448 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:54.993198+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70893568 unmapped: 352256 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:55.993341+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70893568 unmapped: 352256 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:56.993508+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70901760 unmapped: 344064 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:57.993692+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70901760 unmapped: 344064 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:58.993851+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70901760 unmapped: 344064 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:34:59.994032+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70909952 unmapped: 335872 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:00.994220+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70909952 unmapped: 335872 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:01.994436+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70918144 unmapped: 327680 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:02.994600+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70918144 unmapped: 327680 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:03.994733+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70918144 unmapped: 327680 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:04.994894+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70926336 unmapped: 319488 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:05.995031+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70926336 unmapped: 319488 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:06.995233+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70926336 unmapped: 319488 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:07.995374+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70934528 unmapped: 311296 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:08.995627+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70934528 unmapped: 311296 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:09.995797+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70942720 unmapped: 303104 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:10.995957+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70942720 unmapped: 303104 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:11.996108+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70942720 unmapped: 303104 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:12.996254+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70950912 unmapped: 294912 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:13.996414+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70950912 unmapped: 294912 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:14.996574+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70950912 unmapped: 294912 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:15.996747+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70959104 unmapped: 286720 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:16.996923+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70959104 unmapped: 286720 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:17.997048+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70967296 unmapped: 278528 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:18.997189+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70967296 unmapped: 278528 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:19.997421+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70967296 unmapped: 278528 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:20.997595+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70967296 unmapped: 278528 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:21.997925+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70967296 unmapped: 278528 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:22.998198+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70967296 unmapped: 278528 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:23.998489+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70975488 unmapped: 270336 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:24.998738+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70975488 unmapped: 270336 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:25.998935+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70983680 unmapped: 262144 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:26.999129+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70983680 unmapped: 262144 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:27.999286+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70983680 unmapped: 262144 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:28.999484+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70991872 unmapped: 253952 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:29.999625+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 70991872 unmapped: 253952 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:30.999789+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71000064 unmapped: 245760 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:32.000069+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71000064 unmapped: 245760 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:33.000477+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71008256 unmapped: 237568 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:34.000723+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71008256 unmapped: 237568 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:35.000956+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71008256 unmapped: 237568 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:36.001143+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71016448 unmapped: 229376 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:37.001367+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71016448 unmapped: 229376 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:38.001488+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71016448 unmapped: 229376 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:39.001636+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71024640 unmapped: 221184 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:40.001806+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71024640 unmapped: 221184 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:41.001970+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71024640 unmapped: 221184 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:42.002095+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71032832 unmapped: 212992 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:43.002273+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71032832 unmapped: 212992 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:44.002373+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71041024 unmapped: 204800 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:45.002521+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71041024 unmapped: 204800 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:46.002622+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71049216 unmapped: 196608 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:47.002794+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71049216 unmapped: 196608 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:48.002971+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71049216 unmapped: 196608 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:49.003102+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71057408 unmapped: 188416 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:50.003234+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71057408 unmapped: 188416 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:51.003401+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71073792 unmapped: 172032 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:52.003556+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71073792 unmapped: 172032 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:53.003757+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71081984 unmapped: 163840 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:54.005700+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71081984 unmapped: 163840 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:55.005883+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71081984 unmapped: 163840 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:56.006015+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71081984 unmapped: 163840 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:57.006163+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71081984 unmapped: 163840 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:58.006390+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71090176 unmapped: 155648 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:35:59.006607+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71090176 unmapped: 155648 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:00.006805+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71098368 unmapped: 147456 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:01.007102+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71106560 unmapped: 139264 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:02.007243+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71106560 unmapped: 139264 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:03.007385+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71114752 unmapped: 131072 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:04.007538+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71114752 unmapped: 131072 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:05.007670+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71122944 unmapped: 122880 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:06.007793+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71122944 unmapped: 122880 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:07.007945+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71131136 unmapped: 114688 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:08.008097+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71131136 unmapped: 114688 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:09.008255+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71131136 unmapped: 114688 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:10.008419+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71139328 unmapped: 106496 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:11.008536+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71139328 unmapped: 106496 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:12.008759+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 98304 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:13.009109+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 98304 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:14.009290+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 98304 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:15.009507+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71155712 unmapped: 90112 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:16.009668+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71155712 unmapped: 90112 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:17.009819+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71155712 unmapped: 90112 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:18.009979+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71163904 unmapped: 81920 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:19.010133+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71163904 unmapped: 81920 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:20.010295+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 5531 writes, 23K keys, 5531 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 5531 writes, 838 syncs, 6.60 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5531 writes, 23K keys, 5531 commit groups, 1.0 writes per commit group, ingest: 18.48 MB, 0.03 MB/s
                                           Interval WAL: 5531 writes, 838 syncs, 6.60 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55660e36f4b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55660e36f4b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55660e36f4b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55660e36f4b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55660e36f4b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55660e36f4b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55660e36f4b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55660e36f350#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55660e36f350#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55660e36f350#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55660e36f4b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55660e36f4b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 24576 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:21.010458+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 16384 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:22.010655+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 16384 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:23.010849+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71237632 unmapped: 8192 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:24.011102+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71237632 unmapped: 8192 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:25.011286+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71237632 unmapped: 8192 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:26.011412+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71245824 unmapped: 0 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:27.011538+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71245824 unmapped: 0 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:28.011660+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71245824 unmapped: 0 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:29.011769+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71254016 unmapped: 1040384 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:30.011888+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71254016 unmapped: 1040384 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:31.011989+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71254016 unmapped: 1040384 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:32.012115+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71262208 unmapped: 1032192 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:33.012243+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:34.012379+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71262208 unmapped: 1032192 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:35.012513+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71270400 unmapped: 1024000 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:36.012638+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71270400 unmapped: 1024000 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:37.012801+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71270400 unmapped: 1024000 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:38.012914+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71278592 unmapped: 1015808 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:39.018382+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71278592 unmapped: 1015808 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:40.018524+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71286784 unmapped: 1007616 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:41.018624+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71286784 unmapped: 1007616 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:42.018765+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71286784 unmapped: 1007616 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:43.018918+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71294976 unmapped: 999424 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:44.019049+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71294976 unmapped: 999424 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:45.019251+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71303168 unmapped: 991232 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:46.019415+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71303168 unmapped: 991232 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:47.019575+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71303168 unmapped: 991232 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:48.019744+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71311360 unmapped: 983040 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:49.019951+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71311360 unmapped: 983040 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:50.020114+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71319552 unmapped: 974848 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:51.020262+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71319552 unmapped: 974848 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:52.020458+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71335936 unmapped: 958464 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:53.020605+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71335936 unmapped: 958464 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:54.020763+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71335936 unmapped: 958464 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:55.021058+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71344128 unmapped: 950272 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:56.021231+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71344128 unmapped: 950272 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:57.021404+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71344128 unmapped: 950272 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:58.021532+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 942080 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:36:59.021771+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 942080 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:00.021961+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 942080 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:01.022728+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 933888 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:02.022893+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 933888 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:03.023062+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 933888 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:04.023252+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 925696 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:05.023450+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 925696 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:06.023646+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 925696 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:07.023807+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 917504 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:08.023925+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 917504 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:09.024094+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71385088 unmapped: 909312 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:10.024276+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71385088 unmapped: 909312 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:11.024491+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71385088 unmapped: 909312 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:12.024668+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 901120 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:13.024825+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 901120 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:14.024973+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 901120 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:15.025190+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71401472 unmapped: 892928 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:16.025402+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71401472 unmapped: 892928 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:17.025553+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71401472 unmapped: 892928 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:18.025704+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71409664 unmapped: 884736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:19.025844+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71409664 unmapped: 884736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:20.026004+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71417856 unmapped: 876544 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:21.026148+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71417856 unmapped: 876544 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:22.026269+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71417856 unmapped: 876544 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:23.026456+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71426048 unmapped: 868352 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:24.026594+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71426048 unmapped: 868352 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:25.026764+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71434240 unmapped: 860160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:26.027073+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 318.438812256s of 318.458435059s, submitted: 6
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71458816 unmapped: 835584 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:27.027250+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 71426048 unmapped: 1916928 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:28.027455+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72622080 unmapped: 720896 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:29.027724+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72622080 unmapped: 720896 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:30.027881+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72622080 unmapped: 720896 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:31.028024+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72622080 unmapped: 720896 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:32.028238+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72622080 unmapped: 720896 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:33.029082+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72622080 unmapped: 720896 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:34.029241+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72622080 unmapped: 720896 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:35.029441+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72622080 unmapped: 720896 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:36.029620+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72622080 unmapped: 720896 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:37.030562+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72622080 unmapped: 720896 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:38.030705+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72630272 unmapped: 712704 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:39.030918+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72630272 unmapped: 712704 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:40.031124+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 704512 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:41.031294+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 704512 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:42.031465+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 704512 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:43.031605+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 696320 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:44.031813+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 696320 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:45.032024+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 688128 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:46.032179+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 688128 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:47.032307+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 688128 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:48.032369+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 679936 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:49.032554+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 679936 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:50.032678+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72671232 unmapped: 671744 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:51.032899+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72671232 unmapped: 671744 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:52.033012+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 655360 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:53.033154+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 655360 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:54.033304+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 655360 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:55.033499+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72695808 unmapped: 647168 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:56.033615+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72695808 unmapped: 647168 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:57.033718+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 638976 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:58.033903+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 638976 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:37:59.034068+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 638976 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:00.034295+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 630784 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:01.034479+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 630784 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:02.034685+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72728576 unmapped: 614400 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:03.034886+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72728576 unmapped: 614400 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:04.035126+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72736768 unmapped: 606208 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:05.035323+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72736768 unmapped: 606208 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:06.035522+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72736768 unmapped: 606208 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:07.035652+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72744960 unmapped: 598016 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:08.035763+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72744960 unmapped: 598016 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:09.035908+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72753152 unmapped: 589824 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:10.036035+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72753152 unmapped: 589824 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:11.036195+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72753152 unmapped: 589824 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:12.036398+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72761344 unmapped: 581632 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:13.036567+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72761344 unmapped: 581632 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:14.036697+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72769536 unmapped: 573440 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:15.036889+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72769536 unmapped: 573440 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:16.037058+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72769536 unmapped: 573440 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:17.037225+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72777728 unmapped: 565248 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:18.037416+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72777728 unmapped: 565248 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:19.037594+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 557056 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:20.037738+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 557056 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:21.037840+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 548864 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:22.037993+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 548864 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:23.038130+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 548864 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:24.038275+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 548864 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:25.038417+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 548864 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:26.038607+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 548864 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:27.038767+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 548864 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:28.038923+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 548864 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:29.039043+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 548864 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:30.039202+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 548864 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:31.039565+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 548864 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:32.039725+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 548864 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:33.039898+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 548864 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:34.040061+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 548864 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:35.040238+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 548864 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:36.040387+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 548864 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:37.040516+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72802304 unmapped: 540672 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:38.040698+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72802304 unmapped: 540672 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:39.040858+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72802304 unmapped: 540672 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:40.040989+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72802304 unmapped: 540672 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:41.041118+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72802304 unmapped: 540672 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:42.041272+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72818688 unmapped: 524288 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:43.041402+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72818688 unmapped: 524288 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:44.041512+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72818688 unmapped: 524288 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:45.041730+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72826880 unmapped: 516096 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:46.041893+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72826880 unmapped: 516096 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:47.042041+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72826880 unmapped: 516096 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:48.042212+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72826880 unmapped: 516096 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:49.042420+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72826880 unmapped: 516096 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:50.042596+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72826880 unmapped: 516096 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:51.042799+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72826880 unmapped: 516096 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:52.042975+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 507904 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:53.043141+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 507904 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:54.043268+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 507904 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:55.043383+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 507904 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:56.043497+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 507904 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:57.043648+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 507904 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:58.043775+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 507904 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:38:59.043922+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 507904 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:00.044044+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 507904 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:01.044182+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 507904 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:02.044368+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72851456 unmapped: 491520 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:03.044505+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72851456 unmapped: 491520 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:04.044711+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72851456 unmapped: 491520 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:05.044987+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72851456 unmapped: 491520 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:06.045136+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72851456 unmapped: 491520 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:07.045282+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72859648 unmapped: 483328 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:08.045464+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72859648 unmapped: 483328 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:09.045629+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72859648 unmapped: 483328 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:10.045789+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72859648 unmapped: 483328 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:11.045967+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72859648 unmapped: 483328 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:12.046496+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 475136 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:13.046643+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 475136 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:14.046773+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 475136 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:15.046943+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 475136 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:16.047063+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 475136 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:17.047217+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 475136 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:18.047617+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 475136 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:19.047737+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 475136 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:20.047887+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 475136 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:21.048050+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72892416 unmapped: 450560 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:22.048199+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72892416 unmapped: 450560 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:23.048357+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72892416 unmapped: 450560 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:24.048462+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72892416 unmapped: 450560 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:25.048659+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72892416 unmapped: 450560 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:26.048776+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72892416 unmapped: 450560 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:27.048903+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72892416 unmapped: 450560 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:28.049049+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72892416 unmapped: 450560 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:29.049292+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72892416 unmapped: 450560 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:30.049473+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72892416 unmapped: 450560 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:31.049678+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72892416 unmapped: 450560 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:32.049793+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72892416 unmapped: 450560 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:33.050002+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 442368 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:34.050213+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 442368 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:35.050418+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 442368 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:36.050555+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 442368 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:37.050871+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 442368 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:38.051166+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 442368 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:39.051308+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 442368 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:40.051496+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 442368 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:41.051666+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 425984 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:42.051785+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 425984 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:43.052048+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 425984 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:44.052213+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 425984 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:45.052434+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 425984 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:46.052600+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 425984 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:47.052749+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 425984 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:48.052866+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 425984 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:49.052988+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 425984 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:50.053423+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 425984 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:51.053621+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 425984 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:52.053820+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 425984 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:53.054034+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 425984 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:54.054247+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 425984 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:55.054435+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 425984 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:56.054706+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 425984 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:57.055047+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 425984 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:58.055207+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 425984 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:39:59.055405+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 425984 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:00.055603+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 425984 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:01.055761+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 409600 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:02.056003+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 409600 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:03.056177+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 409600 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:04.056387+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 409600 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:05.056592+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 409600 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:06.056721+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 401408 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:07.056857+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 401408 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:08.057023+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 401408 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:09.057181+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 401408 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:10.057385+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 401408 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:11.057556+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72957952 unmapped: 385024 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:12.057763+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72957952 unmapped: 385024 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:13.057974+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72957952 unmapped: 385024 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:14.058098+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72957952 unmapped: 385024 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:15.058290+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72957952 unmapped: 385024 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:16.058416+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72957952 unmapped: 385024 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:17.058539+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72957952 unmapped: 385024 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:18.058686+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72957952 unmapped: 385024 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:19.058830+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72957952 unmapped: 385024 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:20.058991+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72957952 unmapped: 385024 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:21.059159+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72974336 unmapped: 368640 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:22.059314+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72974336 unmapped: 368640 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:23.059480+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72974336 unmapped: 368640 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:24.059635+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72974336 unmapped: 368640 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:25.059820+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72974336 unmapped: 368640 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:26.059949+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72974336 unmapped: 368640 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:27.060096+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72974336 unmapped: 368640 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:28.060243+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72974336 unmapped: 368640 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:29.060395+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72974336 unmapped: 368640 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:30.060518+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72974336 unmapped: 368640 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:31.060660+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72974336 unmapped: 368640 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:32.060801+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72974336 unmapped: 368640 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:33.060936+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72974336 unmapped: 368640 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:34.061078+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72974336 unmapped: 368640 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:35.061285+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72974336 unmapped: 368640 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:36.061427+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72974336 unmapped: 368640 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:37.061605+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72974336 unmapped: 368640 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:38.061744+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72974336 unmapped: 368640 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:39.061919+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72974336 unmapped: 368640 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:40.062074+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72974336 unmapped: 368640 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:41.062242+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72990720 unmapped: 352256 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:42.062552+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72990720 unmapped: 352256 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:43.062870+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72990720 unmapped: 352256 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:44.063144+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72990720 unmapped: 352256 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:45.063401+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72990720 unmapped: 352256 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:46.063551+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 434176 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:47.063684+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 434176 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:48.063803+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 434176 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:49.063935+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 434176 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:50.064045+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 434176 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:51.064205+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 425984 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:52.064462+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 425984 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:53.064679+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 425984 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:54.064960+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 425984 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:55.065156+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 425984 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:56.065483+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 425984 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:57.065689+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 425984 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:58.065822+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 425984 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:40:59.066028+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:00.066250+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 425984 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:01.066544+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 425984 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:02.067064+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 409600 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:03.067355+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 409600 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:04.067479+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 409600 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:05.067779+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 409600 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:06.067956+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 409600 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:07.068152+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 409600 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:08.068424+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 409600 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:09.068613+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 409600 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:10.068789+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 409600 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:11.068957+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 409600 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:12.069166+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 409600 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:13.069427+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 409600 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:14.069627+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 409600 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:15.069999+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 409600 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:16.070134+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 409600 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:17.070275+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 409600 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:18.070394+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 409600 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:19.070645+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 409600 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:20.070947+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 409600 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:21.071235+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 409600 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:22.071480+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72957952 unmapped: 385024 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:23.071664+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72957952 unmapped: 385024 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:24.071843+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72957952 unmapped: 385024 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:25.072085+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72957952 unmapped: 385024 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:26.072232+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 72957952 unmapped: 385024 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: mgrc ms_handle_reset ms_handle_reset con 0x55660f1bfc00
Oct 11 05:00:58 compute-0 ceph-osd[87458]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1439243141
Oct 11 05:00:58 compute-0 ceph-osd[87458]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1439243141,v1:192.168.122.100:6801/1439243141]
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: get_auth_request con 0x5566106b7c00 auth_method 0
Oct 11 05:00:58 compute-0 ceph-osd[87458]: mgrc handle_mgr_configure stats_period=5
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:27.072396+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:28.072537+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:29.072660+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:30.072801+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:31.072953+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:32.073085+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 ms_handle_reset con 0x55660fbb4000 session 0x556610b88000
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x55661220f000
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:33.073241+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:34.073437+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:35.073624+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:36.073812+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:37.073964+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:38.074092+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:39.074239+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:40.074419+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:41.074549+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:42.074735+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:43.074934+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:44.075097+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:45.075281+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:46.075389+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:47.075486+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:48.075603+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:49.075740+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:50.075917+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:51.076087+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:52.076278+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:53.076434+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:54.076557+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:55.076724+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:56.076862+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:57.077038+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:58.077247+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:41:59.077402+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:00.077686+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:01.077895+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:02.078067+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:03.078244+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:04.078390+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:05.078553+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:06.078765+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:07.079075+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:08.079238+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:09.079683+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:10.079984+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:11.080180+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:12.080363+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:13.080594+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:14.080746+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:15.081063+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:16.081213+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:17.081412+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:18.081583+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:19.081893+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:20.082225+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:21.082513+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:22.082714+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:23.082970+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:24.083185+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1202: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:25.083390+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:26.083616+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:27.083774+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:28.083942+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:29.084126+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:30.084310+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:31.084534+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:32.084689+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:33.084855+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:34.084985+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:35.085167+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:36.085282+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:37.085500+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:38.085637+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:39.085787+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:40.085952+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:41.086073+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:42.086218+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:43.086440+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:44.086552+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:45.086693+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:46.086894+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:47.087008+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:48.087126+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:49.087398+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:50.087575+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:51.087766+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:52.087917+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:53.088085+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:54.088218+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:55.088417+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:56.088603+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:57.088760+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:58.088889+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:42:59.089041+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:00.089170+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:01.089310+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:02.089478+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:03.089679+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:04.089862+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:05.090043+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:06.090208+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:07.090415+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 57344 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:08.090593+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 57344 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:09.090752+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 57344 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:10.090942+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 57344 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:11.091101+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 57344 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:12.091258+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 57344 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:13.091473+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 57344 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:14.091595+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 57344 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:15.091744+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 57344 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:16.091896+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 57344 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:17.092039+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 57344 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:18.092182+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 57344 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:19.092366+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 57344 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:20.092503+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 57344 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:21.092734+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 57344 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:22.092865+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 57344 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:23.092992+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 57344 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:24.093135+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 57344 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:25.093398+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 57344 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:26.093587+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:27.093732+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:28.093852+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:29.094035+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:30.094156+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:31.094280+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:32.094424+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:33.094582+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:34.094727+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:35.094914+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:36.095002+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:37.095165+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:38.095324+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:39.095550+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:40.095710+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:41.095878+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:42.096036+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:43.096255+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:44.096419+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:45.096655+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:46.096825+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:47.097022+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:48.097171+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:49.097471+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:50.097614+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:51.097780+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:52.097921+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:53.098079+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:54.098242+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:55.098435+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:56.098589+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:57.098725+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:58.098878+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:43:59.099021+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:00.099168+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:01.099322+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:02.099499+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:03.099687+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:04.099876+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:05.100069+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:06.100220+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:07.100430+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:08.100570+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:09.100764+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:10.100901+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:11.101078+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:12.101245+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:13.101394+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:14.101552+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:15.101712+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:16.101832+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:17.101977+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:18.102108+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:19.102241+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:20.102396+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:21.102595+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:22.102752+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:23.102910+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:24.103018+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:25.103228+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:26.103406+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:27.103555+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:28.103796+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:29.104007+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 16384 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:30.104182+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 16384 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:31.104415+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 16384 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:32.104535+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 16384 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:33.104700+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 16384 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:34.104892+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 16384 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:35.105092+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 16384 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:36.105258+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 16384 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:37.105423+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 16384 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:38.105603+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 16384 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:39.105756+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 16384 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:40.105871+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 16384 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:41.106001+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73334784 unmapped: 8192 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:42.106162+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73334784 unmapped: 8192 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:43.106302+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73334784 unmapped: 8192 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:44.106451+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73334784 unmapped: 8192 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:45.106614+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73334784 unmapped: 8192 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:46.106746+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 0 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:47.106917+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 0 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:48.107062+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 0 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:49.107229+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 0 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:50.107392+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 0 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:51.107610+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 0 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:52.107729+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 0 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:53.107876+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 0 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:54.108045+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 0 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:55.108170+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 0 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:56.108311+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 0 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:57.108523+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 0 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:58.108657+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 0 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:44:59.108791+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 0 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:00.108938+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 0 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:01.109090+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 0 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:02.109217+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 0 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:03.109397+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 0 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:04.109521+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 0 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:05.109689+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 0 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:06.109835+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 1032192 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:07.109967+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 1032192 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:08.110123+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 1032192 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:09.110293+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 1032192 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:10.110410+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 1032192 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:11.110535+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 1032192 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:12.110736+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 1032192 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:13.111407+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 1032192 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:14.111515+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 1032192 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:15.112456+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 1032192 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:16.112694+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 1032192 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:17.112857+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 1032192 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:18.113558+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 1032192 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:19.114092+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 1032192 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:20.114202+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 1032192 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:21.114314+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 1032192 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:22.114453+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 1032192 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:23.114604+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 1032192 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:24.114775+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 1032192 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:25.115253+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 1032192 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:26.115389+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 999424 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:27.115615+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 999424 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:28.115819+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 999424 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:29.115957+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 999424 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:30.116104+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 999424 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:31.116414+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 999424 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:32.116547+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 999424 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:33.116687+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 999424 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:34.116866+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 999424 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:35.117132+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 999424 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:36.117301+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 999424 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:37.117481+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 999424 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:38.117722+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 999424 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:39.117893+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 999424 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:40.118042+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 999424 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:41.118180+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 999424 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:42.118294+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 999424 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:43.118381+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 999424 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:44.118750+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 999424 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:45.119365+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 999424 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:46.119522+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 999424 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:47.119719+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 999424 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:48.120101+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 999424 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:49.120233+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 999424 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:50.120491+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 999424 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:51.120683+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 991232 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:52.120899+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 991232 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:53.121232+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 991232 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:54.121566+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 991232 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:55.121745+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 991232 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:56.121970+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 991232 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:57.122157+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 991232 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:58.122380+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 991232 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:45:59.122636+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:00.122848+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 991232 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:01.123014+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 991232 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:02.123262+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 991232 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:03.123507+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 991232 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:04.123671+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 991232 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:05.123876+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 991232 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:06.124067+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 991232 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:07.124280+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 991232 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:08.124489+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 991232 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:09.124725+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 991232 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:10.124891+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 991232 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:11.125067+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 991232 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:12.125245+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 991232 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:13.125408+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 991232 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:14.125543+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 991232 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:15.125795+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 991232 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:16.125954+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 991232 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:17.126125+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 974848 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:18.126273+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 974848 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:19.126535+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 974848 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:20.126682+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 974848 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 5743 writes, 24K keys, 5743 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5743 writes, 944 syncs, 6.08 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 212 writes, 318 keys, 212 commit groups, 1.0 writes per commit group, ingest: 0.11 MB, 0.00 MB/s
                                           Interval WAL: 212 writes, 106 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55660e36f4b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55660e36f4b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55660e36f4b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55660e36f4b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55660e36f4b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55660e36f4b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55660e36f4b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55660e36f350#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55660e36f350#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55660e36f350#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55660e36f4b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55660e36f4b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:21.126816+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 974848 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:22.126968+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 974848 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:23.127167+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 974848 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:24.127365+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 974848 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:25.127634+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 974848 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:26.127823+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 974848 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:27.128006+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 974848 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:28.128162+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 974848 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:29.128378+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 974848 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:30.128553+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 974848 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:31.128737+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 974848 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:32.128950+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 974848 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:33.129129+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 974848 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:34.129307+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 974848 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:35.129542+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 974848 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:36.129688+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 974848 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:37.129821+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 950272 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:38.129980+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 950272 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:39.130131+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 950272 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:40.130234+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 950272 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:41.130379+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 950272 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:42.130510+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 950272 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:43.130731+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 950272 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:44.130905+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 950272 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:45.131187+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 950272 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:46.131400+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 950272 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:47.131596+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 950272 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:48.131717+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 950272 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:49.131897+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 950272 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:50.132066+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 950272 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:51.132174+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 950272 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:52.132364+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 950272 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:53.132526+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 950272 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:54.132664+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 950272 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:55.133310+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 950272 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:56.133461+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 950272 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:57.133663+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 950272 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:58.133796+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 950272 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:46:59.133969+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 950272 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:00.134148+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 950272 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:01.134312+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 950272 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:02.134537+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 950272 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:03.134712+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 950272 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:04.134868+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 950272 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:05.135084+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 950272 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:06.135246+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 950272 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:07.135409+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 950272 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:08.135551+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 950272 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:09.135753+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 950272 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:10.135919+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 950272 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:11.136096+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 950272 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:12.136270+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 950272 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:13.136429+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 950272 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:14.136577+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 950272 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:15.136744+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 950272 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:16.136934+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 950272 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:17.137136+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 950272 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:18.137293+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 950272 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:19.137521+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 950272 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:20.137722+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 950272 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:21.137915+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 950272 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:22.138065+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 950272 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:23.138217+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 950272 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:24.138369+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 950272 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:25.138641+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 950272 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:26.138812+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 599.728332520s of 600.150024414s, submitted: 106
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 917504 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:27.139036+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1064960 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:28.139216+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73367552 unmapped: 1024000 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:29.139464+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73367552 unmapped: 1024000 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:30.139626+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73367552 unmapped: 1024000 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:31.139816+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73367552 unmapped: 1024000 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:32.139962+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73383936 unmapped: 1007616 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:33.140167+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73383936 unmapped: 1007616 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:34.140367+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73383936 unmapped: 1007616 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:35.140553+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73383936 unmapped: 1007616 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:36.140733+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73383936 unmapped: 1007616 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:37.140915+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73383936 unmapped: 1007616 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:38.141103+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73383936 unmapped: 1007616 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:39.141290+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73383936 unmapped: 1007616 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:40.141534+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73383936 unmapped: 1007616 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:41.141731+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73383936 unmapped: 1007616 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:42.141902+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73383936 unmapped: 1007616 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:43.142033+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73383936 unmapped: 1007616 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:44.142191+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73383936 unmapped: 1007616 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:45.142463+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73383936 unmapped: 1007616 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:46.142645+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 991232 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:47.142780+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 991232 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:48.142957+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 991232 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:49.143131+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 991232 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:50.143298+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 991232 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:51.143453+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 991232 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:52.143610+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 991232 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:53.143803+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 991232 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:54.144067+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 991232 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:55.144285+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 991232 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:56.144542+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 991232 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:57.144875+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 991232 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:58.145042+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 991232 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:47:59.145167+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 991232 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:00.145310+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 991232 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:01.145543+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 991232 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:02.145667+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 991232 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:03.145877+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 991232 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:04.146011+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 991232 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:05.146158+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 991232 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:06.146323+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 974848 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:07.146538+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 974848 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:08.146755+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 974848 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:09.146949+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 974848 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:10.147087+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 974848 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:11.147220+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 974848 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:12.147407+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 974848 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:13.147700+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 974848 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:14.147897+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 974848 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:15.148098+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 974848 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:16.148261+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 974848 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:17.148458+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 974848 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:18.148675+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 974848 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:19.148859+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 974848 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:20.149034+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 974848 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:21.149219+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 974848 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:22.149433+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 974848 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:23.149569+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 974848 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:24.149711+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 974848 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:25.149912+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 974848 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:26.150046+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 958464 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:27.150209+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 958464 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:28.150344+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 958464 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:29.150548+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 958464 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:30.150711+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 958464 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:31.150876+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 958464 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:32.151032+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 958464 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:33.151187+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 958464 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:34.151370+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 958464 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:35.151534+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 958464 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:36.151629+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 958464 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:37.151777+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 958464 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:38.151932+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 958464 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:39.152159+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 958464 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:40.152287+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 958464 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:41.152482+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 958464 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:42.152677+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 958464 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:43.152819+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 958464 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:44.152970+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 958464 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:45.153148+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 958464 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:46.153324+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 942080 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:47.153517+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 942080 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:48.153657+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 942080 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:49.153837+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 942080 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:50.153986+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 942080 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:51.154144+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 942080 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:52.154451+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 942080 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:53.154649+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 942080 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:54.154827+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 942080 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:55.155009+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 942080 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:56.155138+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 942080 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:57.155749+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 942080 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:58.156687+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 942080 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:48:59.157015+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 942080 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:00.157389+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 942080 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:01.158929+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 942080 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:02.159436+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 942080 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:03.159716+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 942080 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:04.160452+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 942080 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:05.163136+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 942080 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:06.163315+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73465856 unmapped: 925696 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:07.163765+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73465856 unmapped: 925696 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:08.163903+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73465856 unmapped: 925696 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:09.164054+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73465856 unmapped: 925696 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:10.164499+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:58 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:58 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73465856 unmapped: 925696 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:11.164686+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73465856 unmapped: 925696 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:12.164836+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73465856 unmapped: 925696 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:13.165066+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73465856 unmapped: 925696 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:14.165246+0000)
Oct 11 05:00:58 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:58 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73465856 unmapped: 925696 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:58 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:15.165476+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73465856 unmapped: 925696 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:16.165668+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73465856 unmapped: 925696 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:17.165823+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73465856 unmapped: 925696 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:18.165996+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73465856 unmapped: 925696 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:19.166237+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73465856 unmapped: 925696 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:20.166450+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73465856 unmapped: 925696 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:21.166798+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73465856 unmapped: 925696 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:22.166957+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73465856 unmapped: 925696 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:23.167175+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73465856 unmapped: 925696 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:24.167377+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73465856 unmapped: 925696 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:25.167777+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73465856 unmapped: 925696 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:26.167957+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 884736 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:27.168108+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 884736 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:28.168428+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 884736 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:29.168589+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 884736 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:30.168812+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 884736 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:31.169030+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 884736 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:32.169213+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 884736 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:33.169435+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 884736 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:34.169579+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 884736 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:35.169787+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 884736 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:36.169941+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:37.170058+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 884736 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:38.170256+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 884736 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:39.170432+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 884736 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:40.170611+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 884736 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:41.170786+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 884736 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:42.170929+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 884736 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:43.171098+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 884736 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:44.171269+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 884736 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:45.171529+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 884736 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:46.171720+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 884736 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:47.171872+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 868352 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:48.172035+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 868352 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:49.172231+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 868352 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:50.172433+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 868352 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:51.172663+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 868352 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:52.172850+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 868352 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:53.173044+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 868352 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:54.173244+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 868352 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:55.173642+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 868352 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:56.173841+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 868352 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:57.174060+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 868352 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:58.174250+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 868352 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:49:59.174460+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 868352 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:00.174620+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 868352 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:01.174807+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 868352 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:02.175603+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 868352 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:03.176722+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 868352 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:04.177416+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 868352 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:05.177970+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 868352 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:06.178712+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 868352 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:07.179409+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 851968 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:08.180071+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 851968 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:09.180724+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 851968 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:10.181226+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 851968 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:11.181622+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 851968 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:12.182009+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 851968 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:13.182431+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 851968 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:14.182815+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 851968 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:15.183194+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 851968 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:16.183344+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 851968 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:17.183630+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 851968 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:18.183824+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 851968 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:19.184108+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 851968 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:20.184401+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 851968 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:21.184620+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 851968 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:22.184809+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 851968 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:23.185022+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 851968 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:24.185241+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 851968 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:25.185489+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 851968 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:26.185706+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73555968 unmapped: 835584 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:27.185919+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73572352 unmapped: 819200 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:28.186156+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73572352 unmapped: 819200 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:29.186390+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73572352 unmapped: 819200 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:30.186597+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73572352 unmapped: 819200 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:31.186829+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73572352 unmapped: 819200 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:32.187035+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73572352 unmapped: 819200 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:33.187230+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73572352 unmapped: 819200 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:34.187392+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73572352 unmapped: 819200 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:35.187618+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73572352 unmapped: 819200 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:36.187825+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73572352 unmapped: 819200 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:37.188010+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73572352 unmapped: 819200 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:38.188180+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73572352 unmapped: 819200 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:39.188402+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73572352 unmapped: 819200 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:40.188600+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73572352 unmapped: 819200 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:41.188838+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73572352 unmapped: 819200 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:42.189013+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73572352 unmapped: 819200 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:43.189196+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73572352 unmapped: 819200 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:44.189411+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73572352 unmapped: 819200 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:45.189676+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73572352 unmapped: 819200 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:46.189828+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73572352 unmapped: 819200 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:47.190015+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73588736 unmapped: 802816 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:48.190189+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73588736 unmapped: 802816 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:49.190426+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73588736 unmapped: 802816 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:50.190615+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73588736 unmapped: 802816 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:51.190751+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73588736 unmapped: 802816 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:52.190905+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73588736 unmapped: 802816 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:53.191110+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73588736 unmapped: 802816 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:54.191290+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73588736 unmapped: 802816 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:55.191534+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73588736 unmapped: 802816 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:56.191662+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73588736 unmapped: 802816 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:57.191827+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73588736 unmapped: 802816 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:58.191995+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73588736 unmapped: 802816 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:50:59.192234+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73588736 unmapped: 802816 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:00.192394+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73588736 unmapped: 802816 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:01.192627+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73588736 unmapped: 802816 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:02.192904+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73588736 unmapped: 802816 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:03.193136+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73588736 unmapped: 802816 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:04.193317+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73588736 unmapped: 802816 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:05.193588+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73588736 unmapped: 802816 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:06.193750+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73588736 unmapped: 802816 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:07.193948+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73605120 unmapped: 786432 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:08.194129+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73605120 unmapped: 786432 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:09.194448+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73605120 unmapped: 786432 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:10.194720+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73605120 unmapped: 786432 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:11.194975+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73605120 unmapped: 786432 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:12.195169+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73605120 unmapped: 786432 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:13.195412+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73605120 unmapped: 786432 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:14.195584+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73605120 unmapped: 786432 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:15.195824+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73605120 unmapped: 786432 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:16.196011+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73605120 unmapped: 786432 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:17.196203+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73605120 unmapped: 786432 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:18.196451+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73605120 unmapped: 786432 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:19.196683+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73605120 unmapped: 786432 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:20.196882+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73605120 unmapped: 786432 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:21.197056+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73605120 unmapped: 786432 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:22.197253+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73605120 unmapped: 786432 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xacb1f/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:23.197436+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73605120 unmapped: 786432 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:24.197579+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73605120 unmapped: 786432 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:25.197806+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73605120 unmapped: 786432 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852101 data_alloc: 218103808 data_used: 118784
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:26.197924+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _renew_subs
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 114 handle_osd_map epochs [115,115], i have 114, src has [1,115]
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 239.658248901s of 240.028640747s, submitted: 106
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac0000/0x0/0x4ffc00000, data 0xae69c/0x15d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73662464 unmapped: 729088 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac0000/0x0/0x4ffc00000, data 0xae69c/0x15d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x5566127b5400
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _renew_subs
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 115 handle_osd_map epochs [116,116], i have 115, src has [1,116]
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:27.198081+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 74850304 unmapped: 589824 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 116 handle_osd_map epochs [116,117], i have 116, src has [1,117]
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 117 ms_handle_reset con 0x5566127b5400 session 0x55661291a3c0
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:28.198218+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x55660fd70800
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 1490944 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:29.198404+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 74113024 unmapped: 18112512 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 117 handle_osd_map epochs [117,118], i have 117, src has [1,118]
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 118 ms_handle_reset con 0x55660fd70800 session 0x55661291a960
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:30.198599+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 74129408 unmapped: 18096128 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 929021 data_alloc: 218103808 data_used: 135168
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:31.198817+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 74129408 unmapped: 18096128 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fc2b2000/0x0/0x4ffc00000, data 0x8b3a3b/0x96b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:32.198986+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 74145792 unmapped: 18079744 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:33.199185+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 74145792 unmapped: 18079744 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:34.199393+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 74145792 unmapped: 18079744 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:35.199657+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 74145792 unmapped: 18079744 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 929181 data_alloc: 218103808 data_used: 139264
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:36.199857+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fc2b2000/0x0/0x4ffc00000, data 0x8b3a3b/0x96b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 118 handle_osd_map epochs [119,119], i have 118, src has [1,119]
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.908299446s of 10.049924850s, submitted: 37
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 74153984 unmapped: 18071552 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:37.200067+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 74153984 unmapped: 18071552 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:38.200319+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 74153984 unmapped: 18071552 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:39.200559+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 119 heartbeat osd_stat(store_statfs(0x4fc2af000/0x0/0x4ffc00000, data 0x8b549e/0x96e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 74153984 unmapped: 18071552 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:40.200750+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 74153984 unmapped: 18071552 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 931979 data_alloc: 218103808 data_used: 147456
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:41.200897+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 74153984 unmapped: 18071552 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:42.201002+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 74153984 unmapped: 18071552 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:43.201191+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 74153984 unmapped: 18071552 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 119 heartbeat osd_stat(store_statfs(0x4fc2af000/0x0/0x4ffc00000, data 0x8b549e/0x96e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:44.201394+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 74153984 unmapped: 18071552 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:45.201628+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 74153984 unmapped: 18071552 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:46.201792+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 931979 data_alloc: 218103808 data_used: 147456
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 18055168 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 119 heartbeat osd_stat(store_statfs(0x4fc2af000/0x0/0x4ffc00000, data 0x8b549e/0x96e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:47.201963+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 18055168 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:48.202112+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 18055168 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:49.202268+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 18055168 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:50.202412+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 18055168 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:51.202579+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 931979 data_alloc: 218103808 data_used: 147456
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 119 heartbeat osd_stat(store_statfs(0x4fc2af000/0x0/0x4ffc00000, data 0x8b549e/0x96e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 18055168 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:52.202710+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 74178560 unmapped: 18046976 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:53.202877+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 74178560 unmapped: 18046976 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:54.203064+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 74178560 unmapped: 18046976 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:55.203266+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 119 heartbeat osd_stat(store_statfs(0x4fc2af000/0x0/0x4ffc00000, data 0x8b549e/0x96e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 74178560 unmapped: 18046976 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:56.203418+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 931979 data_alloc: 218103808 data_used: 147456
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 74178560 unmapped: 18046976 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:57.203604+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 74178560 unmapped: 18046976 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:58.203756+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 119 heartbeat osd_stat(store_statfs(0x4fc2af000/0x0/0x4ffc00000, data 0x8b549e/0x96e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 74162176 unmapped: 18063360 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:51:59.203949+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 74162176 unmapped: 18063360 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:00.204073+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 74162176 unmapped: 18063360 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:01.204268+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 931979 data_alloc: 218103808 data_used: 147456
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 74162176 unmapped: 18063360 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:02.204422+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 74162176 unmapped: 18063360 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:03.204510+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 119 heartbeat osd_stat(store_statfs(0x4fc2af000/0x0/0x4ffc00000, data 0x8b549e/0x96e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 74162176 unmapped: 18063360 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:04.204668+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x55661296e000
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 119 ms_handle_reset con 0x55661296e000 session 0x55661291b680
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 74186752 unmapped: 18038784 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x55660fd70800
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _renew_subs
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 119 handle_osd_map epochs [120,120], i have 119, src has [1,120]
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 28.743133545s of 28.755020142s, submitted: 9
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:05.204854+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 120 ms_handle_reset con 0x55660fd70800 session 0x55661291b860
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 74194944 unmapped: 18030592 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x55660fd71400
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _renew_subs
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 120 handle_osd_map epochs [121,121], i have 120, src has [1,121]
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:06.205017+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942124 data_alloc: 218103808 data_used: 147456
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _renew_subs
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 121 handle_osd_map epochs [122,122], i have 121, src has [1,122]
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 122 ms_handle_reset con 0x55660fd71400 session 0x55661291ba40
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 17940480 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:07.205168+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 17940480 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 122 handle_osd_map epochs [123,123], i have 122, src has [1,123]
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 123 heartbeat osd_stat(store_statfs(0x4fc2a3000/0x0/0x4ffc00000, data 0x8bac96/0x978000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:08.205278+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x5566129a5c00
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 123 ms_handle_reset con 0x5566129a5c00 session 0x5566129c8b40
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 74317824 unmapped: 17907712 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 123 handle_osd_map epochs [124,124], i have 123, src has [1,124]
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:09.205451+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 124 handle_osd_map epochs [124,125], i have 124, src has [1,125]
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 17899520 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:10.205569+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 17899520 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:11.205706+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956266 data_alloc: 218103808 data_used: 151552
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 17899520 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fc299000/0x0/0x4ffc00000, data 0x8bff1d/0x982000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:12.205966+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 17899520 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x5566129a5800
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:13.206187+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x5566129a5400
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 17784832 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _renew_subs
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 125 handle_osd_map epochs [126,126], i have 125, src has [1,126]
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 126 ms_handle_reset con 0x5566129a5800 session 0x5566129c8f00
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 126 ms_handle_reset con 0x5566129a5400 session 0x556612988780
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:14.206435+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x55660fd70800
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 74473472 unmapped: 17752064 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:15.206636+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _renew_subs
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 126 handle_osd_map epochs [127,127], i have 126, src has [1,127]
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.967695236s of 10.158306122s, submitted: 44
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 127 ms_handle_reset con 0x55660fd70800 session 0x5566129c94a0
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x55660fd71400
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 74514432 unmapped: 17711104 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x5566129a5800
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _renew_subs
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 127 handle_osd_map epochs [128,128], i have 127, src has [1,128]
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 128 heartbeat osd_stat(store_statfs(0x4fc290000/0x0/0x4ffc00000, data 0x8c3b28/0x98c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 128 ms_handle_reset con 0x55660fd71400 session 0x5566129c9a40
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:16.206777+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976629 data_alloc: 218103808 data_used: 159744
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 128 ms_handle_reset con 0x5566129a5800 session 0x556612988f00
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x5566129a5c00
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 17661952 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x5566129a5000
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _renew_subs
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 128 handle_osd_map epochs [129,129], i have 128, src has [1,129]
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 128 handle_osd_map epochs [129,129], i have 129, src has [1,129]
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x5566129a4c00
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 129 ms_handle_reset con 0x5566129a4c00 session 0x55660f8e6d20
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:17.207020+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x55661220f400
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 129 ms_handle_reset con 0x55661220f400 session 0x55660fec3a40
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 129 ms_handle_reset con 0x5566129a5c00 session 0x556612988780
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 91586560 unmapped: 9035776 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x55660fd70800
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 129 handle_osd_map epochs [129,130], i have 129, src has [1,130]
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:18.207205+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 130 ms_handle_reset con 0x55660fd70800 session 0x55660fec3680
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 24666112 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x55660fd71400
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 130 handle_osd_map epochs [130,131], i have 130, src has [1,131]
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:19.207423+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 131 ms_handle_reset con 0x5566129a5000 session 0x5566129894a0
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 131 ms_handle_reset con 0x55660fd71400 session 0x556612822b40
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 24649728 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:20.207797+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x5566129a4c00
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _renew_subs
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 131 handle_osd_map epochs [132,132], i have 131, src has [1,132]
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x5566129a5800
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 24649728 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 132 handle_osd_map epochs [132,133], i have 132, src has [1,133]
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:21.207946+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1264389 data_alloc: 218103808 data_used: 172032
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 133 heartbeat osd_stat(store_statfs(0x4f9a82000/0x0/0x4ffc00000, data 0x30cbf8d/0x319b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 133 ms_handle_reset con 0x5566129a5800 session 0x556612989a40
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 133 ms_handle_reset con 0x5566129a4c00 session 0x5566128230e0
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 77004800 unmapped: 23617536 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x55660fd70800
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x55660fd71400
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _renew_subs
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 133 handle_osd_map epochs [134,134], i have 133, src has [1,134]
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:22.208113+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 134 ms_handle_reset con 0x55660fd70800 session 0x5566128234a0
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 77045760 unmapped: 23576576 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:23.208488+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 134 handle_osd_map epochs [134,135], i have 134, src has [1,135]
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 135 ms_handle_reset con 0x55660fd71400 session 0x556611d4cd20
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc27e000/0x0/0x4ffc00000, data 0x8cf326/0x99c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 23486464 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:24.208641+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 23486464 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:25.208906+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc27e000/0x0/0x4ffc00000, data 0x8d0f14/0x99e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 135 handle_osd_map epochs [136,136], i have 135, src has [1,136]
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.198155403s of 10.192216873s, submitted: 203
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 77152256 unmapped: 23470080 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:26.209066+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x5566129a5000
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001658 data_alloc: 218103808 data_used: 180224
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 136 handle_osd_map epochs [136,137], i have 136, src has [1,137]
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 137 ms_handle_reset con 0x5566129a5000 session 0x5566129c94a0
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 77152256 unmapped: 23470080 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:27.209231+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 77152256 unmapped: 23470080 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:28.209443+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 77152256 unmapped: 23470080 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:29.209594+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc278000/0x0/0x4ffc00000, data 0x8d45bc/0x9a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x5566129a5c00
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _renew_subs
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 137 handle_osd_map epochs [138,138], i have 137, src has [1,138]
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 22388736 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 138 ms_handle_reset con 0x5566129a5c00 session 0x55661291a960
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 138 handle_osd_map epochs [138,139], i have 138, src has [1,139]
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:30.209785+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x5566129a5c00
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 22388736 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:31.209941+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _renew_subs
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1017062 data_alloc: 218103808 data_used: 188416
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 140 ms_handle_reset con 0x5566129a5c00 session 0x55661291b680
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x55660fd70800
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 140 ms_handle_reset con 0x55660fd70800 session 0x556612989a40
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x55660fd71400
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 22323200 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:32.210078+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 140 handle_osd_map epochs [141,141], i have 140, src has [1,141]
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78331904 unmapped: 22290432 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 141 handle_osd_map epochs [141,142], i have 141, src has [1,142]
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x5566129a4c00
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:33.210211+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 142 ms_handle_reset con 0x5566129a4c00 session 0x556610b88000
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 142 ms_handle_reset con 0x55660fd71400 session 0x5566129885a0
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x5566129a5000
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 142 ms_handle_reset con 0x5566129a5000 session 0x556612989680
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78348288 unmapped: 22274048 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:34.210401+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc267000/0x0/0x4ffc00000, data 0x8dd011/0x9b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78348288 unmapped: 22274048 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:35.210578+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc267000/0x0/0x4ffc00000, data 0x8dd011/0x9b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78348288 unmapped: 22274048 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:36.210761+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1023257 data_alloc: 218103808 data_used: 192512
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78356480 unmapped: 22265856 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:37.210908+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78356480 unmapped: 22265856 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:38.211035+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78356480 unmapped: 22265856 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:39.211197+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc267000/0x0/0x4ffc00000, data 0x8dd011/0x9b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78356480 unmapped: 22265856 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 142 handle_osd_map epochs [142,143], i have 142, src has [1,143]
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.400314331s of 14.649963379s, submitted: 66
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:40.212387+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc265000/0x0/0x4ffc00000, data 0x8deaac/0x9b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78356480 unmapped: 22265856 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:41.212516+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1025367 data_alloc: 218103808 data_used: 192512
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 22347776 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x5566129a5000
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 143 ms_handle_reset con 0x5566129a5000 session 0x55660fec2f00
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:42.212703+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x55660fd70800
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 143 ms_handle_reset con 0x55660fd70800 session 0x55660fec30e0
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78266368 unmapped: 22355968 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:43.212872+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x55660fd71400
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 143 ms_handle_reset con 0x55660fd71400 session 0x556611d4cd20
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x5566129a4c00
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 22331392 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:44.213027+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 143 handle_osd_map epochs [144,144], i have 143, src has [1,144]
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 22323200 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _renew_subs
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 144 handle_osd_map epochs [145,145], i have 144, src has [1,145]
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:45.213190+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 145 ms_handle_reset con 0x5566129a4c00 session 0x556612822b40
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fc262000/0x0/0x4ffc00000, data 0x8e0629/0x9bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 22298624 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:46.213383+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1035270 data_alloc: 218103808 data_used: 192512
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 22298624 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:47.213548+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 22298624 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:48.213705+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 22298624 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:49.213872+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x5566129a5c00
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 22323200 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fc25d000/0x0/0x4ffc00000, data 0x8e221d/0x9bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _renew_subs
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 145 handle_osd_map epochs [146,146], i have 145, src has [1,146]
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.932152748s of 10.016571999s, submitted: 44
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x55661220ec00
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:50.214007+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 146 ms_handle_reset con 0x5566129a5c00 session 0x5566128234a0
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 22323200 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fc25d000/0x0/0x4ffc00000, data 0x8e221d/0x9bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _renew_subs
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 146 handle_osd_map epochs [147,147], i have 146, src has [1,147]
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 147 ms_handle_reset con 0x55661220ec00 session 0x55660f8e72c0
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:51.214143+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1042827 data_alloc: 218103808 data_used: 200704
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 22315008 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x55660fd70800
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:52.214296+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 147 ms_handle_reset con 0x55660fd70800 session 0x5566128a0f00
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 22519808 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x55660fd71400
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 147 ms_handle_reset con 0x55660fd71400 session 0x5566128a14a0
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x5566129a4c00
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:53.214437+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78143488 unmapped: 22478848 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fc258000/0x0/0x4ffc00000, data 0x8e593f/0x9c4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _renew_subs
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 147 handle_osd_map epochs [148,148], i have 147, src has [1,148]
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 148 ms_handle_reset con 0x5566129a4c00 session 0x5566128a1680
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:54.214550+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 22470656 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:55.214809+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x5566129a5000
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 22470656 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _renew_subs
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 148 handle_osd_map epochs [149,149], i have 148, src has [1,149]
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:56.214914+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 149 ms_handle_reset con 0x5566129a5000 session 0x5566128a1a40
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046958 data_alloc: 218103808 data_used: 204800
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 22437888 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:57.215075+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 22437888 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 149 handle_osd_map epochs [149,150], i have 149, src has [1,150]
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:58.215204+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78086144 unmapped: 22536192 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:52:59.215428+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc24f000/0x0/0x4ffc00000, data 0x8eacb2/0x9cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 22503424 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:00.215595+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 150 handle_osd_map epochs [151,151], i have 150, src has [1,151]
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.250770569s of 10.486794472s, submitted: 80
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 22503424 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:01.215737+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053066 data_alloc: 218103808 data_used: 208896
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 22503424 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fc24d000/0x0/0x4ffc00000, data 0x8ec74d/0x9d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:02.215897+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x55660fd70800
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 151 ms_handle_reset con 0x55660fd70800 session 0x55661251f2c0
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x55660fd71400
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 151 ms_handle_reset con 0x55660fd71400 session 0x55661251f860
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x55661220ec00
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 151 ms_handle_reset con 0x55661220ec00 session 0x55661251fe00
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x5566129a4c00
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 151 ms_handle_reset con 0x5566129a4c00 session 0x5566128a0f00
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x5566127b5400
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 151 ms_handle_reset con 0x5566127b5400 session 0x5566128a1a40
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78077952 unmapped: 22544384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:03.216006+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x5566127b5400
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 151 ms_handle_reset con 0x5566127b5400 session 0x5566128234a0
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x55660fd70800
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 151 ms_handle_reset con 0x55660fd70800 session 0x556612823c20
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78077952 unmapped: 22544384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x55660fd71400
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 151 ms_handle_reset con 0x55660fd71400 session 0x55660f8e6000
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:04.216192+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x5566129a5800
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 151 ms_handle_reset con 0x5566129a5800 session 0x55660f94c780
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x5566129a5000
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 22528000 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:05.216424+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 22528000 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:06.216567+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1055802 data_alloc: 218103808 data_used: 212992
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 22528000 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:07.216717+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 22528000 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fc24d000/0x0/0x4ffc00000, data 0x8ec770/0x9d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:08.216886+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x5566129a5c00
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 151 ms_handle_reset con 0x5566129a5c00 session 0x55660ff910e0
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x5566129a5c00
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _renew_subs
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 151 handle_osd_map epochs [152,152], i have 151, src has [1,152]
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 152 ms_handle_reset con 0x5566129a5c00 session 0x55661251ef00
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x5566129a5800
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x5566129a4c00
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 152 ms_handle_reset con 0x5566129a4c00 session 0x55660fec3e00
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 152 ms_handle_reset con 0x5566129a5800 session 0x556612823c20
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x5566129a4000
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 152 ms_handle_reset con 0x5566129a4000 session 0x55661251f860
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x5566127b5400
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78217216 unmapped: 22405120 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:09.217036+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 152 handle_osd_map epochs [152,153], i have 152, src has [1,153]
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _renew_subs
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 152 handle_osd_map epochs [153,153], i have 153, src has [1,153]
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 153 ms_handle_reset con 0x5566127b5400 session 0x556610a523c0
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x5566127b5400
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78225408 unmapped: 22396928 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:10.217156+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _renew_subs
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 153 handle_osd_map epochs [154,154], i have 153, src has [1,154]
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 154 ms_handle_reset con 0x5566127b5400 session 0x556610a532c0
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 154 heartbeat osd_stat(store_statfs(0x4fc23e000/0x0/0x4ffc00000, data 0x8f1aeb/0x9dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 22372352 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:11.217285+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1072695 data_alloc: 218103808 data_used: 225280
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x5566129a4000
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 154 ms_handle_reset con 0x5566129a4000 session 0x556610a53680
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x5566129a4c00
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 154 ms_handle_reset con 0x5566129a4c00 session 0x556610a53c20
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x5566129a5800
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 154 ms_handle_reset con 0x5566129a5800 session 0x556610a53e00
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 22364160 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:12.217455+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x5566129a5c00
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.109927177s of 12.313597679s, submitted: 80
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 154 ms_handle_reset con 0x5566129a5c00 session 0x55661279a1e0
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 22364160 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:13.217616+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x5566127b5400
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 154 handle_osd_map epochs [154,155], i have 154, src has [1,155]
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 22331392 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 155 ms_handle_reset con 0x5566127b5400 session 0x55661279a5a0
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:14.217772+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 22315008 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 155 handle_osd_map epochs [155,156], i have 155, src has [1,156]
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:15.217998+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 156 ms_handle_reset con 0x5566129a5000 session 0x556611d4cd20
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x5566129a4000
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78340096 unmapped: 22282240 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _renew_subs
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 156 handle_osd_map epochs [157,157], i have 156, src has [1,157]
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:16.218237+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 157 ms_handle_reset con 0x5566129a4000 session 0x556610a554a0
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077787 data_alloc: 218103808 data_used: 241664
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 157 heartbeat osd_stat(store_statfs(0x4fc23c000/0x0/0x4ffc00000, data 0x8f50c3/0x9e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78340096 unmapped: 22282240 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:17.218650+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78340096 unmapped: 22282240 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:18.219040+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78340096 unmapped: 22282240 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:19.219300+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78340096 unmapped: 22282240 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:20.219420+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 157 handle_osd_map epochs [158,158], i have 157, src has [1,158]
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x5566129a4c00
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 158 ms_handle_reset con 0x5566129a4c00 session 0x55660fec3e00
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78340096 unmapped: 22282240 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:21.219656+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fc238000/0x0/0x4ffc00000, data 0x8f86b1/0x9e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1079041 data_alloc: 218103808 data_used: 237568
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fc238000/0x0/0x4ffc00000, data 0x8f86b1/0x9e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78372864 unmapped: 22249472 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:22.219834+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x5566129a5800
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 158 ms_handle_reset con 0x5566129a5800 session 0x55661251fe00
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x5566129a5800
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78381056 unmapped: 22241280 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:23.220154+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _renew_subs
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 158 handle_osd_map epochs [159,159], i have 158, src has [1,159]
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.601550102s of 10.930751801s, submitted: 156
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 159 ms_handle_reset con 0x5566129a5800 session 0x5566128234a0
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78381056 unmapped: 22241280 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:24.220350+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78381056 unmapped: 22241280 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:25.220766+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78381056 unmapped: 22241280 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:26.220919+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1082413 data_alloc: 218103808 data_used: 233472
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc237000/0x0/0x4ffc00000, data 0x8fa240/0x9e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78381056 unmapped: 22241280 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:27.221165+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78381056 unmapped: 22241280 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:28.221345+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 78381056 unmapped: 22241280 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:29.221462+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc237000/0x0/0x4ffc00000, data 0x8fa240/0x9e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 159 handle_osd_map epochs [159,160], i have 159, src has [1,160]
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:30.221605+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:31.221746+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085387 data_alloc: 218103808 data_used: 233472
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc234000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:32.222045+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc234000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:33.222412+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:34.222617+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:35.222821+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:36.222955+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085387 data_alloc: 218103808 data_used: 233472
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:37.223223+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:38.223438+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc234000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:39.223648+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:40.223858+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:41.224009+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085387 data_alloc: 218103808 data_used: 233472
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:42.224165+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:43.224319+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:44.224484+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc234000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:45.224654+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc234000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc234000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:46.224791+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085387 data_alloc: 218103808 data_used: 233472
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:47.224966+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:48.225135+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:49.225311+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:50.225478+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:51.225626+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085387 data_alloc: 218103808 data_used: 233472
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc234000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:52.225784+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:53.225907+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:54.226065+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:55.226257+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:56.226408+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc234000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085387 data_alloc: 218103808 data_used: 233472
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:57.226595+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:58.226753+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:53:59.226971+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc234000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:00.227151+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:01.227254+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085387 data_alloc: 218103808 data_used: 233472
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:02.227416+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:03.227717+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc234000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:04.227928+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:05.228168+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:06.228534+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085387 data_alloc: 218103808 data_used: 233472
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:07.228768+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc234000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:08.228896+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:09.229072+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:10.229235+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:11.229522+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085387 data_alloc: 218103808 data_used: 233472
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:12.229746+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:13.229955+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc234000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:14.230093+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:15.230279+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:16.230476+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085387 data_alloc: 218103808 data_used: 233472
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:17.230623+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc234000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:18.230861+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:19.231065+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc234000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:20.232532+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:21.232697+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085387 data_alloc: 218103808 data_used: 233472
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc234000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:22.233183+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:23.233741+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc234000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:24.234721+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:25.234899+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:26.235036+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085387 data_alloc: 218103808 data_used: 233472
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:27.235179+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:28.256005+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc234000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:29.256307+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:30.257157+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc234000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:31.257291+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085387 data_alloc: 218103808 data_used: 233472
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:32.257396+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:33.257521+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:34.257653+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc234000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:35.257795+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:36.257897+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc234000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085387 data_alloc: 218103808 data_used: 233472
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc234000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:37.258053+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc234000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:38.258172+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:39.258571+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:40.258701+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:41.258878+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 21192704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085387 data_alloc: 218103808 data_used: 233472
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:42.259696+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79437824 unmapped: 21184512 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc234000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:43.259807+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 21045248 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: do_command 'config diff' '{prefix=config diff}'
Oct 11 05:00:59 compute-0 ceph-osd[87458]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 11 05:00:59 compute-0 ceph-osd[87458]: do_command 'config show' '{prefix=config show}'
Oct 11 05:00:59 compute-0 ceph-osd[87458]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:44.259923+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: do_command 'counter dump' '{prefix=counter dump}'
Oct 11 05:00:59 compute-0 ceph-osd[87458]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 11 05:00:59 compute-0 ceph-osd[87458]: do_command 'counter schema' '{prefix=counter schema}'
Oct 11 05:00:59 compute-0 ceph-osd[87458]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79937536 unmapped: 20684800 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:45.260065+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80109568 unmapped: 20512768 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:46.260177+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80191488 unmapped: 20430848 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: do_command 'log dump' '{prefix=log dump}'
Oct 11 05:00:59 compute-0 ceph-osd[87458]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085387 data_alloc: 218103808 data_used: 233472
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc234000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: do_command 'perf dump' '{prefix=perf dump}'
Oct 11 05:00:59 compute-0 ceph-osd[87458]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:47.260288+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Oct 11 05:00:59 compute-0 ceph-osd[87458]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Oct 11 05:00:59 compute-0 ceph-osd[87458]: do_command 'perf schema' '{prefix=perf schema}'
Oct 11 05:00:59 compute-0 ceph-osd[87458]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 20627456 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc234000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:48.260403+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 20496384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:49.260544+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 20496384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc234000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:50.260657+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 20496384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:51.260768+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 20496384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085387 data_alloc: 218103808 data_used: 233472
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc234000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:52.260878+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 20496384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc234000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:53.261002+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 20496384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:54.261125+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 20496384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:55.261302+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 20496384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:56.261401+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 20496384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085387 data_alloc: 218103808 data_used: 233472
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:57.261563+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc234000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 20496384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:58.261696+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 20496384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:54:59.261833+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 20496384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc234000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:00.261936+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 20496384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:01.262096+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 20496384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085387 data_alloc: 218103808 data_used: 233472
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc234000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:02.262243+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 20496384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:03.262406+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 20496384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:04.262562+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 20496384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:05.262695+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 20496384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:06.262826+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 20496384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085387 data_alloc: 218103808 data_used: 233472
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:07.262966+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc234000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 20496384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc234000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:08.263091+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 20496384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:09.263290+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 20496384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:10.263461+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 20496384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:11.263628+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 20496384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085387 data_alloc: 218103808 data_used: 233472
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:12.263817+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 20496384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:13.263934+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 20496384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc234000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:14.264100+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 20496384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:15.264267+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 20496384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:16.264410+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 20496384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085387 data_alloc: 218103808 data_used: 233472
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc234000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:17.264562+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 20496384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:18.264727+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 20496384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:19.264910+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 20496384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:20.265050+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 20496384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc234000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:21.265241+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 20496384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085387 data_alloc: 218103808 data_used: 233472
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:22.265414+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc234000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 20496384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:23.265531+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc234000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 20496384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:24.265663+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 20496384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:25.265859+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 20496384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:26.266063+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc234000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 20496384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085387 data_alloc: 218103808 data_used: 233472
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:27.266258+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 20496384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:28.266393+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 20496384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:29.266585+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc234000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 20496384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:30.266748+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 20496384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:31.267001+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 20496384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085387 data_alloc: 218103808 data_used: 233472
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:32.267948+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 20496384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:33.268070+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 20496384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:34.268304+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc234000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 20496384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:35.268521+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc234000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 20496384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:36.268751+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 20496384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085387 data_alloc: 218103808 data_used: 233472
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:37.268988+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 20496384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:38.269218+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 20496384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:39.269430+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 20496384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:40.269569+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 20496384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:41.269730+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc234000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 20496384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085387 data_alloc: 218103808 data_used: 233472
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:42.269982+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 20496384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:43.270102+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 20496384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc234000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:44.270263+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 20496384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:45.270529+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 20496384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:46.270727+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 20496384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085387 data_alloc: 218103808 data_used: 233472
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:47.270940+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc234000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 20496384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:48.271078+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 20496384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:49.271250+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 20496384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:50.271452+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 20496384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:51.271609+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 20496384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc234000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085387 data_alloc: 218103808 data_used: 233472
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:52.271826+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 20496384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:53.272039+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 20496384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:54.272184+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc234000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 20496384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:55.272407+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 20496384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:56.272537+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 20496384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085387 data_alloc: 218103808 data_used: 233472
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:57.272693+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 20496384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:58.272843+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 20496384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc234000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:55:59.273015+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 20496384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:00.273187+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 20496384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:01.273395+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 20496384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085387 data_alloc: 218103808 data_used: 233472
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:02.273647+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 20496384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc234000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:03.273800+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 20496384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:04.273960+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 20496384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:05.274178+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 20496384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:06.274453+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 20496384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085387 data_alloc: 218103808 data_used: 233472
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:07.274640+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 20496384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:08.274834+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 20496384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc234000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:09.275034+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 20496384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:10.275222+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 20496384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:11.275408+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 20496384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085387 data_alloc: 218103808 data_used: 233472
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:12.275585+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 20496384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:13.275761+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc234000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 20496384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:14.275916+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 20496384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:15.276102+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc234000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 20496384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:16.276415+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 20496384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085387 data_alloc: 218103808 data_used: 233472
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:17.276593+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 20496384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:18.276736+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 20496384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:19.276914+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 20496384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc234000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:20.277089+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 7213 writes, 28K keys, 7213 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 7213 writes, 1567 syncs, 4.60 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1470 writes, 4015 keys, 1470 commit groups, 1.0 writes per commit group, ingest: 2.26 MB, 0.00 MB/s
                                           Interval WAL: 1470 writes, 623 syncs, 2.36 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 20496384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:21.277252+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 20496384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085387 data_alloc: 218103808 data_used: 233472
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:22.277461+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 20496384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:23.277617+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 20496384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:24.277780+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 20496384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:25.277966+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc234000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 20496384 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:26.278125+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: mgrc ms_handle_reset ms_handle_reset con 0x5566106b7c00
Oct 11 05:00:59 compute-0 ceph-osd[87458]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1439243141
Oct 11 05:00:59 compute-0 ceph-osd[87458]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1439243141,v1:192.168.122.100:6801/1439243141]
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: get_auth_request con 0x5566129a5400 auth_method 0
Oct 11 05:00:59 compute-0 ceph-osd[87458]: mgrc handle_mgr_configure stats_period=5
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80363520 unmapped: 20258816 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085387 data_alloc: 218103808 data_used: 233472
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:27.278296+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80363520 unmapped: 20258816 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:28.278434+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80363520 unmapped: 20258816 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:29.278602+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc234000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80363520 unmapped: 20258816 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc234000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:30.278777+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80363520 unmapped: 20258816 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:31.278930+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80363520 unmapped: 20258816 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 ms_handle_reset con 0x55661220f000 session 0x55660fd845a0
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x5566127b5400
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085387 data_alloc: 218103808 data_used: 233472
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:32.279114+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80363520 unmapped: 20258816 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:33.279318+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80363520 unmapped: 20258816 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:34.279591+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80363520 unmapped: 20258816 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:35.279978+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc234000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc234000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80363520 unmapped: 20258816 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:36.280123+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80363520 unmapped: 20258816 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085387 data_alloc: 218103808 data_used: 233472
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:37.280306+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80363520 unmapped: 20258816 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:38.280450+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc234000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80363520 unmapped: 20258816 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:39.280648+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80363520 unmapped: 20258816 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:40.280834+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80363520 unmapped: 20258816 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc234000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:41.281028+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 20250624 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:42.281252+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085387 data_alloc: 218103808 data_used: 233472
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 20250624 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:43.281441+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc234000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 20250624 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:44.281574+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 20250624 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:45.281694+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 20250624 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:46.281908+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc234000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 20250624 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:47.282154+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085387 data_alloc: 218103808 data_used: 233472
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 20250624 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:48.282300+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 20250624 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:49.282447+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 20250624 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:50.282678+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 20250624 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc234000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:51.282871+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 20250624 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:52.283023+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085387 data_alloc: 218103808 data_used: 233472
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 20250624 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:53.283192+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 20250624 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:54.283603+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 20250624 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:55.283867+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 20250624 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:56.284089+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc234000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 20250624 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:57.284293+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085387 data_alloc: 218103808 data_used: 233472
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 20250624 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:58.284459+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 20250624 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc234000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:56:59.284603+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 20250624 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:00.284786+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 20250624 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:01.284999+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 20250624 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:02.285190+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085387 data_alloc: 218103808 data_used: 233472
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 20250624 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:03.285304+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc234000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 20250624 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:04.285440+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 20250624 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:05.285595+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 20250624 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:06.285750+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 20250624 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:07.285892+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085387 data_alloc: 218103808 data_used: 233472
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc234000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 20250624 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:08.286019+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 20250624 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:09.286135+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 20250624 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:10.289440+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 20250624 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:11.289650+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 20250624 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc234000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:12.289792+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085387 data_alloc: 218103808 data_used: 233472
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 20250624 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:13.289957+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 20250624 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:14.290114+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc234000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc234000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 20250624 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:15.290279+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 20250624 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:16.290463+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 20250624 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:17.290605+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085387 data_alloc: 218103808 data_used: 233472
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 20250624 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:18.290772+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 20250624 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:19.290948+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc234000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 ms_handle_reset con 0x5566106b7000 session 0x556610b892c0
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: handle_auth_request added challenge on 0x5566129a4000
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 20242432 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:20.291097+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 20242432 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:21.291266+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 20242432 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:22.291464+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085387 data_alloc: 218103808 data_used: 233472
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 20242432 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:23.291622+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 20242432 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:24.291808+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc234000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 20242432 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:25.291996+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 20242432 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:26.292122+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 242.487518311s of 242.544052124s, submitted: 22
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 20242432 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:27.292299+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084579 data_alloc: 218103808 data_used: 233472
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 20176896 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:28.292454+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 20176896 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:29.292622+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 20176896 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc235000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:30.292799+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 20176896 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:31.292970+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80453632 unmapped: 20168704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:32.293155+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084507 data_alloc: 218103808 data_used: 233472
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80453632 unmapped: 20168704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:33.293495+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80453632 unmapped: 20168704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:34.293715+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc235000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80453632 unmapped: 20168704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:35.293957+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80453632 unmapped: 20168704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:36.294152+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80453632 unmapped: 20168704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:37.294386+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084507 data_alloc: 218103808 data_used: 233472
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80453632 unmapped: 20168704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:38.294566+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc235000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80453632 unmapped: 20168704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:39.294775+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80453632 unmapped: 20168704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:40.294966+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80453632 unmapped: 20168704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/2522748930' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:41.295152+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc235000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-mon[74243]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80453632 unmapped: 20168704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-mon[74243]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/2727196841' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:42.295373+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084507 data_alloc: 218103808 data_used: 233472
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80453632 unmapped: 20168704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:43.295553+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80453632 unmapped: 20168704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:44.295766+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80453632 unmapped: 20168704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:45.296002+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80453632 unmapped: 20168704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:46.296162+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80453632 unmapped: 20168704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:47.296399+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084507 data_alloc: 218103808 data_used: 233472
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc235000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80453632 unmapped: 20168704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:48.296591+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80453632 unmapped: 20168704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:49.296846+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80453632 unmapped: 20168704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:50.297017+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80453632 unmapped: 20168704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:51.297185+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80453632 unmapped: 20168704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:52.297432+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084507 data_alloc: 218103808 data_used: 233472
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc235000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80453632 unmapped: 20168704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:53.297611+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc235000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80453632 unmapped: 20168704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:54.297779+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80453632 unmapped: 20168704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc235000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:55.297984+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80453632 unmapped: 20168704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:56.298167+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80453632 unmapped: 20168704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:57.298377+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084507 data_alloc: 218103808 data_used: 233472
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80453632 unmapped: 20168704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:58.298551+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80453632 unmapped: 20168704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:57:59.298735+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc235000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80453632 unmapped: 20168704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:00.298938+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80453632 unmapped: 20168704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:01.299100+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80453632 unmapped: 20168704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:02.299254+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084507 data_alloc: 218103808 data_used: 233472
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80453632 unmapped: 20168704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:03.299434+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc235000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80453632 unmapped: 20168704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:04.299606+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80453632 unmapped: 20168704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:05.299818+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80453632 unmapped: 20168704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:06.301158+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc235000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80453632 unmapped: 20168704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:07.306017+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084507 data_alloc: 218103808 data_used: 233472
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80453632 unmapped: 20168704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:08.307479+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80453632 unmapped: 20168704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:09.308609+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc235000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:10.309758+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80453632 unmapped: 20168704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:11.311074+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80453632 unmapped: 20168704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:12.312073+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80453632 unmapped: 20168704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084507 data_alloc: 218103808 data_used: 233472
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:13.312640+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc235000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80453632 unmapped: 20168704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:14.313533+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80453632 unmapped: 20168704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:15.313816+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80453632 unmapped: 20168704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:16.314487+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80453632 unmapped: 20168704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:17.314639+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80453632 unmapped: 20168704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084507 data_alloc: 218103808 data_used: 233472
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc235000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:18.318783+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80453632 unmapped: 20168704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:19.319198+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80453632 unmapped: 20168704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc235000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:20.319480+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80453632 unmapped: 20168704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:21.319768+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80453632 unmapped: 20168704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:22.320290+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80453632 unmapped: 20168704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084507 data_alloc: 218103808 data_used: 233472
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:23.320470+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80453632 unmapped: 20168704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:24.320691+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80453632 unmapped: 20168704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:25.320855+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80453632 unmapped: 20168704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc235000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:26.321193+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80453632 unmapped: 20168704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:27.321942+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80453632 unmapped: 20168704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084507 data_alloc: 218103808 data_used: 233472
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:28.322280+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80453632 unmapped: 20168704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:29.322670+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80453632 unmapped: 20168704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:30.322946+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80453632 unmapped: 20168704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc235000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:31.323159+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80453632 unmapped: 20168704 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:32.323391+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 20160512 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084507 data_alloc: 218103808 data_used: 233472
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:33.323598+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 20160512 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:34.323773+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 20160512 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc235000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:35.323945+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 20160512 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:36.324148+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 20160512 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc235000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:37.324437+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 20160512 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084507 data_alloc: 218103808 data_used: 233472
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:38.324602+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 20160512 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc235000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:39.324755+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 20160512 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:40.324918+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 20160512 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc235000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:41.325048+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 20160512 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:42.325162+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 20160512 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084507 data_alloc: 218103808 data_used: 233472
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:43.325321+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 20160512 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:44.325526+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 20160512 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:45.325733+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 20160512 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc235000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:46.325904+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 20160512 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:47.326096+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 20160512 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084507 data_alloc: 218103808 data_used: 233472
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:48.326250+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 20160512 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc235000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:49.326424+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 20160512 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:50.326600+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 20160512 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:51.326797+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 20160512 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:52.326975+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 20160512 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084507 data_alloc: 218103808 data_used: 233472
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc235000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:53.327140+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 20160512 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:54.327295+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 20160512 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc235000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:55.327510+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 20160512 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:56.327674+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 20160512 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:57.327839+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 20160512 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084507 data_alloc: 218103808 data_used: 233472
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:58.327941+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 20160512 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:58:59.328082+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 20160512 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc235000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:00.328205+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 20160512 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:01.328434+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 20160512 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:02.328613+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 20160512 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084507 data_alloc: 218103808 data_used: 233472
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:03.328779+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 20160512 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:04.328980+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 20160512 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc235000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:05.329177+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 20160512 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:06.329316+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 20160512 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc235000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:07.329558+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 20160512 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084507 data_alloc: 218103808 data_used: 233472
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:08.329718+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 20160512 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc235000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:09.329902+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 20160512 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:10.330101+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 20160512 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc235000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:11.330619+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 20160512 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:12.331444+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 20160512 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084507 data_alloc: 218103808 data_used: 233472
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:13.331654+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 20160512 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:14.332249+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 20160512 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:15.332565+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 20160512 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:16.332869+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 20160512 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc235000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:17.333159+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 20160512 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084507 data_alloc: 218103808 data_used: 233472
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:18.333920+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 20160512 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:19.334273+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 20160512 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:20.334978+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 20160512 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:21.335298+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc235000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 20160512 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc235000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:22.335747+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 20160512 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084507 data_alloc: 218103808 data_used: 233472
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:23.336133+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 20160512 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:24.336724+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 20160512 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:25.337317+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 20160512 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:26.337691+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 20160512 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:27.337900+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc235000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 20160512 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084507 data_alloc: 218103808 data_used: 233472
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:28.338109+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 20160512 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:29.338368+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 20160512 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:30.338550+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 20160512 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:31.338683+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc235000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 20160512 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:32.338823+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 20160512 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084507 data_alloc: 218103808 data_used: 233472
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:33.338954+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 20160512 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:34.352209+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 20160512 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:35.352396+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc235000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 20160512 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc235000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:36.352604+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 20160512 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:37.352866+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc235000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80470016 unmapped: 20152320 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084507 data_alloc: 218103808 data_used: 233472
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:38.353070+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80470016 unmapped: 20152320 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:39.353236+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc235000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80470016 unmapped: 20152320 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:40.353442+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80470016 unmapped: 20152320 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:41.353667+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80470016 unmapped: 20152320 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:42.354105+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80470016 unmapped: 20152320 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084507 data_alloc: 218103808 data_used: 233472
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:43.354378+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80470016 unmapped: 20152320 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:44.354536+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80470016 unmapped: 20152320 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:45.355001+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc235000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80470016 unmapped: 20152320 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:46.355393+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80470016 unmapped: 20152320 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:47.355645+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80470016 unmapped: 20152320 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084507 data_alloc: 218103808 data_used: 233472
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:48.355961+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80470016 unmapped: 20152320 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:49.356226+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc235000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80470016 unmapped: 20152320 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:50.356471+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80470016 unmapped: 20152320 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:51.356680+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80470016 unmapped: 20152320 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:52.356902+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80470016 unmapped: 20152320 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084507 data_alloc: 218103808 data_used: 233472
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:53.357085+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80470016 unmapped: 20152320 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:54.357228+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc235000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80470016 unmapped: 20152320 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:55.357442+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc235000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80470016 unmapped: 20152320 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:56.357632+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80478208 unmapped: 20144128 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:57.357825+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80478208 unmapped: 20144128 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084507 data_alloc: 218103808 data_used: 233472
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:58.358045+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80478208 unmapped: 20144128 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:59:59.358226+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80478208 unmapped: 20144128 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T05:00:00.358456+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc235000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80478208 unmapped: 20144128 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T05:00:01.358694+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80478208 unmapped: 20144128 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T05:00:02.358975+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80478208 unmapped: 20144128 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084507 data_alloc: 218103808 data_used: 233472
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T05:00:03.359114+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80478208 unmapped: 20144128 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc235000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T05:00:04.359370+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80478208 unmapped: 20144128 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T05:00:05.359976+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80478208 unmapped: 20144128 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T05:00:06.360110+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80478208 unmapped: 20144128 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T05:00:07.360227+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80478208 unmapped: 20144128 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084507 data_alloc: 218103808 data_used: 233472
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T05:00:08.360425+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc235000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80478208 unmapped: 20144128 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T05:00:09.360600+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80478208 unmapped: 20144128 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T05:00:10.360819+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80478208 unmapped: 20144128 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T05:00:11.361042+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80486400 unmapped: 20135936 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T05:00:12.361299+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80486400 unmapped: 20135936 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084507 data_alloc: 218103808 data_used: 233472
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T05:00:13.361406+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80486400 unmapped: 20135936 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc235000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T05:00:14.361534+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80486400 unmapped: 20135936 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc235000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T05:00:15.361681+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80486400 unmapped: 20135936 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T05:00:16.361856+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80486400 unmapped: 20135936 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T05:00:17.361989+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80486400 unmapped: 20135936 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084507 data_alloc: 218103808 data_used: 233472
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T05:00:18.362090+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc235000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80486400 unmapped: 20135936 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T05:00:19.362191+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80486400 unmapped: 20135936 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T05:00:20.362290+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc235000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80486400 unmapped: 20135936 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T05:00:21.362431+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80486400 unmapped: 20135936 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc235000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T05:00:22.362537+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80486400 unmapped: 20135936 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084507 data_alloc: 218103808 data_used: 233472
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc235000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T05:00:23.362662+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80486400 unmapped: 20135936 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T05:00:24.362773+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80486400 unmapped: 20135936 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T05:00:25.362902+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80486400 unmapped: 20135936 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T05:00:26.363010+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: do_command 'config diff' '{prefix=config diff}'
Oct 11 05:00:59 compute-0 ceph-osd[87458]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 11 05:00:59 compute-0 ceph-osd[87458]: do_command 'config show' '{prefix=config show}'
Oct 11 05:00:59 compute-0 ceph-osd[87458]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 11 05:00:59 compute-0 ceph-osd[87458]: do_command 'counter dump' '{prefix=counter dump}'
Oct 11 05:00:59 compute-0 ceph-osd[87458]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 11 05:00:59 compute-0 ceph-osd[87458]: do_command 'counter schema' '{prefix=counter schema}'
Oct 11 05:00:59 compute-0 ceph-osd[87458]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80936960 unmapped: 19685376 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc235000/0x0/0x4ffc00000, data 0x8fbca3/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T05:00:27.363129+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: prioritycache tune_memory target: 4294967296 mapped: 80805888 unmapped: 19816448 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 05:00:59 compute-0 ceph-osd[87458]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 05:00:59 compute-0 ceph-osd[87458]: bluestore.MempoolThread(0x55660e44db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084507 data_alloc: 218103808 data_used: 233472
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: tick
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_tickets
Oct 11 05:00:59 compute-0 ceph-osd[87458]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T05:00:28.363236+0000)
Oct 11 05:00:59 compute-0 ceph-osd[87458]: do_command 'log dump' '{prefix=log dump}'
Oct 11 05:00:59 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.15183 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 05:00:59 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0) v1
Oct 11 05:00:59 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1704136668' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct 11 05:01:00 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df"} v 0) v1
Oct 11 05:01:00 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2656727545' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct 11 05:01:00 compute-0 ceph-mon[74243]: pgmap v1202: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 05:01:00 compute-0 ceph-mon[74243]: from='client.15183 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 05:01:00 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/1704136668' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct 11 05:01:00 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:01:00 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs dump"} v 0) v1
Oct 11 05:01:00 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3163313698' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct 11 05:01:00 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs ls"} v 0) v1
Oct 11 05:01:00 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1504115269' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Oct 11 05:01:00 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1203: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 05:01:01 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/2656727545' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct 11 05:01:01 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/3163313698' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct 11 05:01:01 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/1504115269' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Oct 11 05:01:01 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.15193 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 05:01:01 compute-0 CROND[288101]: (root) CMD (run-parts /etc/cron.hourly)
Oct 11 05:01:01 compute-0 run-parts[288105]: (/etc/cron.hourly) starting 0anacron
Oct 11 05:01:01 compute-0 run-parts[288112]: (/etc/cron.hourly) finished 0anacron
Oct 11 05:01:01 compute-0 CROND[288099]: (root) CMDEND (run-parts /etc/cron.hourly)
Oct 11 05:01:01 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds stat"} v 0) v1
Oct 11 05:01:01 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2868649656' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Oct 11 05:01:01 compute-0 systemd[1]: Starting Hostname Service...
Oct 11 05:01:01 compute-0 systemd[1]: Started Hostname Service.
Oct 11 05:01:02 compute-0 ceph-mon[74243]: pgmap v1203: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 05:01:02 compute-0 ceph-mon[74243]: from='client.15193 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 05:01:02 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/2868649656' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Oct 11 05:01:02 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump"} v 0) v1
Oct 11 05:01:02 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1317421398' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Oct 11 05:01:02 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.15199 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 05:01:02 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd blocklist ls"} v 0) v1
Oct 11 05:01:02 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1075725510' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Oct 11 05:01:02 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1204: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 05:01:03 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 05:01:03 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4215877554' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 05:01:03 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 05:01:03 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4215877554' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 05:01:03 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/1317421398' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Oct 11 05:01:03 compute-0 ceph-mon[74243]: from='client.15199 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 05:01:03 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/1075725510' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Oct 11 05:01:03 compute-0 ceph-mon[74243]: from='client.? 192.168.122.10:0/4215877554' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 05:01:03 compute-0 ceph-mon[74243]: from='client.? 192.168.122.10:0/4215877554' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 05:01:03 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.15204 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 05:01:03 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.15209 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 05:01:03 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd dump"} v 0) v1
Oct 11 05:01:03 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3928019310' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Oct 11 05:01:04 compute-0 ceph-mon[74243]: pgmap v1204: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 05:01:04 compute-0 ceph-mon[74243]: from='client.15204 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 05:01:04 compute-0 ceph-mon[74243]: from='client.15209 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 05:01:04 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/3928019310' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Oct 11 05:01:04 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd numa-status"} v 0) v1
Oct 11 05:01:04 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3654424337' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Oct 11 05:01:04 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.15215 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 05:01:04 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.15217 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 05:01:04 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:01:04 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 05:01:04 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:01:04 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 11 05:01:04 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:01:04 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 11 05:01:04 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:01:04 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:01:04 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:01:04 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 11 05:01:04 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:01:04 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 11 05:01:04 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:01:04 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:01:04 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:01:04 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 05:01:04 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:01:04 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 05:01:04 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:01:04 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:01:04 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:01:04 compute-0 ceph-mgr[74542]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 05:01:04 compute-0 ceph-mgr[74542]: log_channel(cluster) log [DBG] : pgmap v1205: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Oct 11 05:01:05 compute-0 ceph-mon[74243]: from='client.? 192.168.122.100:0/3654424337' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Oct 11 05:01:05 compute-0 ceph-mon[74243]: from='client.15215 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 05:01:05 compute-0 ceph-mon[74243]: from='client.15217 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 05:01:05 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0) v1
Oct 11 05:01:05 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1367518265' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Oct 11 05:01:05 compute-0 ceph-mon[74243]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:01:05 compute-0 ceph-mon[74243]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd stat"} v 0) v1
Oct 11 05:01:05 compute-0 ceph-mon[74243]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2653882532' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Oct 11 05:01:05 compute-0 sudo[288704]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 05:01:05 compute-0 sudo[288704]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 05:01:05 compute-0 sudo[288704]: pam_unix(sudo:session): session closed for user root
Oct 11 05:01:05 compute-0 sudo[288761]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 05:01:05 compute-0 sudo[288761]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 05:01:05 compute-0 sudo[288761]: pam_unix(sudo:session): session closed for user root
Oct 11 05:01:05 compute-0 ceph-mgr[74542]: log_channel(audit) log [DBG] : from='client.15223 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
